TABLE OF CONTENTS
ABSTRACT
TABLE OF CONTENTS
LIST OF FIGURES
CHAPTER 1
INTRODUCTION
1.1 What is Python
1.2 Advantages of Python
1.3 Disadvantages of Python
1.4 Install Python on Windows Step by Step
1.5 Installation of Python
1.6 Verify the Python Installation
1.7 Check how the Python IDLE works
1.8 What is VS Code
1.9 Steps to Install Visual Studio Code on Windows
CHAPTER 2
SYSTEM ANALYSIS
2.1 Existing System
2.1.1 Drawbacks
2.2 Proposed System
2.2.1 Advantages
2.3 Modules
2.4 Feasibility Study
2.4.1 Technical Feasibility
2.4.2 Financial Feasibility
2.4.3 Market Feasibility
2.4.4 Operational Feasibility
2.4.5 Legal and Ethical Feasibility
CHAPTER 3
REQUIREMENT ANALYSIS
3.1 Software Requirements
3.2 Functional Requirements
3.3 Hardware Requirements
CHAPTER 4
SOFTWARE DESIGN
4.1 Data Flow Diagram
4.2 Use Case Diagram
4.3 Class Diagram
4.4 Sequence Diagram
4.5 Activity Diagram
4.6 State Machine Diagram
CHAPTER 5
IMPLEMENTATION
5.1 Image Processing
5.2 Source Code
5.2.1 drowsiness-detection.ipynb
5.2.2 main.py
5.2.3 activate.bat
5.2.4 deactivate.bat
CHAPTER 6
TESTING
6.1 System Test
6.2 Types of Tests
6.2.1 Unit Testing
6.2.2 Integration Testing
6.2.3 Functional Testing
6.2.4 White Box Testing
6.2.5 Black Box Testing
6.3 Test Strategy and Approach
6.3.1 Test Objectives
6.3.2 Features to be Tested
6.4 Acceptance Testing
CHAPTER 7
OUTPUT
CHAPTER 8
CONCLUSION & FUTURE ENHANCEMENT
BIBLIOGRAPHY
CHAPTER 1
INTRODUCTION
In today's fast-paced world, long-distance driving has become a routine aspect
of modern life, whether for business or leisure. However, with the convenience of travel
comes the inherent risk of driver fatigue, a leading cause of road accidents worldwide.
According to experts, drowsy driving contributes to a significant portion of serious
motorway accidents, surpassing even the dangers posed by drink-driving. Recognizing
the urgency of addressing this issue, automotive manufacturers and researchers have
been exploring innovative solutions to detect and mitigate driver drowsiness in real-
time.
The Driver Drowsiness Detection System Using Image Processing project aims
to contribute to this vital area of road safety by harnessing the power of image
processing techniques to monitor driver behavior and identify signs of fatigue. Inspired
by existing technologies like Attention Assist found in modern vehicles, this project
seeks to develop a robust and efficient system capable of accurately detecting
drowsiness and alerting drivers before potential accidents occur.
The rationale behind this project is rooted in the understanding that human
factors play a crucial role in road safety. While technological advancements have led to
significant improvements in vehicle safety features, such as airbags and anti-lock
braking systems, the human element remains a critical factor in preventing accidents.
Recognizing the limitations of solely relying on drivers to recognize their own fatigue,
the Driver Drowsiness Detection System aims to provide an additional layer of
protection by actively monitoring driver behavior and providing timely warnings when
signs of drowsiness are detected.
The significance of this project lies in its potential to save lives and reduce the
number of accidents caused by driver fatigue. By leveraging image processing
techniques and advanced algorithms, the Driver Drowsiness Detection System
represents a proactive step towards enhancing road safety and protecting drivers and
passengers from the dangers of drowsy driving. Through continuous research and
development, this project aims to contribute to the ongoing efforts to create safer and
more sustainable transportation systems for the future.
These are some facts about Python. Python is at present the most widely used multi-purpose, high-level programming language. It allows programming in both object-oriented and procedural paradigms. Python programs are commonly smaller than equivalent programs in languages like Java: programmers have to write relatively little, and the indentation requirement of the language keeps the code readable. Python is used by almost all the tech giants, such as Google, Amazon, Facebook, Instagram, Dropbox and Uber. The biggest strength of the Python language is its huge collection of standard libraries, which can be used for fields such as machine learning.
1) Less Coding
Tasks that require more code in other languages can be performed in Python with less coding. Python also has excellent standard library support, so there is often no need to search for third-party libraries to get the job done. This is the main reason many people recommend that beginners learn Python.
2) Affordable
Python is free and open source, so it can be downloaded, used and distributed at no cost.

Disadvantages of Python
1. Speed Limitations
We have seen that Python code executes line by line. Because Python is interpreted, its execution speed is relatively slow. This is not a problem unless speed is the focus of the project; in most cases, the benefits Python provides are enough to outweigh its speed limitations.
3. Design Restrictions
Python is a dynamically typed language, which means you do not need to declare variable types when writing code. It uses duck typing. What is that? It just means that if it looks like a duck, it is treated as a duck. This makes coding easier for programmers, but it can also give rise to runtime errors that a statically typed language would catch earlier.
4. Underdeveloped Database Access Layers
Compared to the most widely used technologies such as JDBC (Java Database Connectivity) and ODBC (Open Database Connectivity), Python's database access layer is a bit underdeveloped, which is one reason it is used less for this purpose in large companies.
5. Simple
We are not kidding: Python's simplicity is indeed a problem. To a programmer used to Python, the syntax is so simple that the verbosity of a language like Java can seem unnecessary, which makes switching to other languages harder.
Step-1: Go to the official site using Google Chrome or any other web browser to download and install Python, or click on the following link:
https://www.python.org/
Step-2: Check for the latest version that is correct for your operating system.
Step-3: You can select the yellow "Download Python 3.7.4" button, or you can scroll down and click the download link of the corresponding version. Here, we are downloading the latest version of Python for Windows, 3.7.4.
Step-4: Scroll down the page until you find the "Files" option.
Step-5: Here you will see different versions of Python for different operating systems.
To download 32-bit Python for Windows, you can select the Windows x86 embeddable zip file, the Windows x86 executable installer, or the Windows x86 web-based installer.
To download 64-bit Python for Windows, you can select any of the three corresponding options: the Windows x86-64 embeddable zip file, the Windows x86-64 executable installer, or the Windows x86-64 web-based installer.
Step-1: Go to Downloads and open the downloaded Python installer to carry out the installation process.
Step-2: Before you click on Install Now, make sure to put a tick on "Add Python 3.7 to PATH".
Step-3: Click on Install Now. After the installation is successful, click on Close.
With the above three steps, you have successfully and correctly installed Python. Now is the time to verify the installation.
Step-4: Let us test whether Python is correctly installed. Open a command prompt, type python -V and press Enter.
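The check in Step-4 can also be scripted from inside Python itself; the snippet below is a minimal sketch and assumes Python was added to PATH as in Step-2.

```python
# Verify the installation from inside Python.
# (At a Command Prompt you would simply run:  python -V)
import sys

print(sys.version)      # full version string, e.g. "3.7.4 (tags/v3.7.4, ...)"
print(sys.executable)   # path of the interpreter that is actually running
assert sys.version_info.major == 3, "expected a Python 3 installation"
```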
Note: If you have any earlier version of Python already installed, you must first uninstall the earlier version and then install the new one.
Step-5: Name the file, set "Save as type" to Python files, and click on SAVE.
Visual Studio Code is a highly popular code editor provided by Microsoft for writing programs in many different languages. It allows users to develop new code bases for their applications and to optimize them successfully. Because of its popularity, many individuals choose to install Visual Studio Code on Windows over any other IDE. Installing Visual Studio Code on Windows is not difficult: the process starts with downloading the Visual Studio Code EXE file and then following some on-screen instructions. Below, we list all the steps required to install VS Code on Windows in a detailed format.
Step 1: Visit the Official Website of the Visual Studio Code using any web browser like
Google Chrome, Microsoft Edge, etc.
Step 2: Press the “Download for Windows” button on the website to start the download
of the Visual Studio Code Application.
Step 3: When the download finishes, then the Visual Studio Code Icon appears in the
downloads folder.
Step 4: Click on the Installer icon to start the installation process of the Visual Studio
Code.
Step 5: After the Installer opens, it will ask you to accept the terms and conditions of
the Visual Studio Code. Click on I accept the agreement and then click the Next button.
Step 6: Choose the installation location for Visual Studio Code. It will ask you to browse for a location; then click on the Next button.
Step 7: Then it will ask to begin the installation setup. Click on the Install button.
Step 8: After clicking on Install, it will take about 1 minute to install the Visual Studio
Code on your device.
Step 9: After the Installation setup for Visual Studio Code is finished, it will show a
window like this below. Tick the “Launch Visual Studio Code” checkbox and then click
Next.
Step 10: After the previous step, the Visual Studio Code window opens successfully. Now you can create a new file in the Visual Studio Code window and choose a language of your choice to begin your programming journey!
CHAPTER 2
SYSTEM ANALYSIS
2.1 Existing System
2.1.1 Drawbacks
3. False Alarms: Due to the reliance on physiological signals, existing systems may
produce false alarms when drivers engage in non-fatigue-related behaviors that
mimic drowsiness. For instance, a driver may exhibit erratic steering behavior due
to road conditions or distractions, leading to a false drowsiness alert.
5. Limited Scope: Many existing systems focus solely on detecting drowsiness based
on physiological signals and may overlook other important indicators such as facial
expressions and body posture. This narrow focus may limit the system's
effectiveness in accurately identifying drowsiness in all drivers and situations.
The proposed system for the Driver Drowsiness Detection project aims to
overcome the limitations of existing systems by incorporating image processing
techniques to enhance the accuracy and reliability of drowsiness detection. The system
will utilize a combination of facial recognition, eye tracking, and other visual cues to
monitor driver behavior and identify signs of fatigue in real-time.
1. Facial Recognition: The system will analyze facial expressions, including changes
in facial muscle movements and eye closure patterns, to detect signs of drowsiness.
By leveraging facial recognition algorithms, the system can accurately identify
subtle changes indicative of fatigue, such as drooping eyelids and yawning.
2. Eye Tracking: Eye tracking technology will be employed to monitor the driver's
gaze patterns and detect instances of prolonged eye closure or erratic eye
movements, which are common indicators of drowsiness. By continuously tracking
the driver's eye movements, the system can detect deviations from normal behavior
and issue timely warnings when signs of fatigue are detected.
3. Other Visual Cues: In addition to facial recognition and eye tracking, the system
will also analyze other visual cues such as head position, body posture, and hand
movements to further enhance drowsiness detection accuracy. By integrating
multiple visual cues, the system can provide a more comprehensive assessment of the driver's state of alertness.
5. Real-time Feedback: Upon detecting signs of drowsiness, the system will provide
real-time feedback to the driver through visual and auditory alerts, prompting them
to take corrective action such as taking a break or switching drivers. Additionally,
the system may integrate with the vehicle's navigation system to suggest nearby
service areas where the driver can safely stop and rest.
2.2.1 Advantages
5. Integration with Navigation Systems: The proposed system may integrate with
the vehicle's navigation system to suggest nearby service areas where drivers can
safely stop and rest. This integration enhances the system's usability and encourages
drivers to prioritize their well-being during long journeys.
2.3 Modules
This module focuses on obtaining images or video feed from the onboard camera
or any other input source within the vehicle. The acquired images may contain various
environmental factors such as varying lighting conditions, occlusions, and noise.
Preprocessing techniques are employed to enhance the quality and clarity of the images,
which is crucial for accurate drowsiness detection. Common preprocessing steps
include noise reduction, contrast enhancement, and normalization. Additionally, image
registration techniques may be applied to compensate for any movement or vibration of
the camera. This module ensures that the input images are optimized for subsequent
processing stages.
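As an illustration of the preprocessing steps named above (contrast normalization and noise reduction), here is a minimal NumPy sketch; the specific filters and parameters are assumptions for demonstration, not the exact pipeline used by the system.

```python
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Grayscale conversion, contrast normalization, and a 3x3 mean blur."""
    gray = frame.astype(np.float64) @ np.array([0.299, 0.587, 0.114])  # luminance
    lo, hi = gray.min(), gray.max()
    gray = (gray - lo) / (hi - lo + 1e-9)        # stretch contrast to [0, 1]
    padded = np.pad(gray, 1, mode="edge")        # pad so the blur keeps the shape
    h, w = gray.shape
    # 3x3 mean blur for noise reduction: average the nine shifted copies
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    return blurred

frame = (np.random.rand(48, 64, 3) * 255).astype(np.uint8)  # stand-in camera frame
out = preprocess(frame)
print(out.shape)  # same spatial size as the input, single channel
```

In a real deployment the stand-in random frame would be replaced by frames grabbed from the onboard camera.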
In this module, relevant features are extracted from the preprocessed images to
characterize facial expressions and movements indicative of drowsiness. These features
may include eye closure duration, head pose, blink rate, and facial landmarks such as
mouth curvature. Feature extraction techniques like Haar cascades, HOG (Histogram of
Oriented Gradients), and deep learning-based methods such as Convolutional Neural
Networks (CNNs) are employed to capture discriminative information from the images.
Furthermore, feature selection algorithms are utilized to reduce dimensionality and
remove redundant or irrelevant features, improving computational efficiency and model
performance.
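One widely used hand-crafted feature of the kind described here is the eye aspect ratio (EAR), which summarizes eye closure from six eye landmarks. The landmark ordering below follows the common 68-point facial-landmark convention; treating EAR as the project's feature is an assumption for illustration.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks; p0/p3 are the horizontal corners,
    p1, p2 are on the upper lid and p5, p4 on the lower lid (assumed ordering)."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance, inner pair
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance, outer pair
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal corner-to-corner distance
    return float((v1 + v2) / (2.0 * h))

open_eye = np.array([[0, 0], [1, 2], [2, 2], [3, 0], [2, -2], [1, -2]], float)
closed_eye = np.array([[0, 0], [1, 0.2], [2, 0.2], [3, 0], [2, -0.2], [1, -0.2]], float)
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```

The ratio drops sharply as the lids close, so a simple threshold on EAR gives a per-frame open/closed decision.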
This module implements the core algorithm responsible for detecting driver drowsiness
based on the extracted features. Various machine learning and deep learning techniques
such as Support Vector Machines (SVM), Random Forests, Recurrent Neural Networks
(RNNs), or Long Short-Term Memory (LSTM) networks can be employed for
classification. The algorithm analyzes the feature vectors derived from the input images
in real-time and determines the driver's drowsiness state. Thresholding techniques or
probabilistic models may be utilized to establish criteria for identifying drowsiness,
triggering appropriate alerts or interventions when necessary.
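A minimal sketch of the classification stage, assuming a linear decision function with illustrative (not learned) weights; a production system would substitute the trained SVM, RNN or LSTM described above.

```python
import math

class DrowsinessClassifier:
    """Stand-in for the trained classifier; weights and threshold are illustrative."""
    def __init__(self, weights, bias, threshold=0.5):
        self.weights, self.bias, self.threshold = weights, bias, threshold

    def score(self, features):
        # linear decision function squashed to a pseudo-probability
        z = sum(w * f for w, f in zip(self.weights, features)) + self.bias
        return 1.0 / (1.0 + math.exp(-z))

    def is_drowsy(self, features):
        return self.score(features) >= self.threshold

# assumed feature vector: [eye-closure fraction, blink-rate deviation, head-nod score]
clf = DrowsinessClassifier(weights=[3.0, 1.0, 2.0], bias=-2.0)
print(clf.is_drowsy([0.9, 0.5, 0.6]))  # heavily closed eyes -> drowsy
print(clf.is_drowsy([0.1, 0.0, 0.0]))  # alert driver -> not drowsy
```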
The final module involves the development of a user-friendly interface to interact with
the drowsiness detection system and integration with the vehicle's alert system. The
user interface provides real-time feedback to the driver, displaying the current
drowsiness level and relevant alerts. Alerts may include auditory, visual, or haptic cues
to grab the driver's attention and prompt them to take corrective actions, such as taking
a break or pulling over safely. Integration with the vehicle's alert system ensures
seamless communication and coordination with existing safety mechanisms, enhancing
overall driver safety and reducing the risk of accidents due to drowsy driving.
2. Return on Investment (ROI): Assess the potential benefits of the system, such
as reduced accidents and associated costs (e.g., medical expenses, property
damage), improved driver safety, and potential revenue streams (e.g., licensing
the technology to automotive manufacturers).
CHAPTER 3
REQUIREMENT ANALYSIS
3.1 Software Requirements
For developing the application, the following are the requirements:
1. Software Requirements: Python, Visual Studio Code
2. Operating Systems supported: Windows 10 and above
CHAPTER 4
SOFTWARE DESIGN
4.1 Data Flow Diagram
A data flow diagram (DFD) is a visual representation of how data moves through a
system, using standardized symbols and notation. DFDs can be simple overviews or
complex representations with multiple levels. The most common and intuitive DFDs
are level 0 DFDs, also called context diagrams.
A DFD can include:
Level 0 DFD
Also called a context diagram, this is a basic overview of the system or process being
modeled. It shows the system as a single high-level process, with its relationship to
external entities.
Level 1 DFD
This represents the main functions of the system and how they interact with each other.
Level 2 DFD
This represents the processes within each function of the system and how they interact
with each other.
Level 3 DFD
This represents the data flow within each process and how the data is transformed and stored.
DFDs are commonly used in software engineering to model the structure and behavior
of systems, aiding in the understanding, analysis, and communication of
complex processes.
1. External Entities:
Driver: Represents the user of the system who interacts with the drowsiness detection
system.
2. Processes:
Image Processing: This process involves preprocessing the captured images or sensor
data to enhance quality and extract relevant features related to drowsiness detection.
Alert Generation: Upon detecting drowsiness, this process generates alerts to notify
the driver, typically through visual, auditory, or haptic feedback.
3. Data Stores:
Drowsiness Data: Represents the data related to drowsiness detection, including
preprocessed image data, extracted features, and detection results.
Alert Log: Stores records of alerts generated by the system, including timestamps and
details of the detected drowsiness events.
4. Data Flows:
Image/Sensor Data: Represents the flow of raw image data or sensor readings from
the camera/input sensor to the Image Processing process.
Alert Signal: Represents the signal generated by the Alert Generation process to notify
the driver of detected drowsiness.
A data flow diagram (DFD) is a graphical representation of the flow of data within a
system. It typically consists of processes, data stores, data flows, and external entities.
The processes represent the transformations or operations performed on the data, data
stores are repositories where data is stored, data flows depict the movement of data
between processes and data stores, and external entities are sources or destinations of
data that interact with the system.
UML Diagrams:
A Unified Modeling Language (UML) diagram for the driver drowsiness detection
system project can provide a visual representation of the system's structure and
behavior. Here's an explanation of the key components of a UML diagram for this
project:
Class diagrams are the blueprints of your system or subsystem. You can use class diagrams to model the objects that make up the system, to display the relationships between the objects, and to describe what those objects do and the services that they provide. Class diagrams are useful in many stages of system design. A class diagram is a type of diagram in the Unified Modeling Language (UML) that represents the structure of a system by showing the classes of the system, their attributes, methods, and the relationships and source code dependencies among them.
In a class diagram:
Classes are depicted as rectangles with three compartments: the top compartment
contains the class name, the middle compartment contains the attributes (data members)
of the class, and the bottom compartment contains the methods (member functions) of
the class.
Dependencies represent a relationship where a change in one class may affect another
class. They are typically represented by a dashed line with an arrow indicating the
direction of the dependency.
Class diagrams are widely used in software development for designing object-oriented
systems, providing a visual representation of the system's structure and helping to
communicate design decisions among stakeholders.
The behavior modeled in these diagrams covers capturing an image from the camera, processing the image, analyzing it for signs of drowsiness, and triggering an alert if necessary.
A sequence diagram is a Unified Modeling Language (UML) diagram that illustrates
the order of messages sent between objects in an interaction. It uses parallel vertical
lines (lifelines) to represent objects or processes, and horizontal arrows to represent
messages exchanged between them. Sequence diagrams are used by software
developers and business professionals to: Understand requirements for a new system,
Document an existing process, Design new systems, Document how objects in an
existing system work, and Communicate how a business works.
Activity diagrams show the control flow from step to step, i.e., from activity to activity.
An activity shows a set of actions, the sequential or branching control flow, and values
that are produced or consumed by actions. Activity diagrams can also include elements
showing the flow of data between activities through one or more data stores.
Activity diagrams are often used in business process modeling. They can also describe
the steps in a use case diagram. An activity diagram is a type of behavioral diagram in
the Unified Modeling Language (UML) that represents the flow of control or the
sequence of activities in a system or process. It visually depicts the steps and actions
involved in completing a specific task or achieving a particular goal within a system.
In an activity diagram:
Activities are represented as rounded rectangles and are connected by arrows to show
the flow of control from one activity to another. Decision points, represented by
diamonds, indicate branching in the flow of control based on conditions or decisions.
Control flow arrows indicate the sequence in which activities are performed, including
loops, concurrent activities, and parallel flows. Swimlanes can be used to represent
different actors, roles, or system components involved in the activities. Activity
diagrams are used to model business processes, software workflows, and system
behaviors, helping to visualize and understand the dynamic aspects of a system or
process. They are particularly useful for capturing complex logic and interactions
between different components or actors within a system.
CHAPTER 5
IMPLEMENTATION
5.1 Image processing
Localization of Face: Since the face is symmetric, we use a symmetry-based approach. We found that it is enough to use a subsampled, gray-scale version of the image. A symmetry value is then computed for every pixel-column in the reduced image. If the image is represented as I(x, y), the symmetry value for a pixel-column x is given by

S(x) = Σ_y Σ_{w=1..k} | I(x − w, y) − I(x + w, y) |

S(x) is computed for x ∈ [k, size − k], where k is the maximum distance from the pixel-column at which symmetry is measured and size is the width of the image. The x corresponding to the lowest value of S(x) is the center of the face.
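The symmetry computation above can be expressed directly in code. This sketch assumes an 8-bit grayscale image; the synthetic input is for illustration only.

```python
import numpy as np

def face_center_column(img: np.ndarray, k: int) -> int:
    """Return the x minimizing S(x) = sum_y sum_{w=1..k} |I(x-w, y) - I(x+w, y)|."""
    img = img.astype(np.int64)  # avoid uint8 wrap-around when subtracting
    height, width = img.shape
    best_x, best_s = k, float("inf")
    for x in range(k, width - k):
        s = sum(np.abs(img[:, x - w] - img[:, x + w]).sum() for w in range(1, k + 1))
        if s < best_s:
            best_x, best_s = x, s
    return best_x

# synthetic test: a bright vertical band (a stand-in "face") centred at column 20
img = np.zeros((30, 40), dtype=np.uint8)
img[:, 15:26] = 200  # columns 15..25, symmetric about x = 20
print(face_center_column(img, k=10))  # -> 20
```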
Fig 0.23 The original image, the edges and the histogram of projected edges.
Location of Eyes: A raster-scan algorithm is used to find the exact location of the eyes and to extract their vertical position.
Tracking of the eyes: We track the eye by looking for the darkest pixel in the
predicted region. In order to recover from tracking errors, we make sure that none
of the geometrical constraints are violated. If they are, we delocalize the eyes in
the next frame. To find the best match for the eye template, we initially center it at
the darkest pixel, and then perform a gradient descent in order to find a local
minimum.
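The darkest-pixel search that seeds the template match can be sketched as follows; the image and region coordinates are illustrative.

```python
import numpy as np

def darkest_pixel(gray: np.ndarray, region: tuple) -> tuple:
    """Find the darkest pixel inside a predicted (y0, y1, x0, x1) search region;
    it serves as the starting point for centring the eye template."""
    y0, y1, x0, x1 = region
    window = gray[y0:y1, x0:x1]
    dy, dx = np.unravel_index(np.argmin(window), window.shape)
    return (y0 + dy, x0 + dx)

gray = np.full((20, 20), 255, dtype=np.uint8)
gray[7, 12] = 5  # the pupil: by far the darkest spot in the frame
print(darkest_pixel(gray, (5, 15, 10, 18)))  # -> (7, 12)
```

A gradient descent from this point, as described above, would then refine the template position to a local minimum of the matching error.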
Detection of Drowsiness: As the driver becomes more fatigued, we expect the eye-
blinks to last longer. We count the number of consecutive frames that the eyes are
closed in order to decide the condition of the driver. For this, we need a robust way to
determine if the eyes are open or closed; so we used a method that looks at the
horizontal histogram across the pupil.
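The consecutive-closed-frame rule described above can be sketched in a few lines; the frame limit is an assumed, tunable parameter that depends on the camera's frame rate.

```python
class BlinkMonitor:
    """Counts consecutive closed-eye frames; flags drowsiness when the run
    reaches `limit` frames (an assumed, tunable threshold)."""
    def __init__(self, limit: int = 15):
        self.limit = limit
        self.closed_run = 0

    def update(self, eyes_closed: bool) -> bool:
        """Feed one frame's open/closed decision; returns True when drowsy."""
        self.closed_run = self.closed_run + 1 if eyes_closed else 0
        return self.closed_run >= self.limit

monitor = BlinkMonitor(limit=3)
frames = [False, True, True, True, True, False]
alerts = [monitor.update(c) for c in frames]
print(alerts)  # the alert fires on the 3rd consecutive closed frame
```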
EXPERIMENTAL SET-UP
The final system consists of a camera pointing at the driver. The camera is to be
mounted on the dashboard inside the vehicle. For the system we are developing, the
camera is stationary and will not adjust its position or zoom during operation. For
experimentation, we are using a webcam. The grabbed frames are represented in RGB-
space with 8-bit pixels (256 colors). We will not be using any specialized hardware for
image processing. Given below is the block diagram
The function of the system can be broadly divided into eye detection function,
comprising the first half of the preprocessing routine, and a drowsiness detection
function, comprising the second half.
After inputting a facial image, preprocessing is performed to binarize the image and remove noise, which makes it possible for the image to be accepted by the image processor. The maximum width of the face is then detected so that the right and left edges of the face can be identified.
edges of the face can be identified. After that the vertical position of each eye is
detected independently within an area defined by the center line of the face width and
lines running through the outermost points of the face. On that basis, the area in which
each eye is present is determined. Once the areas of eye presence have been defined,
they can be updated by tracking the movement of the eyes. The degree of eye openness
is output simultaneously with the establishment or updating of the areas of eye
presence. That value is used in judging whether the eyes are open or closed and also in
judging whether the eyes have been detected correctly or not. If the system judges that
the eyes have not been detected correctly, the routine returns to the detection of the
entire face.
The following explains the eye detection procedure in the order of the processing
operations.
a) Preprocessing
The preprocessing operations include binarization of the facial image, which increases
processing speed and conserves memory, and noise removal. The image processor
developed for this drowsiness warning system performs an expansion and contraction
operation on the white pixels, and noise removal is performed on the small black-pixel
regions of the facial images.
After the binarization, the noise removal procedure involves an expansion processing
method combined with the use of a median filter. These preprocessing operations are
sufficient to support detection of the vertical positions of the eyes.
However, following identification of the eye positions, the size of the eyes must be
converted back to the original image format at the time the degree of eye openness is
output. To facilitate that, data contraction is performed in the latter stage of
preprocessing.
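The binarization and expansion/contraction steps can be sketched as follows. This is a minimal stand-in, not the report's image processor: the threshold and the 3x3 neighborhood are illustrative, and a production system would typically rely on OpenCV's threshold, dilate/erode and median-filter routines instead:

```python
import numpy as np

def binarize(gray, thresh=128):
    # Dark facial features become foreground (white = 1) pixels.
    return (gray < thresh).astype(np.uint8)

def expand(img):
    # 3x3 expansion of the white pixels (one dilation step).
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def contract(img):
    # 3x3 contraction of the white pixels, the dual of expansion.
    return 1 - expand(1 - img)

# Removing small white noise specks: contract first, then expand.
noisy = np.zeros((7, 7), dtype=np.uint8)
noisy[2:5, 2:5] = 1   # a solid 3x3 facial feature
noisy[0, 6] = 1       # an isolated noise pixel
cleaned = expand(contract(noisy))
```

After the contract/expand pass, the isolated pixel is gone while the solid 3x3 feature survives intact, which is the noise-removal behavior the paragraph describes.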
b) Face width detection
The maximum width of the driver’s face must be detected in order to determine the
lateral positions of the areas in which the eyes are present. Face width is detected by
judging the continuity of white pixels and the pattern of change in pixel number. On
that basis, the outer edges of the face are recognized and determined.
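A minimal sketch of this continuity check (the function names are our own, and the real system additionally examines the pattern of change in pixel number across rows) finds the longest continuous run of white pixels in each row of the binarized face and takes the widest run's endpoints as the face edges:

```python
import numpy as np

def longest_run(row):
    # Return (length, start) of the longest run of consecutive white pixels.
    best_len = best_start = 0
    run_len = run_start = 0
    for c, v in enumerate(row):
        if v:
            if run_len == 0:
                run_start = c
            run_len += 1
            if run_len > best_len:
                best_len, best_start = run_len, run_start
        else:
            run_len = 0
    return best_len, best_start

def face_width(binary_face):
    # Scan every row; the widest white run gives the left/right face edges.
    best = (0, 0, 0, 0)  # (width, row, left, right)
    for r, row in enumerate(binary_face):
        length, start = longest_run(row)
        if length > best[0]:
            best = (length, r, start, start + length - 1)
    return best

face = np.zeros((5, 10), dtype=int)
face[1, 3:6] = 1        # forehead: a narrow run
face[2, 1:9] = 1        # widest part of the face
width, row, left, right = face_width(face)
```

On this toy image the widest run is found in row 2, spanning columns 1 to 8, so the detected face width is 8 pixels.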
c) Eye tracking
A function for tracking the positions of the eyes is an important capability for
achieving high-speed processing because it eliminates the need to process every
frame in order to detect each eye position from the entire facial image.
This function consists of subordinate routines for updating the areas of eye presence and for
recognizing when tracking becomes impossible. The basic concept of eye tracking is
to update the area of eye presence, in which an eye search is made in the following
frame, according to the central coordinates of the eye in the previous frame. The
updating process involves defining an area of eye presence on the basis of the
coordinates (xk, yk) at the point of intersection of the center lines running through the
Feret's diameter of the detected eye. That area becomes the area of eye presence
in which the system searches for the eye in the image data of the next frame.
This process of using information on eye position to define the eye position for
obtaining the next facial image data makes it possible to track the position of the eye.
As is clear from this description, the size of the area of eye presence changes. If the
eyes are tracked correctly, their degree of openness will always vary within certain
specified range for each individual driver. Consequently, if the value found by the
system falls outside the range, it judges that the eyes are not being tracked correctly.
The process of detecting the position of each eye from the entire facial image is then
executed once more.
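The window-update and failure-check logic above can be sketched in a few lines. The half-window sizes and openness bounds here are illustrative placeholders; the real system derives the area from the Feret's diameter of the detected eye and learns each driver's openness range:

```python
def update_search_area(center, frame_shape, half_w=20, half_h=12):
    """Define the next frame's area of eye presence around the previous
    eye centre (xk, yk), clamped to the frame boundaries."""
    xk, yk = center
    height, width = frame_shape
    left = max(0, xk - half_w)
    top = max(0, yk - half_h)
    right = min(width, xk + half_w)
    bottom = min(height, yk + half_h)
    return left, top, right, bottom

def tracking_ok(openness, lo, hi):
    # If the measured openness leaves the driver's usual range, tracking
    # has failed and detection from the entire face must be rerun.
    return lo <= openness <= hi
```

For example, an eye centred at (10, 5) near the frame corner yields a window clipped at the image border, and an openness value outside the driver's range triggers re-detection.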
A low value for M(a1, a2) corresponds to a good match. The template is matched
across the predicted eye region, and the best match is reported. We track the eye by
looking for the darkest pixel in the predicted region.
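Interpreting M(a1, a2) as a sum-of-squared-differences score (one common choice; the report does not spell out the exact formula, and OpenCV's cv2.matchTemplate with the TM_SQDIFF method does the same job far faster), an exhaustive match over the predicted region looks like this:

```python
import numpy as np

def match_score(region, template):
    # Sum of squared differences: a low score means a good match.
    return float(np.sum((region.astype(float) - template.astype(float)) ** 2))

def best_match(image, template):
    # Slide the template over every position and keep the lowest score.
    th, tw = template.shape
    best_score, best_pos = None, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            score = match_score(image[y:y + th, x:x + tw], template)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score

def darkest_pixel(region):
    # Track the eye as the darkest (pupil) pixel in the search region.
    y, x = np.unravel_index(np.argmin(region), region.shape)
    return int(x), int(y)

image = np.arange(25, dtype=float).reshape(5, 5)
pos, score = best_match(image, image[1:3, 2:4])  # template cut from (x=2, y=1)
```

Because the template was cut from position (2, 1), the search recovers exactly that position with a score of zero, and the darkest pixel of this gradient image is its top-left corner.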
5.3.2 main.py
from roboflow import Roboflow
import supervision as sv
import cv2
import pygame

# Load the trained Roboflow model. The API key and the workspace,
# project and version names below are placeholders; substitute your own.
rf = Roboflow(api_key="YOUR_API_KEY")
model = rf.workspace().project("drowsiness-detection").version(1).model

label_annotator = sv.LabelAnnotator()
bounding_box_annotator = sv.BoxAnnotator()

cap = cv2.VideoCapture(0)
pygame.mixer.init()
alert_sound = pygame.mixer.Sound("warning.mp3")

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Run inference on the current frame and wrap the result for annotation.
    result = model.predict(frame, confidence=40, overlap=30).json()
    detections = sv.Detections.from_roboflow(result)
    labels = [prediction["class"] for prediction in result["predictions"]]
    annotated_frame = bounding_box_annotator.annotate(
        scene=frame, detections=detections)
    annotated_frame = label_annotator.annotate(
        scene=annotated_frame, detections=detections, labels=labels)
    # Play the warning sound whenever a drowsy state is detected.
    # ("drowsy" is assumed here; use the class name your model was trained with.)
    if "drowsy" in labels:
        alert_sound.play()
    cv2.imshow("Drowsiness Detection", annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
5.3.3 activate.bat
@echo off
rem This file is UTF-8 encoded, so we need to update the current code page while executing it
for /f "tokens=2 delims=:." %%a in ('"%SystemRoot%\System32\chcp.com"') do (
    set _OLD_CODEPAGE=%%a
)
if defined _OLD_CODEPAGE (
    "%SystemRoot%\System32\chcp.com" 65001 > nul
)
set VIRTUAL_ENV=C:\Users\anush\Dropbox\PC\Downloads\Drowsiness-Detection-main(1)\.venv
set _OLD_VIRTUAL_PROMPT=%PROMPT%
set PROMPT=(.venv) %PROMPT%
set PATH=%VIRTUAL_ENV%\Scripts;%PATH%
set VIRTUAL_ENV_PROMPT=(.venv)
:END
if defined _OLD_CODEPAGE (
    "%SystemRoot%\System32\chcp.com" %_OLD_CODEPAGE% > nul
    set _OLD_CODEPAGE=
)
5.3.4 deactivate.bat
@echo off
if defined _OLD_VIRTUAL_PROMPT (
    set "PROMPT=%_OLD_VIRTUAL_PROMPT%"
)
set _OLD_VIRTUAL_PROMPT=
if defined _OLD_VIRTUAL_PYTHONHOME (
    set "PYTHONHOME=%_OLD_VIRTUAL_PYTHONHOME%"
    set _OLD_VIRTUAL_PYTHONHOME=
)
if defined _OLD_VIRTUAL_PATH (
    set "PATH=%_OLD_VIRTUAL_PATH%"
)
set _OLD_VIRTUAL_PATH=
set VIRTUAL_ENV=
set VIRTUAL_ENV_PROMPT=
:END
CHAPTER 6
TESTING
6.1 System Test
The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a way to
check the functionality of components, sub-assemblies, assemblies and/or a finished
product. It is the process of exercising software with the intent of ensuring that the
software system meets its requirements and user expectations and does not fail in an
unacceptable manner. There are various types of tests, and each test type addresses a
specific testing requirement.
6.2 Types of Tests
6.2.1 Unit testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly and that program inputs produce valid outputs.
All decision branches and internal code flow should be validated. It is the testing of
individual software units of the application; it is done after the completion of an
individual unit and before integration. This is structural testing that relies on knowledge
of the unit's construction and is invasive. Unit tests perform basic tests at the component
level and test a specific business process, application, and/or system configuration. Unit
tests ensure that each unique path of a business process performs accurately to the
documented specifications and contains clearly defined inputs and expected results.
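As a small illustration of unit testing in this project's language (the helper function and the three-frame limit are hypothetical examples, not code from the system), each assertion below exercises one decision branch of a frame-counting routine:

```python
def drowsy_after(closed_flags, limit=3):
    """Return True once the eyes stay closed for `limit` consecutive frames."""
    streak = 0
    for closed in closed_flags:
        streak = streak + 1 if closed else 0
        if streak >= limit:
            return True
    return False

# Unit tests: one case per branch (long streak, broken streak, empty input).
assert drowsy_after([True, True, True]) is True
assert drowsy_after([True, False, True, True]) is False
assert drowsy_after([]) is False
```

In practice such assertions would live in a test module run by a framework like pytest, so every branch of the unit is checked automatically after each change.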
6.2.2 Integrated Testing
Integration tests are designed to test integrated software components to
determine whether they run as one program. Testing is event driven and is more
concerned with the basic outcome of screens or fields. Integration tests demonstrate
that although the components were individually satisfactory, as shown by successful
unit testing, the combination of components is correct and consistent. Integration
testing is specifically aimed at exposing the problems that arise from the combination
of components.
CHAPTER 7
OUTPUT
CHAPTER 8
CONCLUSION
The driver drowsiness detection system project represents a critical endeavor aimed at enhancing
road safety by mitigating the risks associated with drowsy driving. Through the integration of
advanced image processing techniques, machine learning algorithms, and real-time alert systems,
the project seeks to detect early signs of driver fatigue and prompt timely interventions to prevent
potential accidents.
Throughout the project, we have delved into various aspects, including technical
feasibility, financial considerations, market analysis, operational aspects, and legal and
ethical implications. The feasibility study confirmed the viability of the project,
highlighting the availability of necessary technology, market demand, and potential
return on investment. Additionally, operational aspects such as integration with existing
systems and user acceptance were addressed to ensure the system's effectiveness and
practicality.
The development process involved the implementation of key modules,
including image acquisition and preprocessing, feature extraction and selection,
drowsiness detection algorithms, and integration with user interfaces and alert systems.
These modules were meticulously designed and implemented to ensure robustness,
accuracy, and real-time performance, essential for effective drowsiness detection in
dynamic driving scenarios.
Furthermore, the project emphasized the importance of adhering to legal and
ethical standards, particularly regarding privacy, data protection, and liability
considerations. By addressing these concerns, the system aims to uphold the highest
standards of safety, reliability, and ethical conduct.
CHAPTER 9
BIBLIOGRAPHY
[1] Qiang Ji, Zhiwei Zhu and Peilin Lan ―IEEE transactions on Vehicular
Technology Real Time Non-intrusive Monitoring and Prediction of Driver
Fatigue, vol. 53, no. 4, July 2004.
[2] N.G. Narole, and G.H. Raisoni., ―IJCSNS A Neuro-genetic System Design for
Monitoring Driver’s Fatigue. vol. 9. No. 3, March 2009.
[3] Wei-niin Huang & Robert Mariani, ―Face Detecion and precise Eyes Location
―, Proceeding of the International Conference on Patteni
Recognization(ICPP‖OO),Vol.4,2000
[4] Gonzalez, Rafel C. and Woods, Richard E. ―Digital Image Processing‖,
Prentice Hall: Upper Saddle River, N.J., 2002.
[5] Perez, Claudio A. et al. ―Face and Eye Tracking Algorithm Based on Digital
Image Processing‖,IEEE System, Man and Cybernetics 2001 Conference, vol. 2
(2001), pp 1178-1188.
[6] Singh, Sarbjit and Papanikolopoulos, N.P. ―Monitoring Driver Fatigue Using
Facial Analysis Techniques‖, IEEE Intelligent Transport System Proceedings
(1999), pp 314-318.
[7] Ueno H., Kanda, M. and Tsukino, M. ―Development of Drowsiness
DetectionSystem‖,IEEE Vehicle Navigation and Infor mation Systems
Conference Proceedings,(1994), ppA1-3,15-20.
[8] Weirwille, W.W. (1994). ―Overview of Research on Driver Drowsiness
Definition and Driver Drowsiness Detection,‖ 14th International Technical
Conference on Enhanced Safety of Vehicles, pp 23-26.
[9] R. Brunelli, Template Matching Techniques in Computer Vision: Theory and
Practice, Wiley, ISBN 978-0-470-51706-2, 2009
[10] J. Cox, J. Ghosn, P.N. Yianilos, ―Feature-Based Recognition Using
Mixture-Distance, ―NEC Research Institute,Technical Report 95 - 09, 1995.
[11] I. Craw, H. Ellis and J.R. Lishman, ―Automatic Extraction of Face-