
“Emotion Classification using

Machine Learning & Neural Network”

Major Project Report

Submitted in partial fulfilment of the requirements for the award of the degree

of

Bachelor of Technology
in
Information Technology

by

Rohil Singh Nandan Kumar Ayush Jain Puneet


(01620703117) (01120703117) (00320703117) (01420703117)

Guided by:
Mr. Gulzar Ahmed
(Asst. Professor)

DEPARTMENT OF
INFORMATION TECHNOLOGY
CH. BRAHM PRAKASH GOVT. ENGINEERING COLLEGE
LAL BAGH JAFFARPUR DELHI-110073

June-July 2021
CANDIDATE’S DECLARATION

It is hereby certified that the work which is being presented in the B. Tech Major
Project Report entitled "Emotion Classification using Machine Learning and
Neural Network" in partial fulfilment of the requirements for the award of the
degree of Bachelor of Technology (Information Technology) and submitted in the
Department of Information Technology of Ch. Brahm Prakash Govt. Engineering
College, Delhi (Affiliated to Guru Gobind Singh Indraprastha University, Delhi) is
an authentic record of our own work carried out under the guidance of Mr. Gulzar
Ahmed, Asst. Professor.
The matter presented in the B. Tech Major Project Report has not been submitted
by us for the award of any other degree or diploma of any Institute.

Rohil Singh Nandan Kumar Ayush Jain Puneet


(01620703117) (01120703117) (00320703117) (01420703117)
CERTIFICATE

This is to certify that the work which is being presented in the B. Tech Major
Project Report entitled "Emotion Classification using Machine Learning and
Neural Network" in partial fulfilment of the requirements for the award of the
degree of Bachelor of Technology (Information Technology) and submitted in the
Department of Information Technology of Ch. Brahm Prakash Govt. Engineering
College, Delhi (Affiliated to Guru Gobind Singh Indraprastha University, Delhi) is
a bonafide project work carried out by them under my supervision and guidance.
Their work has reached the standard of fulfilling the requirements of the regulations
relating to the degree. The project report is an original piece of work and embodies the
findings made by the students themselves.
The results presented have not been submitted in part or in full to any other
University/Institute for the award of any degree or diploma.

Rohil Singh Nandan Kumar Ayush Jain Puneet


(01620703117) (01120703117) (00320703117) (01420703117)

This is to certify that the above statement made by the candidate is correct to the best
of my knowledge. They are permitted to appear in the External Major Project
Examination.

Mr. Gulzar Ahmed Mr. Sanjeev Kumar


(Asst. Professor) H.O.D (I.T)

The B. Tech Major Project Viva-Voce Examination of Ayush Jain


(Enrollment No: 00320703117), has been held on ……………………………….

(Signature of External Examiner) (Signature of External Examiner)


ACKNOWLEDGEMENT

At the very outset, we would like to record our heartfelt gratitude to our respected
teacher and mentor Mr. Gulzar Ahmed, Asst. Professor, Department of
Information Technology, for his valuable guidance and suggestions throughout our
project work.

We would like to extend our sincere thanks to the Head of the Department, Mr.
Sanjeev Kumar, for his suggestions from time to time to complete our project work.
We would be failing in our duty if we did not express our thanks to all the faculty
members and staff of the Department of Information Technology for providing us
the facilities to carry out our project work.

Rohil Singh Nandan Kumar Ayush Jain Puneet


(01620703117) (01120703117) (00320703117) (01420703117)
ABSTRACT

Modern human life is full of stress and challenges, and in these changing times humans seem
to be losing touch with their own emotions. Across the world, people find it difficult to
express or understand their emotions, and the study of these emotions is becoming important
and necessary. Many techniques for automatic human emotion recognition have been
proposed, and one of them uses physiological signals. Among the various signals that change
with the emotional state of the body, for our major project we have studied and analysed two:
skin conductance, measured using a GSR sensor, and skin temperature, measured using a
temperature sensor. We have also considered the external factors, room temperature and
humidity, that affect the readings of the emotion-detecting sensors. These sensors are
connected to and controlled by an Arduino. The GSR sensor senses skin conductance while
the temperature sensor records skin temperature, and the data is continuously sent to the
Arduino, which records and stores it with a timestamp; a separate sensor measures room
temperature and humidity. The data is then analysed and visualized. Using different
classification algorithms, the dataset produced by the sensors is labelled under different
emotions; for our major project we have classified the data as neutral, sad, angry and happy.
On the basis of classification and model training using two different techniques, live
prediction has been achieved that successfully assigns values to four emotions, namely happy,
neutral, angry and sad. Our model interacts with human emotions at a subconscious level,
which is difficult to manipulate, and hence helps an individual to understand or express his or
her emotional state.
TABLE OF CONTENTS

TITLE PAGE NO.


Candidate’s Declaration I
Certificate II
Acknowledgement III
Abstract IV
Table of Content V
List of Figures VI

1. INTRODUCTION
1.1 Context 1
1.2 Domain Knowledge 2
2. SYSTEM REQUIREMENT
2.1 Software Requirements 7
2.2 Hardware Requirements 7
3. SYSTEM ANALYSIS
3.1 SRS 8
3.2 Software Details 9
3.3 Hardware Details 13
4. IMPLEMENTATION
4.1 Data Collection 18
4.2 Data Analysis 21
4.3 Classification 24
5. RESULT AND CONCLUSION 32
6. REFERENCES 35
7. APPENDIX 36
LIST OF FIGURES

Sno. Name of the Figure Page number


1. Python 9
2. pyQt 11
3. Arduino 13
4. Arduino Board 14
5. GSR sensor 15
6. Skin Temperature sensor 17
7. Humidity and Room Temp sensor 18
8. Implementation flow Chart 19
9. Hardware Implementation 20
10. Status vs Skin Temp graph 21
11. GSR vs Status graph 22
12. Room Temp vs Status graph 22
13. Humidity vs Status graph 22
14. Example of KNN scatter plot 23
15. Neural Network 28
16. Neural Network Layer architecture 28
17. Output of Neural Network 32
18. Output of Neural Network 33
19. Output of KNN 33
20. Output of KNN 34

1. INTRODUCTION

1.1 CONTEXT

The project focuses on the live prediction of human emotions, helping an individual
understand his or her response to a situation in a better way with the help of models trained
using machine learning and a neural network for classification and prediction.

Modern human life is full of stress and challenges, and in these changing times humans seem
to be losing touch with their own emotions. Across the world, people find it difficult to
express or understand their emotions, and the study of these emotions is becoming important
and necessary. Many techniques for automatic human emotion recognition have been
proposed, and one of them uses physiological signals. Among the various signals that change
with the emotional state of the body, for our major project we have studied and analysed two:
skin conductance, measured using a GSR sensor, and skin temperature, measured using a
temperature sensor. We have also considered the external factors, room temperature and
humidity, that affect the readings of the emotion-detecting sensors. These sensors are
connected to and controlled by an Arduino. The GSR sensor senses skin conductance while
the temperature sensor records skin temperature, and the data is continuously sent to the
Arduino, which records and stores it with a timestamp; a separate sensor measures room
temperature and humidity. The data is then analysed and visualized. Using different
classification algorithms, the dataset produced by the sensors is labelled under different
emotions; for our major project we have classified the data as neutral, sad, angry and happy.
On the basis of classification and model training using two different techniques, live
prediction has been achieved that successfully assigns values to four emotions, namely happy,
neutral, angry and sad. Our model interacts with human emotions at a subconscious level,
which is difficult to manipulate, and hence helps an individual to understand or express his or
her emotional state.

1.2 DOMAIN KNOWLEDGE

1.2.1 PYTHON

Python is an interpreted, high-level, general-purpose programming language. Created
by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code
readability with its notable use of significant whitespace. Its language constructs and object-
oriented approach aim to help programmers write clear, logical code for small and large-scale
projects.

Python is dynamically typed and garbage-collected. It supports multiple programming
paradigms, including structured (particularly, procedural), object-oriented, and functional
programming. Python is often described as a "batteries included" language due to its
comprehensive standard library.

Python was conceived in the late 1980s as a successor to the ABC language. Python 2.0,
released in 2000, introduced features like list comprehensions and a garbage collection system
capable of collecting reference cycles. Python 3.0, released in 2008, was a major revision of
the language that is not completely backward-compatible, and much Python 2 code does not
run unmodified on Python 3.

The Python 2 language was officially discontinued in 2020 (first planned for 2015), and
"Python 2.7.18 is the last Python 2.7 release and therefore the last Python 2 release." No more
security patches or other improvements will be released for it. With Python 2's end-of-life,
only Python 3.5 and later are supported.

Python interpreters are available for many operating systems. A global community of
programmers develops and maintains CPython, an open-source reference implementation.
A non-profit organization, the Python Software Foundation, manages and directs resources for
Python and CPython development.

Python's large standard library, commonly cited as one of its greatest strengths, provides tools
suited to many tasks. For Internet-facing applications, many standard formats and protocols
such as MIME and HTTP are supported. It includes modules for creating graphical user
interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic
with arbitrary-precision decimals, manipulating regular expressions, and unit testing.

Some parts of the standard library are covered by specifications (for example, the Web Server
Gateway Interface (WSGI) implementation), but most modules are not. They are specified by
their code, internal documentation, and test suites. However, because most of the standard
library is cross-platform Python code, only a few modules need altering or rewriting for
variant implementations.

Python's name is derived from the British comedy group Monty Python, whom Python creator
Guido van Rossum enjoyed while developing the language. Monty Python references appear
frequently in Python code and culture; for example, the metasyntactic variables often used in
Python literature are spam and eggs instead of the traditional foo and bar. The official
Python documentation also contains various references to Monty Python routines.

The prefix Py- is used to show that something is related to Python. Examples of the use of this
prefix in names of Python applications or libraries include Pygame, a binding of SDL to
Python (commonly used to create games); PyQt and PyGTK, which bind Qt and GTK to
Python respectively; and PyPy, a Python implementation originally written in Python.

1.2.2 PyQt

PyQt is a Python binding of the cross-platform GUI toolkit Qt, implemented as a Python
plug-in. PyQt is free software developed by the British firm Riverbank Computing. It is
available under similar terms to Qt versions older than 4.5; this means a variety of licenses
including the GNU General Public License (GPL) and a commercial license, but not the GNU
Lesser General Public License (LGPL). PyQt supports Microsoft Windows as well as
various flavours of UNIX, including Linux and macOS (or Darwin).

PyQt implements around 440 classes and over 6,000 functions and methods including:

• a substantial set of GUI widgets
• classes for accessing SQL databases (ODBC, MySQL, PostgreSQL, Oracle, SQLite)
• QScintilla, a Scintilla-based rich text editor widget
• data-aware widgets that are automatically populated from a database
• an XML parser
• SVG support
• classes for embedding ActiveX controls on Windows (only in the commercial version)

To automatically generate these bindings, Phil Thompson developed the tool SIP, which is
also used in other projects.

In August 2009, Nokia, the then owners of the Qt toolkit, released PySide, providing similar
functionality, but under the LGPL, after failing to reach an agreement with Riverbank
Computing to change its licensing terms to include LGPL as an alternative license.

PyQt4 contains the following Python modules:

• The QtCore module contains the core non-GUI classes, including the event loop and Qt's
signal and slot mechanism. It also includes platform-independent abstractions for Unicode,
threads, mapped files, shared memory, regular expressions, and user and application settings.
• The QtGui module contains the majority of the GUI classes. These include a number of
table, tree and list classes based on the model–view–controller design pattern. Also provided
is a sophisticated 2D canvas widget capable of storing thousands of items including
ordinary widgets.
• The QtNetwork module contains classes for writing UDP and TCP clients and servers. It
includes classes that implement FTP and HTTP clients and support DNS lookups. Network
events are integrated with the event loop, making it very easy to develop networked
applications.
• The QtOpenGL module contains classes that enable the use of OpenGL in
rendering 3D graphics in PyQt applications.
• The QtSql module contains classes that integrate with open-source and proprietary SQL
databases. It includes editable data models for database tables that can be used with GUI
classes. It also includes an implementation of SQLite.
• The QtSvg module contains classes for displaying the contents of SVG files. It supports
the static features of SVG 1.2 Tiny.
• The QtXml module implements SAX and DOM interfaces to Qt's XML parser.
• The QtMultimedia module implements low-level multimedia functionality. Application
developers would normally use the phonon module.
• The QtDesigner module contains classes that allow Qt Designer to be extended using
PyQt.
• The Qt module consolidates the classes contained in all of the modules described above
into a single module. This has the advantage that you don't have to worry about which
underlying module contains a particular class. It has the disadvantage that it loads the
whole of the Qt framework, thereby increasing the memory footprint of an application.
Whether you use this consolidated module or the individual component modules is down
to personal taste.
• The uic module implements support for handling the XML files created by Qt Designer
that describe the whole or part of a graphical user interface. It includes classes that load an
XML file and render it directly, and classes that generate Python code from an XML file
for later execution.

PyQt5 contains the following Python modules:

• QtQml Module
• QtQuick Module
• QtCore Module
• QtGui Module
• QtPrintSupport Module
• QtWidgets Module
• QGLContext Module
• QGLFormat Module
• QGLWidget Module
• QtWebKit Module
• QtWebKitWidgets Module

1.2.3 C for EMBEDDED SYSTEMS

Embedded C is a set of language extensions for the C programming language by the C
Standards Committee to address commonality issues that exist between C extensions for
different embedded systems.

Embedded C programming typically requires nonstandard extensions to the C language in
order to support enhanced microprocessor features such as fixed-point arithmetic, multiple
distinct memory banks, and basic I/O operations.

In 2008, the C Standards Committee extended the C language to address such capabilities by
providing a common standard for all implementations to adhere to. It includes a number of
features not available in normal C, such as fixed-point arithmetic, named address spaces and
basic I/O hardware addressing. Embedded C uses most of the syntax and semantics of
standard C, e.g., the main() function, variable definitions, datatype declarations, conditional
statements (if, switch-case), loops (while, for), functions, arrays and strings, structures and
unions, bit operations, macros, etc.

2. SYSTEM REQUIREMENTS

2.1 SOFTWARE REQUIREMENTS

Operating System : Windows 8

Software : PyQt and a Python IDE

Language : Python

Documentation Tool : Google Docs

2.2 HARDWARE REQUIREMENTS

Board : Arduino

Sensors : GSR (Galvanic Skin Response) sensor,
Skin Temperature Sensor (DS18B20),
Room Temperature Sensor and
Humidity Sensor

3. SYSTEM ANALYSIS

3.1 Software Requirement Specification (SRS)

3.1.1 Purpose

This document specifies the requirement set for developing an emotion classification model
using the GSR sensor, skin temperature sensor, room temperature sensor, humidity sensor,
Arduino, a Python IDE and PyQt. It is designed to enhance the user experience on the go.
The intended audience for this document is the development team and the end-users of the
product. This SRS document covers the entire project at this stage of development.

3.1.2 Scope

The objective of this project is to benefit its user by analyzing emotions and classifying them
for better understanding. Other techniques, such as face recognition or voice recognition, can
be manipulated, but it is difficult to manipulate the physiological signals sensed by the GSR
and temperature sensors, as they reflect the subconscious emotional state of humans. Skin
temperature and skin conductance also depend on external factors; therefore, by continuously
measuring external factors along with internal ones, the accuracy of live prediction has been
increased.

3.1.3 Product Perspective

With the increasing scope of Artificial Intelligence, the model can fit into various sectors
and revolutionize industry. For a rich set of applications including human-robot
interaction, computer-aided tutoring, emotion-aware interactive games, neuromarketing and
socially intelligent software apps, computers should consider the emotions of their human
conversation partners. With constant monitoring of physiological signals, a person can gain
awareness of his or her own emotional state. The sensors can also be useful for doctors to
read and analyse patients' emotional health.

3.2 Software Details

3.2.1 Introduction to Python

Figure 1. Python

Python is an interpreted, high-level, general-purpose programming language. Created
by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code
readability with its notable use of significant whitespace. Its language constructs and object-
oriented approach aim to help programmers write clear, logical code for small and large-scale
projects.

Python is dynamically typed and garbage-collected. It supports multiple programming
paradigms, including structured (particularly, procedural), object-oriented, and functional
programming. Python is often described as a "batteries included" language due to its
comprehensive standard library.

Python was conceived in the late 1980s as a successor to the ABC language. Python 2.0,
released in 2000, introduced features like list comprehensions and a garbage collection system
capable of collecting reference cycles. Python 3.0, released in 2008, was a major revision of
the language that is not completely backward-compatible, and much Python 2 code does not
run unmodified on Python 3.

The Python 2 language was officially discontinued in 2020 (first planned for 2015), and
"Python 2.7.18 is the last Python 2.7 release and therefore the last Python 2 release." No more
security patches or other improvements will be released for it. With Python 2's end-of-life,
only Python 3.5 and later are supported.

Python interpreters are available for many operating systems. A global community of
programmers develops and maintains CPython, an open source reference implementation.
A non-profit organization, the Python Software Foundation, manages and directs resources for
Python and CPython development.

Python's large standard library, commonly cited as one of its greatest strengths, provides tools
suited to many tasks. For Internet-facing applications, many standard formats and protocols
such as MIME and HTTP are supported. It includes modules for creating graphical user
interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic
with arbitrary-precision decimals, manipulating regular expressions, and unit testing.

Some parts of the standard library are covered by specifications (for example, the Web Server
Gateway Interface (WSGI) implementation), but most modules are not. They are specified by
their code, internal documentation, and test suites. However, because most of the standard
library is cross-platform Python code, only a few modules need altering or rewriting for
variant implementations.

Python's name is derived from the British comedy group Monty Python, whom Python creator
Guido van Rossum enjoyed while developing the language. Monty Python references appear
frequently in Python code and culture; for example, the metasyntactic variables often used in
Python literature are spam and eggs instead of the traditional foo and bar. The official
Python documentation also contains various references to Monty Python routines.

The prefix Py- is used to show that something is related to Python. Examples of the use of this
prefix in names of Python applications or libraries include Pygame, a binding of SDL to Python
(commonly used to create games); PyQt and PyGTK, which bind Qt and GTK to Python
respectively; and PyPy, a Python implementation originally written in Python.

3.2.2 PyQt

Figure 2. pyQt

PyQt is a Python binding of the cross-platform GUI toolkit Qt, implemented as a Python
plug-in. PyQt is free software developed by the British firm Riverbank Computing. It is
available under similar terms to Qt versions older than 4.5; this means a variety of licenses
including the GNU General Public License (GPL) and a commercial license, but not the GNU
Lesser General Public License (LGPL). PyQt supports Microsoft Windows as well as
various flavours of UNIX, including Linux and macOS (or Darwin).

PyQt implements around 440 classes and over 6,000 functions and methods including:

• a substantial set of GUI widgets
• classes for accessing SQL databases (ODBC, MySQL, PostgreSQL, Oracle, SQLite)
• QScintilla, a Scintilla-based rich text editor widget
• data-aware widgets that are automatically populated from a database
• an XML parser
• SVG support
• classes for embedding ActiveX controls on Windows (only in the commercial version)

To automatically generate these bindings, Phil Thompson developed the tool SIP, which is
also used in other projects.

In August 2009, Nokia, the then owners of the Qt toolkit, released PySide, providing similar
functionality, but under the LGPL, after failing to reach an agreement with Riverbank
Computing to change its licensing terms to include LGPL as an alternative license.

PyQt4 contains the following Python modules:

• The QtCore module contains the core non-GUI classes, including the event loop and Qt's
signal and slot mechanism. It also includes platform-independent abstractions for Unicode,
threads, mapped files, shared memory, regular expressions, and user and application settings.
• The QtGui module contains the majority of the GUI classes. These include a number of
table, tree and list classes based on the model–view–controller design pattern. Also provided
is a sophisticated 2D canvas widget capable of storing thousands of items including
ordinary widgets.
• The QtNetwork module contains classes for writing UDP and TCP clients and servers. It
includes classes that implement FTP and HTTP clients and support DNS lookups. Network
events are integrated with the event loop, making it very easy to develop networked
applications.
• The QtOpenGL module contains classes that enable the use of OpenGL in
rendering 3D graphics in PyQt applications.
• The QtSql module contains classes that integrate with open-source and proprietary SQL
databases. It includes editable data models for database tables that can be used with GUI
classes. It also includes an implementation of SQLite.
• The QtSvg module contains classes for displaying the contents of SVG files. It supports
the static features of SVG 1.2 Tiny.
• The QtXml module implements SAX and DOM interfaces to Qt's XML parser.
• The QtMultimedia module implements low-level multimedia functionality. Application
developers would normally use the phonon module.
• The QtDesigner module contains classes that allow Qt Designer to be extended using
PyQt.
• The Qt module consolidates the classes contained in all of the modules described above
into a single module. This has the advantage that you don't have to worry about which
underlying module contains a particular class. It has the disadvantage that it loads the
whole of the Qt framework, thereby increasing the memory footprint of an application.
Whether you use this consolidated module or the individual component modules is down
to personal taste.
• The uic module implements support for handling the XML files created by Qt Designer
that describe the whole or part of a graphical user interface. It includes classes that load an
XML file and render it directly, and classes that generate Python code from an XML file
for later execution.

PyQt5 contains the following Python modules:

• QtQml Module
• QtQuick Module
• QtCore Module
• QtGui Module
• QtPrintSupport Module
• QtWidgets Module
• QGLContext Module
• QGLFormat Module
• QGLWidget Module
• QtWebKit Module
• QtWebKitWidgets Module
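
As a minimal sketch of how a PyQt5 application is assembled and started (a hypothetical hello-world window, not the project's GUI, which is listed in full in the appendix):

import sys
from PyQt5 import QtWidgets

app = QtWidgets.QApplication(sys.argv)
window = QtWidgets.QMainWindow()
window.setWindowTitle("Hello PyQt5")
# A plain label stands in for real sensor readouts.
window.setCentralWidget(QtWidgets.QLabel("Sensor data will appear here"))
window.show()
sys.exit(app.exec_())  # enter the Qt event loop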

3.3 Hardware Details

3.3.1 Arduino

Figure 3. Arduino

Arduino is an open-source prototyping platform based on easy-to-use hardware and software.
It consists of a programmable circuit board (referred to as a microcontroller) and
ready-made software called the Arduino IDE (Integrated Development Environment), which
is used to write and upload the computer code to the physical board.

The key features are:

● Arduino boards are able to read analog or digital input signals from different sensors
and turn them into outputs such as activating a motor, turning an LED on/off, connecting
to the cloud and many other actions.
● You can control your board's functions by sending a set of instructions to the
microcontroller on the board via the Arduino IDE (referred to as uploading software).
● Unlike most previous programmable circuit boards, the Arduino does not need an extra
piece of hardware (called a programmer) in order to load new code onto the board.
You can simply use a USB cable.
● Additionally, the Arduino IDE uses a simplified version of C++, making it easier to
learn to program.
● Finally, Arduino provides a standard form factor that breaks the functions of the
microcontroller into a more accessible package.

Figure 4. ARDUINO BOARD

ARDUINO IDE

The Arduino Integrated Development Environment (IDE) is a cross-platform application
(for Windows, macOS, Linux) that is written in the programming language Java. It is used
to write and upload programs to Arduino-compatible boards and also, with the help of
3rd-party cores, other vendors' development boards.

The source code for the IDE is released under the GNU General Public License, version 2.
The Arduino IDE supports the languages C and C++ using special rules of code structuring.
The Arduino IDE supplies a software library from the Wiring project, which provides many
common input and output procedures. User-written code only requires two basic functions,
for starting the sketch and the main program loop, that is compiled and linked with a
program stub main() into an executable cyclic executive program with the GNU toolchain,
also included with the IDE distribution. The Arduino IDE employs the program avrdude to
convert the executable code into a text file in a hexadecimal encoding that is loaded into the
Arduino board by a loader program in the board's firmware.

3.3.2 SENSORS

GSR (Galvanic Skin Response):

Figure 5. GSR Sensor

The Galvanic Skin Response (GSR), also named Electrodermal Activity (EDA) or Skin
Conductance (SC), is the measure of the continuous variations in the electrical
characteristics of the skin, for instance the conductance, caused by variations in human
body sweating. The traditional theory of GSR analysis is based on the assumption
that skin resistance varies with the state of the sweat glands in the skin. Human body sweating
is regulated by the Autonomic Nervous System (ANS). In particular, if the sympathetic branch
(SNS) of the autonomic nervous system is highly aroused, then sweat gland activity also
increases, which in turn increases skin conductance, and vice versa.

In this way, skin conductance can be a measure of the human Sympathetic Nervous System
responses. Such a system is directly involved in the emotional behavioural regulation in
humans. Additional studies highlighted the relationship between GSR signal and some
mental states, such as stress, drowsiness and engagement. The GSR signal is very easy to
record: in general, just two electrodes put at the second and third finger of one hand are
necessary. The variation of a low-voltage applied current between the two electrodes is used
as a measure of the EDA. Recently, new commercial healthcare devices more and more
wearable and fancy (bracelets, watches) have been developed, thus such measure is usable in
each research activity in the neuroscience domain also in no-laboratory settings.
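
As an illustrative sketch only (the voltage-divider model and the 10 kΩ series resistance below are assumptions for a generic GSR module, not the vendor's documented formula), a raw analog reading can be turned into a conductance value:

# Illustrative conversion of a raw 10-bit ADC reading from a GSR module into
# skin resistance and conductance; the divider model and constants are assumed.
VCC = 5.0            # supply voltage in volts
R_SERIES = 10000.0   # assumed fixed divider resistor, in ohms

def gsr_to_conductance(adc_value):
    voltage = adc_value * VCC / 1023.0                       # ADC count -> volts
    skin_resistance = R_SERIES * voltage / (VCC - voltage)   # voltage divider
    return 1.0 / skin_resistance                             # conductance (siemens)

print(gsr_to_conductance(512))  # roughly a mid-scale reading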

SKIN TEMPERATURE SENSOR:

Figure 6. Temperature Sensor

A temperature sensor is a device, usually an RTD (resistance temperature detector) or a
thermocouple, that collects data about temperature from a particular source and
converts the data into an understandable form for a device or an observer. Temperature
sensors are used in many applications such as HVAC system environmental controls, food
processing units, medical devices, chemical handling and automotive under-the-hood
monitoring and control systems.

A thermistor temperature sensor is relatively inexpensive, adaptable, and easy to use.
It changes its resistance when the temperature changes, like an RTD sensor. Thermistors are
made from manganese and oxides of nickel; such materials are called ceramic materials, and
they make the device susceptible to damage. The thermistor offers higher sensitivity than
resistance temperature detectors. Most thermistors have a negative temperature
coefficient, meaning that when the temperature increases, the resistance decreases.
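
As an illustrative sketch of this negative temperature coefficient (the Steinhart-Hart coefficients below are typical values assumed for a generic 10 kΩ NTC thermistor, not taken from a specific datasheet), a thermistor's resistance can be converted into a temperature:

import math

# Assumed Steinhart-Hart coefficients for a generic 10 kOhm NTC thermistor.
A = 1.009249522e-3
B = 2.378405444e-4
C = 2.019202697e-7

def thermistor_temperature_celsius(resistance_ohms):
    ln_r = math.log(resistance_ohms)
    kelvin = 1.0 / (A + B * ln_r + C * ln_r ** 3)  # Steinhart-Hart equation
    return kelvin - 273.15

# Negative temperature coefficient: lower resistance means higher temperature.
print(thermistor_temperature_celsius(10000.0))  # ~25 deg C at nominal resistance
print(thermistor_temperature_celsius(5000.0))   # hotter, since resistance dropped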

ROOM TEMPERATURE SENSOR and HUMIDITY SENSOR:

Figure 7. Room Temperature and Humidity Sensor

Humidity is the measure of the amount of water vapor present in the air. Humidity is
expressed as relative humidity or absolute humidity. For industrial and medical
environments, relative humidity is an important factor. A rise in humidity beyond threshold
levels can lead to malfunctioning of control systems and errors in weather prediction
systems. So, as a safety and security factor, the measurement of humidity values is very
important. Humidity sensors are used to measure humidity values; relative humidity sensors
also measure air temperature.

4. IMPLEMENTATION

Figure 8. Implementation flow chart

4.1 Data Collection

Data collection and gathering is the first and most crucial step of a machine learning project,
as it forms the core of the predictions and classifications. The emotions of participants in
different situations were recorded with the help of the hardware sensors.
Participants were engaged in calm conversations, and a few emotion-triggering videos were
also played to record their responses. The process was important for understanding how
people react in different conditions.

The sensor data was recorded in a CSV file. Along with the physiological signals, the
environmental factors that affect these readings were also recorded, to attain accuracy and
allow proper comparison.
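
As a minimal sketch of this logging step (assuming the Arduino sketch in the appendix, which prints comma-separated skin temperature, GSR, humidity and room temperature readings over a serial port, here COM3, and the column names used later in the analysis code):

import csv
import time

import serial  # pyserial

# Assumed port name; on Linux this would be something like /dev/ttyACM0.
ser = serial.Serial('COM3', baudrate=9600, timeout=10)

with open('combine.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Timestamp', 'SkinTemp', 'GSR', 'Humidity', 'RoomTemp'])
    while True:
        line = ser.readline().decode('utf-8').strip()
        parts = line.split(',')
        if len(parts) != 4:
            continue  # skip empty or malformed lines
        # Store each reading with a timestamp, as described above.
        writer.writerow([time.strftime('%Y-%m-%d %H:%M:%S')] + parts)
        f.flush()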

4.1.1 Hardware Implementation for Data Collection

Figure 9. Hardware Connection

In the hardware implementation, we have two sensors connected to an Arduino
microcontroller, which receives the data from the sensors.

In this diagram, we use an Arduino UNO microcontroller, a temperature sensor (DS18B20),
a GSR (Galvanic Skin Response) sensor, a 4.7 kΩ resistor, a breadboard and wires. The red
wire (VCC) is used for the power supply; it is connected from the Arduino UNO 5V pin to
the breadboard and then to the temperature sensor and GSR sensor. The black wire (GND)
is for ground; it is connected from the Arduino UNO ground (GND) pin to the breadboard
and then to both sensors. The blue wire is for receiving sensor data and connects Arduino
UNO analog pin A0 to the GSR sensor. The yellow wire is also for receiving data, in this
case from the temperature sensor, and connects Arduino UNO digital pin 7 to the temperature
sensor's data pin. The grey wire connects the 4.7 kΩ resistor; the resistor sits between the
5V (VCC) pin and the data pin (yellow wire) of the temperature sensor. The DHT11 is a
commonly used temperature and humidity sensor; its data pin is connected to an I/O pin of
the MCU through a 5 kΩ pull-up resistor and outputs the values of both temperature and
humidity as serial data. If you are trying to interface the DHT11 with an Arduino, there are
ready-made libraries for it which will give you a quick start.

4.2 Data Analysis
Data analysis is the next step in which the collected data is studied to remove redundancies, or
ambiguity.
The is studied with help of visual tools to understand the paradigm. The collected data with
appropriate labels are fed to tools for visual understanding.
Graphs can be drawn between several parameters, but to understand how emotions are
effected by different sensors involved, the graphs cover this crucial comparison.
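
As a minimal sketch of how such graphs can be produced (assuming the CSV column names used in the appendix code, with pandas and matplotlib):

import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv('combine.csv')

# One scatter plot per sensor reading against the labelled emotion (Status).
for column in ['SkinTemp', 'GSR', 'Humidity', 'RoomTemp']:
    plt.figure()
    plt.scatter(df['Status'], df[column], s=10)
    plt.xlabel('Status (0=Happy, 1=Sad, 2=Neutral, 3=Angry)')  # mapping from the appendix
    plt.ylabel(column)
    plt.title(f'{column} vs Status')
plt.show()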

Figure 10. STATUS vs SKIN TEMP

Figure 11. GSR vs STATUS

Figure 12. ROOM TEMP vs STATUS

Figure 13. HUMIDITY vs STATUS

4.3 CLASSIFICATION and PREDICTION

Based on the analyzed data-set, classification followed by prediction of unlabeled data is
performed using different machine learning algorithms. The algorithm to be considered
depends on the size of the data-set, the nature of the values and the complexity of the problem.

Machine learning algorithms can be supervised or unsupervised. Supervised machine learning
algorithms work on a labeled data-set and assign labels to new inputs based on previous
knowledge, whereas in unsupervised algorithms these labels are absent and classification
takes place on the basis of similarities between samples. After the model is trained with the
help of the data-set, new values are classified under the different labels by the algorithm;
this is known as prediction.
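
As a minimal sketch of this supervised train-then-predict pattern (using scikit-learn, the library used in the appendix code, and the CSV column names assumed there):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv('combine.csv')
x = df[['SkinTemp', 'GSR', 'Humidity', 'RoomTemp']]  # inputs (features)
y = df['Status']                                     # labels (emotions)

# Hold out part of the labeled data to estimate accuracy on unseen values.
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)

model = KNeighborsClassifier(n_neighbors=5)
model.fit(x_train, y_train)           # training on the labeled data-set
print(model.score(x_test, y_test))    # fraction of correct test predictions
print(model.predict(x_test[:5]))      # prediction: labels for new values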

The project has majorly used 2 algorithms:

1. KNN

2. Neural Network

The two algorithms fall into very different domains, as their working principles are very
different, yet they help us draw a comparison between machine learning and deep learning,
since neural networks fall under the subset of machine learning called deep learning.

4.3.1 Implementation of KNN Algorithm

KNN stands for K- Nearest Neighbour. It is one of the simplest algorithms in machine
learning. It is a supervised machine learning algorithms which means it works on labeled data.
It has multiples uses such as classification, regression and searching.
The KNN algorithm assumes that similar things exist in close proximity, i.e in their
neighbourhood. The basic principle behind working of KNN is that it stores the all the
available data and classifies new data point on the basis of similarities.

Figure 14. Example of KNN

This is an example of how KNN would classify the data points into four categories, namely
Happy, Sad, Angry and Neutral.
Steps involved in classifying a new data point with KNN:
1. After loading the data, choose K, the number of neighbours to consider.

2. Then calculate the distance (Euclidean distance) between the new data point and every
point in the stored data.

3. Pair each distance with its index (status number, for example 0 for Happy) to obtain an
ordered collection.

4. Sort the distances and indices in ascending order by distance.

5. Select the first K entries from the sorted list.

6. Get the labels for the selected entries.

7. Return the mode of the labels in the case of a classification problem.

Choosing the right value of K:


1. There is no fixed formula for the best value of K; to choose the right value, run the
algorithm several times and pick the value that minimizes the error.
2. As we decrease the value, the predictions tend to become more unstable and inaccurate; for
K=1 the decision rests on a single nearest neighbour.
3. As we increase the value of K, accuracy tends to increase, as the vote of several
data points is considered.
4. Since it is like a voting system, we take K to be an odd number to avoid tiebreaker situations.

For emotion prediction using KNN with 4 labels, K = 5.
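
A minimal from-scratch sketch of the steps above in plain Python (the data rows are hypothetical; the project's GUI code in the appendix uses scikit-learn's KNeighborsClassifier instead):

import math
from collections import Counter

def knn_predict(train_rows, train_labels, new_point, k=5):
    # Steps 2-3: Euclidean distance from the new point to every stored point,
    # paired with the point's index.
    distances = []
    for index, row in enumerate(train_rows):
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, new_point)))
        distances.append((dist, index))
    # Step 4: sort by distance, ascending.
    distances.sort(key=lambda pair: pair[0])
    # Steps 5-6: take the first K entries and look up their labels.
    k_labels = [train_labels[index] for _, index in distances[:k]]
    # Step 7: return the mode (majority vote) of the labels.
    return Counter(k_labels).most_common(1)[0][0]

# Hypothetical rows of (SkinTemp, GSR, Humidity, RoomTemp); labels: 0=Happy, 1=Sad.
rows = [(33.1, 410, 45.0, 26.0), (31.8, 530, 44.0, 26.5), (33.0, 420, 45.5, 26.2)]
labels = [0, 1, 0]
print(knn_predict(rows, labels, (32.9, 430, 45.2, 26.1), k=3))  # -> 0 (Happy)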

4.3.2 Neural Network

Neural networks are a set of algorithms that closely mimic the human brain. They are modeled
such that they recognize patterns and features in the given data to anticipate and predict new
data values. The interpretation of sensory data is done through numerical perception,
clustering, or labeling. They cluster or classify the data on top of the layer of raw data by
analyzing similarities between samples.
For example, when an image of an apple is shown to a child, he stores a few features to
recognize it in the future, such as colour and size; the next time, when we show him an image
of a tomato, he thinks it is an apple, and when he is told that it is a tomato he stores more
information regarding the features of an apple, making his predictions more accurate. The
same concept is used in neural networks, and this error-and-correction procedure is achieved
by the layers in a neural network.

Neural networks are multi-layer networks of neurons that help in classification and prediction.
Classification is achieved with labeled data, which makes it a supervised learning method,
whereas clustering can be done on unlabeled data, making it unsupervised learning.

As mentioned, a deep learning algorithm involves layers for prediction, and these layers are
made up of nodes. A node can be imagined as a neuron in the human brain, which activates
when it encounters certain stimuli. A node combines an input with a weight that boosts or
dampens the effect of that input, thereby giving meaning to the input. The input-weight
products are summed and passed through an activation function to determine their ultimate
effect on the output:

output = x1*w1 + x2*w2 + x3*w3 + x4*w4 + x5*w5
final output = f_activation(output)

where x stands for the inputs, w for the weights, and f_activation for the activation function.
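
A minimal sketch of this single-node computation in plain Python (ReLU is assumed as the activation function here, since the project's network uses ReLU; the numbers are hypothetical):

def relu(value):
    # ReLU activation: pass positive sums through, clamp negatives to zero.
    return max(0.0, value)

inputs = [0.5, 0.1, 0.9, 0.3, 0.7]      # x1 .. x5 (hypothetical)
weights = [0.4, -0.2, 0.6, 0.1, -0.5]   # w1 .. w5 (hypothetical)

output = sum(x * w for x, w in zip(inputs, weights))   # weighted sum
final_output = relu(output)                            # activation
print(final_output)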

Figure 15. Neural Network

In the node layer, neurons act like trigger functions that turn on and off as input is fed to the system.

Figure 16. Neural Network Layer Architecture

Feedforward Network

The ultimate goal is to reach a stage with as little error as possible. The network works as a
loop in which we run the same function several times with adjusted weights until they become
sufficient to give accurate output. It is a repetitive process of guessing and adjusting the
weights, and hence the inputs' contributions, to reach accurate results and predictions.
In each step we update the weights to bring the output closer to the target. For example, when
an apple was shown to a child he recognized it by its colour, but when a tomato was shown
he made a mistake and adjusted his predictions by considering shape and size along with
colour; this is what we call the adjustment of error in neural networks.
This collection of weights, from beginning to end, is known as the model, as it maps a
relationship between input, output and error, and it keeps changing as the neural network
updates its parameters. Our objective is to minimize the cost function, where the cost function
is an approximation of how wrong our predictions are with respect to the actual outcome.

It’s working formulas:


Input * Weight = Guess

Ground truth - guess = error

Error * weight’s contribution = adjustment

Cost function:
Mean squared error as cost function:

MSE = Sum [ ( Prediction - Actual )² ] * (1 / num_observations)

Gradient Descent Optimizer
The weights are adjusted according to the gradient descent optimizer, as each component of
the gradient tells us how the cost function would change. As a neural network learns, it slowly
adjusts many weights so that they map signal to meaning correctly. The relationship between
the network error and each of those weights is a derivative, dE/dw, that measures the degree
to which a slight change in a weight causes a slight change in the error:

dError/dWeight = (dError/dActivation) * (dActivation/dWeight)
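
A minimal numeric sketch of this idea, adjusting a single weight by gradient descent under the MSE cost defined above (the values are hypothetical; the project's network uses Keras' Adam optimizer, a refinement of plain gradient descent):

inputs = [1.0, 2.0, 3.0]
targets = [2.0, 4.0, 6.0]   # ground truth: here target = 2 * input
weight = 0.5                # initial guess
learning_rate = 0.1

for step in range(20):
    guesses = [weight * x for x in inputs]
    # dMSE/dweight = (2/n) * sum((guess - target) * input), by the chain rule above.
    n = len(inputs)
    gradient = 2 * sum((g - t) * x for g, t, x in zip(guesses, targets, inputs)) / n
    weight -= learning_rate * gradient  # move against the gradient
print(weight)  # converges towards 2.0, the weight that minimizes the error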
Backward Propagation
It is the feedback system of the neural network, in which the error propagates back towards
the previous layers for the adjustment of the weights.

We calculate the error for each neuron, starting from the layer closest to the output and moving
back to the first layer of the model.

In emotion prediction using neural networks we have used a network of six layers. For the
first layer, 30 neurons are taken, each taking the 7 input parameters of the dataset, with the
ReLU activation function. For the second, third, fourth and fifth layers, 30 neurons are taken,
each automatically fed from the previous layer, with the ReLU activation function.
The final layer, for output, has only one neuron, as it outputs a single value.

5. RESULT AND CONCLUSION

On studying and analyzing the results obtained for the data-set, it was found that KNN works
efficiently for the given data; the only drawbacks of KNN are its computational speed and its
limited ability to respond to dynamic changes in the data and the introduction of new values.
The neural network, in contrast, is known for its learning capability, as it is part of deep
learning, and on the given data it was efficient and fast as well. For the given data-set its
accuracy was decent, as the parameters were distinct, but for expansion purposes, that is, the
involvement of new parameters to get closer to human emotions, a neural network can be
very helpful.

ALGORITHM          AVERAGE ACCURACY

KNN                93%
Neural Network     83%

Figure 17. Output using Neural Network

Figure 18. Output using Neural Network

Figure 19. Output using KNN

Figure 20. Output using KNN

Prediction can be performed by various algorithms, and the motive behind studying the two
extremes was to analyse the working and reaction of two different algorithms on the same
data-set, obtained under numerous situations using four sensors. Two sensors, GSR and skin
temperature, measure skin conductance and skin temperature and comprise the physiological
features under study; the other two sensors, room temperature and humidity, capture the
conditions that affect the performance of our main sensors, as with a change in environmental
conditions there are changes in the average values of skin temperature and skin conductance,
which can affect our predictions. This also brings out the observation of how human emotions
are affected by environment and temperature.

It has been found that human emotions affect the physiological signals of the human body,
and by studying them close predictions of human psychology can be drawn. This in itself can
help individuals to understand their own minds and, with that understanding, allow them to
control their reactions. It can also help in the early diagnosis of many mental health issues,
as well as many severe illnesses that show no physical symptoms but do show psychological
ones.

6. REFERENCES
1. Deger Ayata, Yusuf Yaslan and Mustafa Kamasak, "Emotion Recognition via Galvanic
Skin Response: Comparison of Machine Learning Algorithms and Feature Extraction
Methods," IU-JEEE, Vol. 17(1), 2017, pp. 3129-3136.

2. "Wearable Emotion Recognition System Based on GSR," Session: Understanding and
Promoting Personal Health, MMHealth'17, October 23, 2017, Mountain View, CA, USA.

3. "Galvanic Skin Response Data Classification for Emotion Detection," International
Journal of Electrical and Computer Engineering (IJECE), Vol. 8, No. 5, October 2018,
pp. 4004-4014, ISSN: 2088-8708, DOI: 10.11591/ijece.v8i5.pp4004-4014.

7. APPENDIX

Source Code

//HARDWARE CODE

#include <OneWire.h>
#include <DallasTemperature.h>
#include <dht.h>
#define ONE_WIRE_BUS 5
#define dht_apin A0
const int GSR=A2;
int threshold=0;
int sensorValue;
OneWire oneWire(ONE_WIRE_BUS);
DallasTemperature sensors(&oneWire);
dht DHT;
float Celcius=0;
void setup(void)
{
pinMode(13,OUTPUT);
digitalWrite(13,HIGH);
Serial.begin(9600);
}

void loop(void)
{
digitalWrite(13,HIGH);
sensors.requestTemperatures();
Celcius=sensors.getTempCByIndex(0);
Serial.print(Celcius);//skin Temperature
Serial.print(",");
sensorValue=analogRead(GSR);
Serial.print(sensorValue);//GSR
Serial.print(",");
DHT.read11(dht_apin);
Serial.print(DHT.humidity);//Humidity
Serial.print(",");
Serial.println(DHT.temperature);//Room Temperature
delay(3000);
}
//end of hardware code

// Code for Neural network

import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.metrics import r2_score
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
import pickle

# Creating the data frame from the csv file.
df = pd.read_csv("combine.csv")
# Adding the independent variables to x.
x = df[['GSR','C','F']]
# Adding the variable to be predicted, i.e. the dependent variable, to y.
y = df['Status']
# Splitting the dataset into 80% for training and 20% for testing;
# test_size=0.2 signifies this.
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2)

# Remove these comments if the data is not scaled, i.e. there is a difference
# between the magnitudes of the values of different columns of the dataset.
#x_train = preprocessing.scale(x_train)
#x_test = preprocessing.scale(x_test)

# Defining the linear model of the neural network.
model = Sequential()
# Adding the layers.
# For the first layer 30 neurons are taken; input_shape must equal the number of
# feature columns in x (the report's full dataset uses 7 input parameters), with
# activation function relu.
model.add(Dense(30,input_shape=(x_train.shape[1],),activation='relu'))
# For the second, third, fourth and fifth layers 30 neurons are taken, each
# automatically fed from the previous layer, with activation function relu.
model.add(Dense(30,activation='relu'))
model.add(Dense(30,activation='relu'))
model.add(Dense(30,activation='relu'))
model.add(Dense(30,activation='relu'))
# Final layer for output with only 1 neuron as it's going to output a single value.
model.add(Dense(1,))
# Adam is like the stochastic gradient descent optimizer but gives better
# performance.
# Mean squared error measures the difference between actual and predicted values.
model.compile(Adam(lr=0.003),'mean_squared_error')
# Epochs is the number of times to cycle through all the data.
# Verbose controls whether certain things are printed at each epoch; 0 means off.
# Validation split is a random sample of the training data used to check the
# accuracy of the model; it is set to 0.1, meaning 10% of the training data.
history = model.fit(x_train,y_train,epochs=800,validation_split=0.1,verbose=0)
# Creating a dictionary of the loss history.
history_dict = history.history

# Serializing the trained model object using pickle for later use, i.e. saving
# the model to a file.
filename = 'finalized_model.sav'
pickle.dump(model,open(filename,'wb'))

# Plotting the model's training cost/loss and the validation split cost/loss.
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
plt.figure()
plt.plot(loss_values,'bo',label='training loss')
plt.plot(val_loss_values,'r',label='val training loss')
plt.show()
y_train_pred = model.predict(x_train)
y_test_pred = model.predict(x_test)
print(y_train_pred)
print(y_train)

print("The accuracy on the Train set is:\t{:0.3f}".format(r2_score(y_train,y_train_pred)*100))
print("The accuracy on the Test set is:\t{:0.3f}".format(r2_score(y_test,y_test_pred)*100))

// Code for KNN

# -*- coding: utf-8 -*-

# Form implementation generated from reading ui file 'Sensors.ui'


#
# Created by: PyQt5 UI code generator 5.14.2
#
from PyQt5 import QtCore, QtGui, QtWidgets
import serial
import time, threading
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
import numpy as np
import pandas as pd

global ser
ser = serial.Serial('COM3', baudrate=9600, timeout=10,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
bytesize=serial.EIGHTBITS
)

class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(754, 531)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.label = QtWidgets.QLabel(self.centralwidget)
self.label.setGeometry(QtCore.QRect(190, 0, 371, 51))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(20)
self.label.setFont(font)
self.label.setAlignment(QtCore.Qt.AlignCenter)
self.label.setObjectName("label")
self.SkinTemp = QtWidgets.QLabel(self.centralwidget)
self.SkinTemp.setGeometry(QtCore.QRect(200, 100, 181, 31))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(12)
self.SkinTemp.setFont(font)
self.SkinTemp.setAlignment(QtCore.Qt.AlignCenter)
self.SkinTemp.setObjectName("SkinTemp")
self.labelcm = QtWidgets.QLabel(self.centralwidget)
self.labelcm.setGeometry(QtCore.QRect(20, 100, 171, 41))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(12)
self.labelcm.setFont(font)
self.labelcm.setAlignment(QtCore.Qt.AlignCenter)
self.labelcm.setObjectName("labelcm")
self.Predict = QtWidgets.QPushButton(self.centralwidget)
self.Predict.setGeometry(QtCore.QRect(260, 360, 171, 51))
self.Predict.setObjectName("Predict")

self.Predict.clicked.connect(self.predictionfun)
self.label_2 = QtWidgets.QLabel(self.centralwidget)
self.label_2.setGeometry(QtCore.QRect(320, 260, 111, 31))
self.label_2.setText("")
self.label_2.setObjectName("label_2")
self.Humidity = QtWidgets.QLabel(self.centralwidget)
self.Humidity.setGeometry(QtCore.QRect(200, 160, 181, 31))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(12)
self.Humidity.setFont(font)
self.Humidity.setAlignment(QtCore.Qt.AlignCenter)
self.Humidity.setObjectName("Humidity")
self.labelcm_2 = QtWidgets.QLabel(self.centralwidget)
self.labelcm_2.setGeometry(QtCore.QRect(10, 160, 171, 41))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(12)
self.labelcm_2.setFont(font)
self.labelcm_2.setAlignment(QtCore.Qt.AlignCenter)
self.labelcm_2.setObjectName("labelcm_2")
self.GSR = QtWidgets.QLabel(self.centralwidget)
self.GSR.setGeometry(QtCore.QRect(560, 100, 181, 31))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(12)
self.GSR.setFont(font)
self.GSR.setAlignment(QtCore.Qt.AlignCenter)
self.GSR.setObjectName("GSR")
self.labelcm_3 = QtWidgets.QLabel(self.centralwidget)
self.labelcm_3.setGeometry(QtCore.QRect(390, 100, 171, 41))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(12)
self.labelcm_3.setFont(font)
self.labelcm_3.setAlignment(QtCore.Qt.AlignCenter)
self.labelcm_3.setObjectName("labelcm_3")
self.RoomTemp = QtWidgets.QLabel(self.centralwidget)
self.RoomTemp.setGeometry(QtCore.QRect(560, 160, 181, 31))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(12)
self.RoomTemp.setFont(font)
self.RoomTemp.setAlignment(QtCore.Qt.AlignCenter)
self.RoomTemp.setObjectName("RoomTemp")
self.labelcm_4 = QtWidgets.QLabel(self.centralwidget)
self.labelcm_4.setGeometry(QtCore.QRect(390, 160, 171, 41))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(12)
self.labelcm_4.setFont(font)
self.labelcm_4.setAlignment(QtCore.Qt.AlignCenter)
self.labelcm_4.setObjectName("labelcm_4")
self.labelcm_5 = QtWidgets.QLabel(self.centralwidget)
self.labelcm_5.setGeometry(QtCore.QRect(30, 300, 171, 41))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(12)
self.labelcm_5.setFont(font)
self.labelcm_5.setAlignment(QtCore.Qt.AlignCenter)
self.labelcm_5.setObjectName("labelcm_5")
self.Status = QtWidgets.QLabel(self.centralwidget)
self.Status.setGeometry(QtCore.QRect(150, 300, 171, 41))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(12)
self.Status.setFont(font)
self.Status.setAlignment(QtCore.Qt.AlignCenter)
self.Status.setObjectName("Status")
self.labelcm_7 = QtWidgets.QLabel(self.centralwidget)
self.labelcm_7.setGeometry(QtCore.QRect(380, 300, 131, 41))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(12)
self.labelcm_7.setFont(font)
self.labelcm_7.setAlignment(QtCore.Qt.AlignCenter)
self.labelcm_7.setObjectName("labelcm_7")
self.Accuracy = QtWidgets.QLabel(self.centralwidget)
self.Accuracy.setGeometry(QtCore.QRect(500, 300, 221, 41))
font = QtGui.QFont()
font.setFamily("Calibri")
font.setPointSize(12)
self.Accuracy.setFont(font)
self.Accuracy.setAlignment(QtCore.Qt.AlignCenter)
self.Accuracy.setObjectName("Accuracy")
MainWindow.setCentralWidget(self.centralwidget)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)

self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)

def retranslateUi(self, MainWindow):


_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "Live Sensors Data"))
self.label.setText(_translate("MainWindow", "Live Sensors Data"))
self.SkinTemp.setText(_translate("MainWindow", "0"))
self.labelcm.setText(_translate("MainWindow", "Skin Temperature : "))
self.Predict.setText(_translate("MainWindow", "Predict"))
self.Humidity.setText(_translate("MainWindow", "0"))
self.labelcm_2.setText(_translate("MainWindow", "Humidity :"))
self.GSR.setText(_translate("MainWindow", "0"))
self.labelcm_3.setText(_translate("MainWindow", "GSR :"))
self.RoomTemp.setText(_translate("MainWindow", "0"))
self.labelcm_4.setText(_translate("MainWindow", "Room Temperature :"))
self.Status.setText(_translate("MainWindow", "0"))
self.labelcm_5.setText(_translate("MainWindow", "Status :"))
self.Accuracy.setText(_translate("MainWindow", "0"))
self.labelcm_7.setText(_translate("MainWindow", "Accuracy :"))
# User Code
self.timeout = 0
self.check_serial_event()

def check_serial_event(self):
self.timeout += 1
# print (self.timeout)
serial_thread = threading.Timer(1, self.check_serial_event)
if ser.is_open == True:
serial_thread.start()
if ser.in_waiting:
eol = b'\n'
leneol = len(eol)
line = bytearray()
while True:
cc = ser.read(1)
if cc:
line += cc
if line[-leneol:] == eol:
break
else:
break
# print (line)
# print (type(line))
line = line.rstrip()
data = line.decode("utf-8")
global a,b,c,d
a,b,c,d = data.split(",")
self.SkinTemp.setText(a)
self.GSR.setText(b)
self.Humidity.setText(c)
self.RoomTemp.setText(d)
# print (distance)
self.timeout = 0

if self.timeout >= 10:


ser.close()
def predictionfun(self):
df = pd.read_csv("combine.csv")
x = df[['SkinTemp','GSR','Humidity','RoomTemp']]
y = df['Status']

X_train, X_test, y_train, y_test = train_test_split(x, y, random_state=0)


knn = KNeighborsClassifier(n_neighbors=5)  # K = 5, as chosen in Section 4.3.1

knn.fit(X_train, y_train)
X_new = np.array([[float(a), float(b), float(c), float(d)]])
#print(X_new.shape)

self.prediction = knn.predict(X_new)
if self.prediction == 0:
self.prediction='Happy'
if self.prediction == 1:
self.prediction='Sad'
if self.prediction == 2:
self.prediction='Normal'
if self.prediction == 3:
self.prediction='Angry'
self.Status.setText(str(self.prediction))
self.Accuracy.setText(str(knn.score(X_test, y_test)))
#print(prediction)

# We obtain an outcome such as 0, which, per the label mapping above,
# corresponds to the Happy emotion.
#print(df['Status'][prediction])

# Overall prediction score for how well our model was able to correctly
# predict the emotion labels on the test set.
#print(knn.score(X_test, y_test))

if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
