CHILD CARE

PROJECT REPORT

Submitted by

ANU RAJAN (ICE20MCA-2014)

to

the APJ Abdul Kalam Technological University

in partial fulfillment of the requirements for the award of the Degree of

Master of Computer Applications

Department of Computer Applications

Ilahia College of Engineering and Technology, Mulavoor P.O


Muvattupuzha
July 2022
DECLARATION

I, the undersigned, hereby declare that the report CHILD CARE, submitted in partial
fulfillment of the requirements for the award of the degree of Master of Computer
Applications of the APJ Abdul Kalam Technological University, Kerala, is a bonafide
work done by me under the supervision of Asst. Prof. Sheena K M. This submission
represents my ideas in my own words, and where ideas or words of others have been
included, I have adequately and accurately cited and referenced the original sources. I
also declare that I have adhered to the ethics of academic honesty and integrity and have
not misrepresented or fabricated any data, idea, fact, or source in my submission. I
understand that any violation of the above will be cause for disciplinary action by the
institute and/or the University and can also evoke penal action from the sources which
have thus not been properly cited or from whom proper permission has not been
obtained. This report has not previously formed the basis for the award of any degree,
diploma, or similar title of any other University.

Place: Mulavoor Anu Rajan


Date:
DEPARTMENT OF COMPUTER APPLICATIONS
ILAHIA COLLEGE OF ENGINEERING AND TECHNOLOGY,
MULAVOOR, MUVATTUPUZHA

CERTIFICATE

This is to certify that the report entitled CHILD CARE submitted by Anu Rajan
(ICE20MCA-2014) to the APJ Abdul Kalam Technological University in partial
fulfillment of the requirements for the award of the Degree of Master of Computer
Applications is a bonafide record of the project work carried out by her under our
guidance and supervision. This report, in any form, has not been submitted to any
other University or Institute for any purpose.

Asst. Prof. Sheena K M        Asst. Prof. Livin Miranda        Asst. Prof. Anoop R
(Project Guide)               (Project Coordinator)            (Head of the Department)
ACKNOWLEDGEMENT

First and foremost, I thank God Almighty for His divine grace and blessings in making
all this possible. It is my privilege to render heartfelt thanks and gratitude to our most
beloved Principal, Prof. Dr. Abdul Gafur M, for providing excellent digital library
facilities and permission to use them. I am deeply thankful to our Head of the
Department, Asst. Prof. Anoop R, for his support and encouragement. I express my
sincere gratitude to my project guide, Asst. Prof. Sheena K M, Department of Computer
Applications, for her motivation, assistance, and support, and to the project coordinator,
Asst. Prof. Livin Miranda, Department of Computer Applications. I am also thankful to
all the staff members of the Department of Computer Applications for their support.
Last but not least, I thank all my friends and family for their valuable feedback from
time to time, as well as their help and encouragement.

Anu Rajan

ABSTRACT

Modern technologies can help in solving contemporary problems as well as long-standing
challenges. One of the crucial issues in parenting is babysitting. The only method of
communication for a newborn is crying; newborns cry when they are uncomfortable for
several reasons. Looking after a newborn is therefore an exhausting task, because a baby
needs attention 24 hours a day, especially for parents working away from home. The
proposed system performs real-time monitoring of the newborn to identify emotions and
frame of mind using modern technologies such as artificial intelligence and IoT. The
proposed system also comforts the baby using techniques such as playing a prerecorded
voice of the parents. The system detects when the baby is crying using machine learning
models trained on datasets, determines whether the baby is hungry using a hand-recognition
technique, and delivers sleep detection and a sleep summary report using a deep
learning approach.

CONTENTS

ACKNOWLEDGEMENT
ABSTRACT
LIST OF FIGURES
LIST OF TABLES
ABBREVIATIONS
CHAPTER 1 INTRODUCTION
    1.1 Existing System
    1.2 Proposed System
    1.3 Scope
CHAPTER 2 LITERATURE REVIEW
    2.1 IoT-BBMS: Internet of Things-based Baby Monitoring System
    2.2 Design and Development of a Smart Baby Monitoring System
    2.3 Discriminating Features of Infant Cry Acoustic Signal for Automated Detection of Cause of Crying
    2.4 Analysis on IoT Based Smart Cradle System with an Application for Baby Monitoring
    2.5 Smart Digital Parenting Using IoT
    2.6 Baby Cry Recognition in Real-World Conditions
    2.7 Why is my Baby Crying? An in-depth Analysis of Paralinguistic Features and Classical Machine Learning Algorithms for Baby Cry Classification
    2.8 Real-Time Baby Crying Detection in the Noisy Everyday Environment
CHAPTER 3 REQUIREMENT ANALYSIS
    3.1 Functionalities
        3.1.1 Cry Detection
        3.1.2 Hand Sign Detection
        3.1.3 Emotion Detection
        3.1.4 Sleep Detection
    3.2 Terminologies
    3.3 Hardware Requirements
    3.4 Software Requirements
    3.5 Other Prerequisites
    3.6 Modules
        3.6.1 Cry Detection
        3.6.2 Hand Sign Detection
        3.6.3 Emotion Detection
        3.6.4 Sleep Summary
CHAPTER 4 SYSTEM DESIGN
    4.1 Methodology
    4.2 UML Diagrams
        4.2.1 Use Case Diagram
            4.2.1.1 Register
            4.2.1.2 Login
            4.2.1.3 Upload Video
            4.2.1.4 Emotion and Hand Gesture Detection
            4.2.1.5 Detect Cry
            4.2.1.6 Sleep Detection
            4.2.1.7 Sleep Summary
            4.2.1.8 Notification
        4.2.2 Activity Diagram
            4.2.2.1 Login or Register
            4.2.2.2 Emotion Detection
            4.2.2.3 Cry Detection
            4.2.2.4 Gesture Detection
            4.2.2.5 Sleep Detection
        4.2.3 Class Diagram
            4.2.3.1 Parent Registration Class
            4.2.3.2 Login Class
            4.2.3.3 Monitoring Class
            4.2.3.4 Notification Class
            4.2.3.5 Report Class
        4.2.4 Sequence Diagram
        4.2.5 Deployment Diagram
    4.3 Data Flow Diagram
    4.4 User Interface
        4.4.1 User Registration Interface
        4.4.2 Login Interface
        4.4.3 User Home Interface
        4.4.4 Notification Tab
        4.4.5 Email Notification
    4.5 Database Design
    4.6 Project Workflow
CHAPTER 5 PROJECT WORK PLANNING
    5.1 Gantt Chart
CHAPTER 6 IMPLEMENTATION AND TESTING
    6.1 Implementation
        6.1.1 Registration and Login Module
        6.1.2 Training Module
        6.1.3 Detection of Input Video
    6.2 Testing
        6.2.1 Test Case
        6.2.2 Unit Testing
        6.2.3 Integration Testing
        6.2.4 Validation Testing
        6.2.5 Black Box Testing
        6.2.6 White Box Testing
        6.2.7 Test Cases
    6.3 Screenshots
CHAPTER 7 CONCLUSION
REFERENCES
APPENDIX

LIST OF FIGURES

4.1 Proposed System
4.2.1 Use Case Diagram
4.2.2.1 Activity Diagram for Login
4.2.2.2 Activity Diagram for Emotion Detection
4.2.2.3 Activity Diagram for Cry Detection
4.2.2.4 Activity Diagram for Hand Gesture Detection
4.2.2.5 Activity Diagram for Sleep Detection
4.2.3 Class Diagram
4.2.4 Sequence Diagram
4.2.5 Deployment Diagram
4.3.1 Context Level and Level 1 DFD
4.3.2 Level 2 DFD
4.3.3 Level 3 DFD
4.4.1 User Registration Interface
4.4.2 User Login Interface
4.4.3 Child Care Home Page Interface
4.4.4 Notification Page
4.4.5 Email Notification
4.5 Database Design
4.6 Workflow Diagram
5.1 Gantt Chart
6.1 CNN
6.3.1 Monitor the Input Video
6.3.2 Display Notification
6.3.3 Email Notification
6.3.4 Admin Login Interface
6.3.5 Admin Interface
6.3.6 Add User Interface
6.3.7 Count of Users and User Details
6.3.8 Count for Sleep Time
6.3.9 Sleep Summary

LIST OF TABLES

4.5 Database for Child Care
6.3.1 Test Case for User Login Page
6.3.2 Test Case for Home Page
6.3.3 Test Case for Video Detection

ABBREVIATIONS

AI    Artificial Intelligence
BBMS  Baby Monitoring System
CNN   Convolutional Neural Network
IoT   Internet of Things
NLTK  Natural Language Toolkit
STE   Short-Time Energy
ZCR   Zero-Crossing Rate

CHAPTER 1

INTRODUCTION

Babies are precious gifts from God, so we must be very attentive to our baby. Our presence
and care are the most valuable things a baby deserves. In the modern world, however, joint
families have given way to nuclear families, both parents have started working, and for many
other reasons we cannot give our full attention to babies. The baby monitoring system is
introduced to ease this difficult situation. As the title indicates, the system is intended to
create our presence around the baby so that we are always notified and alerted about the
baby's condition. Newborns communicate by crying; this is how a newborn expresses his or
her physical and emotional state and needs. The reasons a baby cries include hunger,
tiredness, pain, discomfort, colic, eructation, flatulence, the need for attention, and so on. As
a parent, one wants to act as fast as possible when the child is crying, but parents do not
always understand why their baby is crying and become frustrated. Gestures can originate
from any bodily motion or state but commonly originate from the face or hand: a folded
hand represents hunger and an open hand represents a full stomach. The system recognizes
the facial emotion of the baby, automatically plays music or stories for the baby, and creates
a sleeping report. As a simple example, consider the baby in the bedroom while the parents
are working in the garden or kitchen, or resting; sudden changes in the baby's emotion and
state alert them. The monitoring system analyses the baby's condition and creates a parent's
presence through various sound outputs. It also detects the hunger state of the baby and
informs the parent. To avoid confusion, let us be clear that the system is strictly for
monitoring purposes: it cannot replace or do what parents must do, and the parent cannot
leave the baby. They are supposed to stay within easy reach, say the baby in a room and the
parent in the garden.

1.1 EXISTING SYSTEM

A baby monitoring system has been proposed in which an enhanced noise-cancelling system
that monitors the baby and reduces sound pollution was suggested. The main function of
that system is to reduce the noise that might disturb the baby by playing relaxing songs. The
system can also adjust the room's light intensity with the aid of a light sensor. However, our
system has more advanced features, such as supporting real-time monitoring over the IoT
network and vision monitoring using a web camera. Goyal and Kumar introduced an e-baby
cradle that swings automatically when it detects crying and stops swinging when the crying
stops. The speed of the swinging cradle can be controlled based on the user's need. It has an
alarm embedded in the system, which notifies the user when either of two conditions occurs.
First, the alarm goes off when the mattress is wet, indicating that the mattress should be
changed. Second, when the baby does not stop crying for a certain time, the alarm alerts the
parents to attend to their baby. However, it is only applicable when parents are near the
cradle, because it only uses a buzzer alarm, the sound of which might frighten the baby.
Parents cannot monitor their baby when they are away from home, for example when at
work or when traveling to other places. A similar automatic baby monitoring system was
proposed in . The authors developed a low-budget system that swings the cradle when a
crying sound is detected, and the cradle stops when the baby stops crying. The built-in alarm
goes off under either of the following conditions: the mattress is wet, or the baby does not
stop crying after a certain period. A video camera is placed above the cradle to monitor the
baby. However, the parents can only receive notifications via SMS and cannot control the
system.

1.2 PROPOSED SYSTEM

This proposed system is a breakthrough technology that offers parents the ability to check on
their baby from afar via custom smart alerts for greater peace of mind, all in a playful design.
It offers a comprehensive set of smart features, including baby cry detection, sleep detection
and sleep summary, and speaker capabilities to record singing or storytelling and replay it
with astonishing quality. Advanced sleep-cycle insights and sleep-quality reports help
parents track habits and development, and an easily understandable software interface
allows parents to check on the baby and receive activity notifications.

1.3 SCOPE
The proposed system is a website that can take care of newborn babies. It detects hunger,
analyses the baby's facial expressions, and notifies the parents; it also generates a sleeping
summary report. The proposed system is a trusted friend and an extra pair of eyes to help
make parenting a little bit easier within those precious first few years. It is an all-in-one
intelligent companion for a baby or toddler. This system will be the beginning of a new era
of babysitting.

CHAPTER 2

LITERATURE REVIEW

2.1 IoT-BBMS: Internet of Things-based Baby Monitoring System for Smart Cradle

2019 IEEE 8th International Conference, Malaysia. Authors: Waheb A. Jabbar, Hiew Kuet
Shang, Saidatul N. I. S. Hamid, Akram A. Almohammedi

The Internet of Things-based Baby Monitoring System (IoT-BBMS) is proposed as an
efficient and low-cost IoT-based system for real-time monitoring. The authors propose a
new algorithm for the system that plays a key role in providing better baby care. In the
designed system, a Node Micro-Controller Unit (NodeMCU) controller board is used to
gather the data read by the sensors and upload it via Wi-Fi to the AdaFruit MQTT server.
The proposed system uses sensors to monitor the baby's vital parameters, such as ambient
temperature, moisture, and crying. The system architecture consists of a baby cradle that
automatically swings using a motor when the baby cries. Parents can also monitor their baby
through an external web camera and remotely switch on the lullaby toy located on the baby
cradle via the MQTT server to entertain the baby. Finally, the baby monitoring system is
shown by the prototype to work effectively in monitoring the baby's situation and
surrounding conditions.

2.2 Design and Development of a Smart Baby Monitoring System based on Raspberry
Pi and Pi Camera

2017 IEEE 4th International Conference, Dhaka, Bangladesh. Authors: Aslam Forhad
Symon, Nazia Hassan, Humayun Rashid

The Smart Baby Monitoring System based on Raspberry Pi and Pi Camera can detect the
baby's motion and sound. Video output of the baby's present position can be displayed on a
monitor so that the mother or another responsible person can watch the baby while away
from him or her. This baby monitoring system is capable of automatically detecting the
motion and crying condition of the baby. The Raspberry Pi B+ module forms the total
control system of the hardware, a condenser microphone is used to detect the baby's crying,
a PIR motion sensor is incorporated to detect the baby's movement, and a Pi camera is used
to capture the baby's motion. A display is used to show the video output of the sleeping
baby. Finally, the developed hardware was tested to analyse its capability of detecting the
motion and crying sound of the baby as well as producing the video output.

2.3 Discriminating Features of Infant Cry Acoustic Signal for Automated Detection of
Cause of Crying
2016 10th International Symposium on Chinese Spoken Language Processing (ISCSLP),
Tianjin, China. Author: Vinay Kumar Mittal

The infant cry signal is a biomedical acoustic signal that is usually high-pitched. Crying is
the infant's only means of communication, and its production involves significant changes in
signal characteristics. Rapid changes in its short-time segments possibly carry information
about the cause of the cry that a mother can perceive. The instantaneous fundamental
frequency (F0) of infant cries is much higher than for adults, necessitating different signal
processing methods for analysing infant cry signals. In this paper, the production
characteristics of infant cry signals due to pain versus discomfort are discriminated,
exploring features that may help towards developing automated diagnostic systems. The
production features F0, strength of excitation (SoE), and signal energy (E) are used for
analysing the 'Infant Cry Signals Database' collected for the study. The excitation source
characteristics are derived using a modified zero-frequency filtering method. Spectrograms
of the acoustic signal and the excitation source characteristics are compared to validate the
changes in the features. Larger fluctuations in the features F0, SoE, and E are observed for
infant cries due to pain than for those due to discomfort.

2.4 Analysis on IoT Based Smart Cradle System with an Android Application for Baby
Monitoring

2019 1st International Conference on Advanced Technologies in Intelligent Control,
Environment, Computing & Communication Engineering (ICATIECE), Bangalore, India.
Authors: S. Kavitha, R. R. Neela, M. Sowndarya, Madhuchandra

The IoT-based smart cradle system with an Android application is built on the Internet of
Things: a system of interrelated computing devices and mechanical and digital machines
that are provided with the ability to transfer data over a network without requiring human
interaction. This brings about automation of things, achieved through sensor and actuator
devices. The paper presents a survey of the various sensors and actuators used in the
implementation of a smart cradle.

2.5 Smart Digital Parenting Using Internet of Things

2018 International Conference on Soft-computing and Network Security (ICSNS),
Coimbatore, India. Authors: P. Hemalatha, S. Matilda

A multitasking smart baby monitoring ideology using the Internet of Things (IoT) is
proffered to integrate the prompting process in a convergence application by adopting
existing communication technologies such as Wi-Fi, Bluetooth, ZigBee, and 2G/3G/4G. A
customary approach does not proficiently give ongoing updates of infant wellbeing
parameters; hence, continuous observation of physiological parameters and warning updates
are needed by mothers, fathers, babysitters, and creches. The proposed wellness-checking
framework for babies can improve the quality of baby parenting.

2.6 Baby Cry Recognition in Real-World Conditions


2016 39th International Conference on Telecommunications and Signal Processing (TSP),
Vienna, Austria. Authors: Ioana-Alina Bănică, Horia Cucu, Andi Buzo, Corneliu Burileanu

The baby cry recognition system proposed here is a fully automatic system that attempts to
discriminate between different types of cries. The baby cry classification system is based on
Gaussian Mixture Models and i-vectors. The evaluation is performed on an audio database
comprising six types of cries (colic, eructation, discomfort, hunger, pain, tiredness) from
127 babies. Neonatologists or pediatricians can distinguish between different types of cries
and can find a pattern in each type of cry; unfortunately, this is a real problem for parents
who want to act as fast as possible to comfort the newborn. The system shows promising
results despite the difficulty of the task.

2.7 Why is my Baby Crying? An in-depth Analysis of Paralinguistic Features and


Classical Machine Learning Algorithms for Baby Cry Classification

2018 41st International Conference on Telecommunications and Signal Processing (TSP),
Athens, Greece. Authors: Rodica Ileana Tuduce, Horia Cucu, Corneliu Burileanu

Parents strive to identify and address an infant's needs before hysterical crying sets in.
However, first-time parents usually fail, which leads to frustration and feelings of
helplessness. In this context, the work focuses on creating an automatic system able to
distinguish between different infant needs based on crying. The system extracts various sets
of paralinguistic features from baby-cry audio signals and trains various rule-based or
statistical classifiers. The authors evaluate and compare the results in depth, obtaining up to
70% accuracy on the evaluation dataset.

2.8 Real-Time Baby Crying Detection in the Noisy Everyday Environment


2020 11th IEEE Control and System Graduate Research Colloquium (ICSGRC), Shah Alam,
Malaysia. Authors: Lee Sze Foo, Wun-She Yap, Yan Chai Hum, Zulaikha, Hock Woon Hon,
Yee Kai Tee

This study proposed a real-time baby crying detection algorithm that monitors a noisy
environment for baby crying on a second-by-second basis. The algorithm detects crying
through five acoustic features: average frequency, pitch frequency, short-time energy (STE)
acceleration, zero-crossing rate (ZCR), and Mel-frequency cepstral coefficients (MFCCs).
The thresholds for each feature used to classify an audio segment as "crying" were set by
extracting and examining the distribution of the features over noise-free crying and non-
crying samples collected from an audio database freely available on the Internet. The
algorithm was then tested using noisy crying and non-crying samples downloaded from
YouTube, where an accuracy of 89.20% was obtained for offline testing. To test the
robustness and performance of the designed algorithm, online testing was also conducted
using three custom-composed noisy samples containing both crying and non-crying
segments. The online accuracy obtained was 80.77%, lower than the offline accuracy,
mainly because of the extra noise introduced by the experimental settings. With more
advanced equipment, it should be possible to bring the online accuracy closer to the offline
accuracy, paving the way to use the designed algorithm for reliable real-time second-by-
second baby crying detection.

CHAPTER 3

REQUIREMENT ANALYSIS

3.1 FUNCTIONALITIES

The proposed system is less complex and simple. Our system is fully software based.The
result is very accurate than the existing system. It has less radiation than the existing
system.In Existing System Production characteristics of infant cry signals due to pain vs.
discomfort are discriminated, exploring features that may help towards developing automated
diagnostic systems. The baby cry classification system is based on Gaussian Mixture Models
and i- vectors. The evaluation is performed on an audio database comprising six types of cries
(colic, eructation, discomfort, hunger, pain, tiredness) from 127 babies.

For parents and finally this system allows Wireless Artificial Intelligence brain talks to the
baby's sensor band to report anomalies,prevent problems and guarantee youare alerted to an
emergency.

System architecture is divided into various modules. First module is the collection of dataset.
Here the dataset is the sound produced in different scenario.The dataset is preprocessed to
collect the desired data. The data is trained using machine learning and moved to the feature
file module where the feature vectors contain multiple elements about an object. There exist a
testing data modules which contains sounds used for testing the system. The test data is
preprocessed and compare the data with dataset andfinally the output is predicted.

3.1.1 CRY DETECTION

Automatic detection of a baby cry in audio signals is an essential step in applications such as
remote baby monitoring. The first algorithm is a low-complexity logistic regression classifier,
used as a reference. To train this classifier, we extract features such as Mel-frequency
cepstral coefficients, pitch, and formants from the recordings. The second algorithm uses a
dedicated convolutional neural network (CNN), operating on a log Mel-filter bank
representation of the recordings. Performance evaluation of the algorithms is carried out
using an annotated database containing recordings of babies (0-6 months old) in domestic
environments. In addition to baby cries, these recordings contain various types of domestic
sounds, such as parents talking and doors opening. The CNN classifier is shown to yield
considerably better results than the logistic regression classifier, demonstrating the power of
deep learning when applied to audio processing. If the baby cries continuously without being
hungry, the system will play music for the baby, and can also play stories in the parents'
recorded voice.
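
As a rough illustration of the reference classifier described above, the sketch below extracts MFCC features with librosa and trains a logistic regression. The folder layout, sampling rate, and label names are assumptions for illustration, not the project's actual dataset.

# Sketch of the low-complexity reference classifier: averaged MFCC
# features + logistic regression. Paths and labels are assumptions.
import glob
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def mfcc_features(path, n_mfcc=13):
    # Load the clip and average MFCCs over time into one fixed-length vector.
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Assumed layout: data/not_cry/*.wav and data/cry/*.wav
X, labels = [], []
for label, folder in enumerate(["data/not_cry", "data/cry"]):
    for path in glob.glob(folder + "/*.wav"):
        X.append(mfcc_features(path))
        labels.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(labels), test_size=0.2, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("cry-detection accuracy:", clf.score(X_test, y_test))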

3.1.2 HAND SIGN DETECTION

Gesture recognition is a topic in computer science and language technology with the goal of
interpreting human gestures via mathematical algorithms. Gestures can originate from any
bodily motion or state but commonly originate from the face or hand. We aim to classify
different images of hand gestures, which means that the computer will have to "learn" the
features of each gesture and classify them correctly.

The human hand gestures are detected and recognized using a convolutional neural network
(CNN) classification approach. This process flow consists of hand region-of-interest
segmentation using a mask image, finger segmentation, normalization of the segmented
finger image, and finger recognition using a CNN classifier. The hand region of the image is
segmented from the whole image using mask images. The adaptive histogram equalization
method is used as an enhancement method for improving the contrast of each pixel in the
image. A connected-component analysis algorithm is used to segment the fingertips from
the hand image. The segmented finger regions from the hand image are given to the CNN
classification algorithm, which classifies the image into various classes. Folded-hand images
represent hunger, and open-hand images represent a full stomach.
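
The following sketch illustrates the segmentation steps just described (contrast enhancement with CLAHE, masking, and connected-component analysis) using OpenCV. The file name, the use of Otsu thresholding, and the area cut-off are assumptions.

# Sketch of the hand-segmentation pre-processing step. Thresholds
# and file names are illustrative assumptions.
import cv2

img = cv2.imread("hand.jpg", cv2.IMREAD_GRAYSCALE)

# Adaptive histogram equalization improves per-pixel contrast.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Binary mask separating the hand from the background.
_, mask = cv2.threshold(enhanced, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Connected-component analysis; each component is a candidate
# finger/hand region to feed into the CNN classifier.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):  # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 500:  # ignore small noise blobs (assumed threshold)
        roi = cv2.resize(enhanced[y:y + h, x:x + w], (64, 64))
        # roi would now be normalized and passed to the CNN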

3.1.3 EMOTION RECOGNITION

Face recognition is a method of identifying or verifying the identity of an individual using
their face. Face recognition systems can be used to identify people in photos, in video, or in
real time. For the detection of the face, the Haar cascade classifier is used here. The
classifier is trained with positive images (images containing faces) and negative images
(images lacking faces). After training of the classifier is done, it is applied to a region of
interest (ROI) in an image, and it scans for the specific object.

Facial emotion recognition is the process of detecting human emotions from facial
expressions. The human brain recognizes emotions automatically, and software has now
been developed that can recognize emotions as well. A Haar cascade method is used to
detect a face in the input image as the basis for the extraction of the eyes and mouth, and
Sobel edge detection is then applied to obtain the characteristic values. Through neural
network classifier training, six different emotional categories are obtained.
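
A minimal sketch of this face-detection front end, using OpenCV's pretrained Haar cascade to find the face ROI and Sobel filtering to extract edge features from it; the input file name is an assumption.

# Haar cascade face detection followed by Sobel edge extraction.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("baby.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
    face = gray[y:y + h, x:x + w]
    # Sobel gradients highlight eye/mouth contours for the
    # downstream emotion classifier.
    edges_x = cv2.Sobel(face, cv2.CV_64F, 1, 0, ksize=3)
    edges_y = cv2.Sobel(face, cv2.CV_64F, 0, 1, ksize=3)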

3.1.4 SLEEP DETECTION

Overnight polysomnograms are conducted at hospitals and major medical centers; during
such a study, medical staff watch the child sleep. Some systems even provide sleep analytics
that can help parents gain insights into their baby's sleep patterns. In the proposed system,
deep learning approaches are designed to automatically detect sleep and create a sleeping
summary for each day, summarizing the sleeping time of the baby.

3.2 TERMINOLOGIES

Deep learning: Deep learning is a machine learning technique that teaches computers to do
what comes naturally to humans: learn by example. Deep learning is a key technology behind
driverless cars, enabling them to recognize a stop sign or to distinguish a pedestrian from a
lamppost.

Convolutional neural network: In deep learning, a convolutional neural network (CNN, or
ConvNet) is a class of deep neural networks most commonly applied to analyzing visual
imagery. They are also known as shift-invariant or space-invariant artificial neural networks
(SIANN), based on their shared-weights architecture and translation-invariance
characteristics.

Haar cascade classifier: A machine learning based approach in which a cascade function is
trained from a large number of positive and negative images and is then used to detect
objects in other images. The algorithm needs many positive images (images of faces) and
negative images (images without faces) to train the classifier.

Sobel edge detection: The Sobel filter is used for edge detection. It works by calculating the
gradient of image intensity at each pixel within the image: it finds the direction of the largest
increase from light to dark and the rate of change in that direction.

Artificial intelligence: Artificial intelligence (AI) refers to the simulation of human
intelligence in machines that are programmed to think like humans and mimic their actions.
The term may also be applied to any machine that exhibits traits associated with a human
mind, such as learning and problem-solving.

Natural Language Toolkit: The Natural Language Toolkit (NLTK) is a platform used for
building Python programs that work with human language data for statistical natural
language processing (NLP). It contains text processing libraries for tokenization, parsing,
classification, stemming, tagging, and semantic reasoning.

Machine learning classifiers: A classifier is any algorithm that sorts data into labeled classes,
or categories of information. A simple practical example is a spam filter that scans incoming
"raw" emails and classifies them as either "spam" or "not spam". Classifiers are a concrete
implementation of pattern recognition in many forms of machine learning.

3.3 HARDWARE REQUIREMENTS

 Processor : Intel i3 or above
 Hard Disk : 150 GB

3.4 SOFTWARE REQUIREMENTS


 Platform : PyCharm, Spyder
 Language : Python
 Database : SQLite 3

3.5 OTHER PREREQUISITES

 NumPy
 Pandas
 Django
 NLTK
 TextBlob
 imutils
 OpenCV
 Keras
 TensorFlow
 scikit-learn

3.6 MODULES

3.6.1 Cry Detection:


 If the baby cries continuously without being hungry, the system plays music for the
baby and can also play stories in the parents' recorded voice.
 The input is a video segment whose audio is labelled as crying.
 The process includes requesting the system to play music.
 The output is music being played.

3.6.2 Hand Sign Detection:


 The segmented finger regions from the hand image are given to the CNN
classification algorithm, which classifies the image into various classes.
 The input is given as photos and videos.
 The process includes a convolutional neural network (CNN) classification approach.
 The output is a notification stating whether the baby is hungry or not.

3.6.3 Emotion Detection:


 This system can be used to identify people in photos, video, or in real time.
 The input is given as photos and videos.
 A Haar cascade classifier is used here.
 The output is the recognized facial emotion.

3.6.4 Sleep Summary:


 Deep learning approaches detect whether the baby is asleep in the uploaded video.
 The input is the uploaded video.
 The process includes eye-state detection and accumulation of the sleep duration.
 The output is a sleeping summary report for each day.

CHAPTER 4
SYSTEM DESIGN

4.1 METHODOLOGY

The system architecture is divided into various modules. The first module is the collection
of the dataset; here the dataset is the sound produced in different scenarios. The dataset is
preprocessed to collect the desired data. The data is trained using machine learning and
moved to the feature file module, where the feature vectors contain multiple elements about
an object. There is also a testing data module, which contains sounds used for testing the
system. The test data is preprocessed and compared with the dataset, and finally the output
is predicted.

Fig 4.1 Proposed System

Automatic detection of a baby cry in audio signals is an essential step in applications such
as remote baby monitoring. The first algorithm is a low-complexity logistic regression
classifier, used as a reference. To train this classifier, we extract features such as
Mel-frequency cepstral coefficients, pitch, and formants from the recordings. The second
algorithm uses a dedicated convolutional neural network (CNN), operating on a log
Mel-filter bank representation of the recordings. Performance evaluation of the algorithms
is carried out using an annotated database containing recordings of babies (0-6 months old)
in domestic environments. In addition to baby cries, these recordings contain various types
of domestic sounds, such as parents talking and doors opening. The CNN classifier is
shown to yield considerably better results than the logistic regression classifier,
demonstrating the power of deep learning when applied to audio processing.

Gestures can originate from any bodily motion or state but commonly originate from the
face or hand. We aim to classify different images of hand gestures, which means that the
computer will have to "learn" the features of each gesture and classify them correctly. The
adaptive histogram equalization method is used as an enhancement method for improving
the contrast of each pixel in an image. A connected-component analysis algorithm is used
to segment the fingertips from the hand image. The segmented finger regions are given to
the CNN classification algorithm, which classifies the image into various classes.
Folded-hand images represent hunger, and open-hand images represent a full stomach.

For the detection of the face, the Haar cascade classifier is used. The classifier is trained
with positive images (images containing faces) and negative images (images lacking
faces). After training of the classifier is done, it is applied to an ROI in an image, and it
scans for the specific object. A Haar cascade method is used to detect a face in the input
image as the basis for the extraction of the eyes and mouth, and Sobel edge detection is
then applied to obtain the characteristic values. Through neural network classifier training,
six different emotional categories are obtained. If the baby cries continuously without
being hungry, the system plays music for the baby and can also play stories in the parents'
recorded voice.

4.2 UML DIAGRAMS

UML (Unified Modelling Language) is a standard language for specifying, visualizing,
constructing, and documenting the artefacts of software systems. UML is a way of
visualizing a software program using a collection of diagrams. Our UML diagrams include
the use case diagram, activity diagram, class diagram, sequence diagram, and deployment
diagram.

Use case diagram: A use case diagram is a representation of a user's interaction with the
system that shows the relationship between the user and the different use cases in which the
user is involved. A use case diagram can identify the different types of users of a system
and the different use cases. Use cases are a set of actions, services, and functions that the
system needs to perform. The "actors" are people or entities operating under defined roles
within the system. There are primary actors and secondary actors: primary actors use the
system to achieve a goal, while secondary actors are those whose assistance is needed for
the system to achieve the primary actor's goals.

Activity diagram: Activity diagrams are mainly used as flowcharts consisting of the
activities performed by the system. The purpose of an activity diagram is to draw the
activity flow of a system. It describes the sequence from one activity to another, and the
parallel, branched, and concurrent flows of the system. The main element of an activity
diagram is the activity itself: a function performed by the system.

Class diagram: The class diagram is static; it represents the static view of an application.
The class diagram describes the attributes and operations of a class and the constraints
imposed on the system.

Sequence diagram: A sequence diagram shows object interactions arranged in time
sequence. It depicts the objects and classes involved in the scenario and the sequence of
messages exchanged between the objects needed to carry out the functionality of the
scenario.

Deployment diagram: A deployment diagram shows the configuration of run-time
processing nodes and the components that live on them. A deployment diagram is a kind of
structure diagram used in modelling the physical entities.

4.2.1 USECASE DIAGRAM

Figure 4.2.1 Use case diagram

The use case diagram of this system has only one actor: the parent. The parent shares the
video with the system once he or she successfully registers and logs in to the website. The
different detections are then performed, and the result is given back to the parent.

4.2.1.1 Register

The user signs up to the website by giving details such as name, email, and password.

4.2.1.2 Login

The parent can log in by giving the username and password.

4.2.1.3 Upload video

The parent can upload a video of the baby for performing the detections.

4.2.1.4 Emotion and hand gesture detection

These two detections are similar; each has its own dataset of images from which a training
model is built. In the monitoring section, the input video is preprocessed and then compared
with the training model, and the output is predicted.

4.2.1.5 Detect cry

For model building there is an audio dataset, which is first converted into PNG histogram
images; this image dataset is preprocessed and used to build the training model, which is
then saved. In the monitoring phase, the audio of the input video is converted into PNG
images, compared with the trained model, and the output is predicted.

4.2.1.6 Sleep detection

Here, sleep is detected using a cascade classifier, and the duration of sleep is recorded.

4.2.1.7 Sleep summary

The saved duration of sleep can be viewed by the parent.

4.2.1.8 Notification

The parent will receive an email notification when the baby is hungry or crying.

4.2.2 ACTIVITY DIAGRAM

4.2.2.1 Login/Register

Figure 4.2.2.1 Login/Register

Login/Register: first the parent has to register on the website by giving details such as name,
email, and password. The user can then log in, and after validation the parent can upload a
video and also check the sleep summary.

4.2.2.2 Emotion Detection


Emotion detection: this figure represents the activity diagram of emotion detection. At the
time of model building, a set of images is given as the dataset; these are preprocessed and
the model is trained. This model is then integrated with the website. At the time of
monitoring, the activities are preprocessing the input video and predicting the emotion from
the baby's face.

Fig 4.2.2.2 Emotion detection

4.2.2.3 Cry Detection

Cry detection: for cry detection the activities are, in sequence: first the audio dataset is
converted into PNG image histograms and saved; then that image dataset is preprocessed,
resized, and converted to arrays; then the model is built and saved. For monitoring, the
audio of the input video is extracted, converted into PNG images, preprocessed, and
compared with the trained model, and the result is predicted.
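
The audio-to-image step could look like the sketch below, which renders each clip as a mel-spectrogram PNG. The report calls these PNG histograms; the mel-spectrogram rendering, file names, and librosa usage are assumptions about one common way to realise this step.

# Convert a cry clip into a spectrogram image for the CNN pipeline.
import librosa
import matplotlib.pyplot as plt
import numpy as np

y, sr = librosa.load("cry_sample.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=y, sr=sr)
mel_db = librosa.power_to_db(mel, ref=np.max)

# Save the spectrogram as a plain PNG, no axes or margins.
plt.figure(figsize=(2, 2))
plt.axis("off")
plt.imshow(mel_db, aspect="auto", origin="lower")
plt.savefig("cry_sample.png", bbox_inches="tight", pad_inches=0)
plt.close()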

Fig 4.2.2.3 Cry detection (Step 1: training; Step 2: monitoring)

4.2.2.4 Gesture Detection

Fig 4.2.2.4 Hand gesture detection

Gesture detection: for hand gesture recognition the activities are: first the dataset is
preprocessed, then the images are resized to increase clarity, then each image is converted
into an array, and then the model is trained, evaluated, and saved. At the time of monitoring
the baby, the saved model is used to detect the hand: if the prediction is "hand open", the
output is that the baby has a full stomach; if the prediction is "hand closed", the system
infers that the baby is hungry.
4.2.2.5 Sleep detection

Fig 4.2.2.5 Sleep detection

Sleep detection: the sleep summary activity starts with an XML cascade file that locates the
left and right eyes in the input video; the image is preprocessed, and the system detects
whether the baby is asleep. If the baby is asleep, the duration is calculated as the time
between the eyes closing and opening.
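
A minimal sketch of this duration logic, assuming OpenCV's pretrained eye cascade and treating frames in which no open eyes are visible as sleep; the video path and detection parameters are assumptions.

# Accumulate sleep time as the span between eyes closing and opening.
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("baby_video.mp4")
sleep_start, total_sleep = None, 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, 1.3, 5)
    t = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0  # timestamp in seconds
    if len(eyes) == 0:             # no open eyes detected -> asleep
        if sleep_start is None:
            sleep_start = t
    elif sleep_start is not None:  # eyes reappeared -> awake
        total_sleep += t - sleep_start
        sleep_start = None

cap.release()
print(f"estimated sleep time: {total_sleep:.1f} s")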

4.2.3 CLASS DIAGRAM

Fig 4.2.3 Class diagram

4.2.3.1 Parent Registration Class


Username: the email address serves as the username of the parent.
Password: the field that stores the password.
Function
Validate(): this function validates the details entered by the parent at the time of
registration.
4.2.3.2 Login Class
Id: this field reads the username of the parent.
Password: this field reads the password of the parent.
Function
Activatesystem(): the parent can upload a video to start processing.
4.2.3.3 Monitoring Class
Image: there are different datasets for each detection. For emotion detection there are two
sets of data, one with images of happy babies and the other with images of sad babies; hand
gesture detection also has two datasets, images of babies with hands open and hands closed.

Function
Handsign detection(): Gesture recognition is a topic in computer science and language
technology with the goal of interpreting human gestures via mathematical algorithms. Gestures
can originate from any bodily motion or state but commonly originate from the face or hand. We
will be aiming to classify different images of hand gestures, which means that the computer will
have to “learn” the features of each gesture and classify them correctly.
Cry detection(): Automatic detection of a baby cry in audio signals is an essential step in
applications such as remote baby monitoring. The first algorithm is a low-complexity logistic
regression classifier, used as a reference. To train this classifier, we extract features such as
Mel-frequency cepstral coefficients, pitch, and formants from the recordings. The second
algorithm uses a dedicated convolutional neural network (CNN), operating on a log
Mel-filter bank representation of the recordings.

Emotion detection(): Face recognition is a method of identifying or verifying the identity of
an individual using their face. Face recognition systems can be used to identify people in
photos, in video, or in real time. For the detection of the face, the Haar cascade classifier is
used here. The classifier is trained with positive images (images containing faces) and
negative images (images lacking faces). After training of the classifier is done, it is applied
to an ROI in an image, and it scans for the specific object.

4.2.3.4 Notification

Function

Notifyparent(): the parent receives an email notification according to the result of
processing the input video uploaded by the parent.
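
A minimal sketch of how Notifyparent() could be realised with Django's mail API; the subject line, sender address, and message wording are assumptions.

# Sketch of the email notification step (requires configured Django
# email settings; addresses and wording are assumptions).
from django.core.mail import send_mail

def notify_parent(parent_email, detection):
    # detection is e.g. "crying" or "hungry", produced by the
    # monitoring pipeline.
    send_mail(
        subject="Child Care alert",
        message=f"Your baby appears to be {detection}.",
        from_email="childcare@example.com",
        recipient_list=[parent_email],
        fail_silently=False,
    )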

Play music

Function

Play music(): if the baby cries continuously without being hungry, the system plays music
for the baby and can also play stories in the parents' recorded voice.

4.2.3.5 Report

Function

Sleepsummary(): deep learning approaches automatically detect sleep and create a sleeping
summary for each day, summarizing the sleeping time of the baby.

4.2.4 SEQUENCE DIAGRAM

Fig 4.2.4 Sequence diagram

A sequence diagram depicts interactions between objects in sequential order, i.e., the order
in which these interactions take place. We can also use the terms event diagrams or event
scenarios to refer to a sequence diagram. Sequence diagrams describe how, and in what
order, the objects in a system function. These diagrams are widely used by business people
and software developers to document and understand requirements for new and existing
systems.

The parent first sends a registration request to the server; after the credentials are stored
successfully, the parent can log in, a process that is also validated by the system. The parent
can then upload the input video. The input video is processed for cry detection, emotion
detection, hand sign detection, and sleep summary by the system, and the corresponding
output is given to the parent.

4.2.5 DEPLOYMENT DIAGRAM

Fig 4.2.5 Deployment Diagram

This figure depicts the deployment of the infant observer system. There are only two
components in the system: the parent and the monitoring system. A website built with
HTML is used to interact with the system, which is primarily built with Python and uses
Django as the backend; MySQL is used for managing the database.

4.3 DATA FLOW DIAGRAM

Fig 4.3.1 Context level and level 1 DFD

Fig 4.3.2 level 2 DFD

A Data Flow Diagram (DFD) is a traditional visual representation of the information flows
within a system. A neat and clear DFD can depict the right amount of the system
requirement graphically. It can be manual, automated, or a combination of both. It shows
how data enters and leaves the system, what changes the information, and where data is
stored. The objective of a DFD is to show the scope and boundaries of a system as a whole,
and it may be used as a communication tool between a system analyst and any person who
plays a part in the process, acting as a starting point for redesigning a system. The DFD is
also called a data flow graph or bubble chart. A DFD may be used to represent a system or
software at any level of abstraction; in fact, DFDs may be partitioned into levels that
represent increasing information flow and functional detail. Levels in a DFD are numbered
0, 1, 2, and beyond. Here, we will see four levels in the data flow diagram: the 0-level
(context) DFD, the 1-level DFD, the 2-level DFD, and the 3-level DFD.

The context level of our DFD represents the base data flow between the sole actor in our
system, the parent, who uploads a video, and the system, which processes it and shares the
observed result back with the parent. In level 1 of the DFD there are two processes, training
the models and the monitoring system; the trained models are passed to the monitoring
system, which generates the results using them. In level 2 we have the expansion of level 1:
training is expanded into three parts, namely cry detection, emotion detection, and hand
gesture recognition. Cry detection has an audio dataset, whereas both emotion and hand
gesture detection have image datasets. In the expansion of the monitoring system there are
five parts: cry detection, emotion detection, hand sign recognition, sleep summary, and
remedial measures. The outputs of the detections are given to the remedial-measures
process, and the result is shown accordingly. In level 3 of the DFD we have the expansion
of level 2: the processes in the training of cry detection are first to convert the audio dataset
into PNG histograms, then to preprocess these images, and then to build the model. Both
emotion and hand gesture detection have their own independent datasets, which are
preprocessed before the model is built.

Fig 4.3.3 Level 3 DFD

In the monitoring system, for cry detection the processes are: first, the audio in the input
video provided by the parent is extracted and converted into images; then, by comparing
with the training model, we get the corresponding output. For both emotion and hand
gesture recognition, the input video is preprocessed and the corresponding output is
generated. The sleep summary is generated using a cascade classifier, and the duration of
sleep is calculated. These outputs are given back to the system to produce the final results
for the parent.

4.4 USER INTERFACE DESIGN

4.4.1 User Registration Interface

Fig 4.4.1 User Registration Interface

The parent can register using first name, last name, email, and password.

4.4.2 User Login Interface

The parent can log in to the system using the username and password. If they are valid, the parent gets into the system.

Fig 4.4.2 User Login Interface

4.4.3 User Home Interface


The Child Care home page has provisions to check notifications and a tab to monitor the baby,
which triggers the monitoring process and gives the appropriate results.

Fig 4.4.3 Home Page Interface

4.4.4 Notification tab

Fig.4.4.4 Notification Page

It displays the notifications generated after the monitoring is performed.

4.4.5 Email notification

Fig 4.4.5 Email Notification

This is the notification that reaches the parent through email.

4.5 DATABASE DESIGN

Fig 4.5 Database design

4.6 PROJECT WORKFLOW

Fig 4.6 Workflow Diagram

The system architecture is divided into various modules. The first module is the collection
of the dataset; here the dataset is the sound produced in different scenarios. The dataset is
preprocessed to collect the desired data. The data is trained using machine learning and
moved to the feature file module, where the feature vectors contain multiple elements about
an object. There is also a testing data module, which contains sounds used for testing the
system. The test data is preprocessed and compared with the dataset, and finally the output
is predicted.

CHAPTER 5

PROJECT WORK PLANNING

5.1 GANTT CHART

A Gantt Chart is a horizontal bar chart used in project management to visually represent a project plan
over time. Modern Gantt charts typically show you the timeline and status as well as who’s responsible
for each task in the project.

Fig 5.1 Gantt Chart

CHAPTER 6
IMPLEMENTATION AND TESTING

6.1 IMPLEMENTATION
In the proposed system we use CNN-based training. The system is trained on a dataset, and
the results are cry detection, hand sign detection, and sleep detection. A website is designed
to capture an image or video and upload it to the site. The input from the user is given to the
trained model, and the model detects the baby's actions. Finally, the result is saved into the
admin module, which is part of the system. The admin can log in to the system using a
username and password to view the predicted labels and the count of users. In the frontend
we use HTML and CSS; in the backend we use Python. The IDE used is PyCharm, and
Anaconda Navigator is the package manager.

6.1.1 Registration & Login module


The user needs to register in the system by giving necessary details such as name and email
id. The user can log in if he/she is authenticated. After a successful login, users can upload
videos for prediction.
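
A minimal sketch of this module using Django's built-in authentication; the form field names and template names are assumptions.

# Sketch of registration and login views with Django auth.
from django.contrib.auth import authenticate, login
from django.contrib.auth.models import User
from django.shortcuts import redirect, render

def register(request):
    if request.method == "POST":
        # Assumed form fields: name, email, password.
        User.objects.create_user(
            username=request.POST["email"],
            email=request.POST["email"],
            password=request.POST["password"],
            first_name=request.POST["name"],
        )
        return redirect("login")
    return render(request, "register.html")

def login_view(request):
    if request.method == "POST":
        user = authenticate(request,
                            username=request.POST["email"],
                            password=request.POST["password"])
        if user is not None:
            login(request, user)
            return redirect("home")  # upload page after a valid login
    return render(request, "login.html")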

6.1.2 Training Module


 Inputs to the module:
 Images from the training dataset (baby emotion and hand-gesture images, and cry
audio converted to PNG images)
 Outputs of the module:
 CNN model
 Software resources used:
 Anaconda3
 Jupyter Notebook

Anaconda is a distribution of the Python and R programming languages for scientific
computing that aims to simplify package management and deployment. JupyterLab is
a web-based interactive development environment for Jupyter notebooks, code, and data.

 Front End: HTML, CSS
HTML (Hyper Text Markup Language) is the standard markup language for creating
web pages; it describes the structure of a web page, which consists of a series of
elements that tell the browser how to display the content. CSS (Cascading Style
Sheets) describes how HTML elements are to be displayed on screen, paper, or in
other media, and it saves a lot of work: it can control the layout of multiple web pages
all at once. External style sheets are stored in CSS files.

Back End: Python & SQL

Python is an easy-to-learn, powerful programming language. It has efficient high-level
data structures and a simple but effective approach to object-oriented programming.
Python's elegant syntax and dynamic typing, together with its interpreted nature, make
it an ideal language for scripting and rapid application development in many areas on
most platforms.

 Algorithms used: CNN

In deep learning, a convolutional neural network (CNN/ConvNet) is a class of deep
neural networks, most commonly applied to analysing visual imagery. It uses a special
technique called convolution. In mathematics, convolution is an operation on two
functions that produces a third function expressing how the shape of one is modified
by the other.

Fig 6.1 CNN

A CNN architecture is formed by a stack of distinct layers that transform the input
volume into an output volume through a differentiable function. A few distinct types of
layers are commonly used (a code sketch follows the package list below):
o Input layer: the input of the whole CNN.
o Convolution layer: a convolutional layer contains a set of filters whose parameters
need to be learned. The height and width of the filters are smaller than those of the
input volume. Each filter is convolved with the input volume to compute an activation
map made of neurons.
o Pooling layer: pooling layers are used to reduce the dimensions of the feature maps.
This reduces the number of parameters to learn and the amount of computation
performed in the network. The pooling layer summarizes the features present in a
region of the feature map generated by a convolution layer.
o Fully connected layer: fully connected layers are simply feed-forward neural
networks, and they form the last few layers of the network. The input to the fully
connected layer is the output from the final pooling or convolutional layer, which is
flattened and then fed into the fully connected layer.
o Logistic layer: the SoftMax/logistic layer is the last layer of the CNN, residing at the
end of the fully connected layers. Logistic is used for binary classification and
SoftMax for multi-class classification.
o Output layer: a non-linearity layer in a convolutional neural network consists of an
activation function that takes the feature map generated by the convolutional layer and
creates the activation map as its output.
 Major library packages
 Keras: holds a consistent and simple API.
 Keras Applications: ready-made deep learning models.
 TensorFlow: provides a collection of workflows to develop and train
models using Python or JavaScript.
 Pillow: an image-reading library.
 Scikit-learn: simple and efficient tools for predictive data analysis,
accessible to everybody and reusable in various contexts.
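
The following sketch assembles a small Keras CNN with the layer types described above; the input size, filter counts, and two-class output are assumptions chosen to match the project's binary detectors.

# Sketch of a CNN stacking the described layers: convolution,
# pooling, fully connected, and a softmax output.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),               # input layer
    layers.Conv2D(32, (3, 3), activation="relu"),  # convolution layer
    layers.MaxPooling2D((2, 2)),                   # pooling layer
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                              # flatten before FC layers
    layers.Dense(128, activation="relu"),          # fully connected layer
    layers.Dense(2, activation="softmax"),         # softmax output layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()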

6.1.3 Detection of Input Video
 Inputs to the module:
 Captured images
 Images obtained from the training set
 Outputs of the module:
 The outputs of the module are cry detection, hunger detection, and sleep detection.
 Hardware and software resources used:
 Anaconda3
 Jupyter Notebook
 Front End: HTML, CSS
 Back End: Python & SQL
 Library packages used:
 NumPy: stands for Numerical Python. Mathematical and logical operations on
arrays can be performed. It holds the image as a multidimensional array to be
passed as input to the model.
 Matplotlib: a collection of command-style functions that make Matplotlib work
like MATLAB.

6.2 TESTING

Software testing is a process to evaluate the functionality of a software application, with
the intent of finding whether the developed software meets the specified requirements
and of identifying defects, ensuring that the product is defect-free and of high quality.

6.2.1 Test Case
A test case is a document which has a set of test data, preconditions, expected results,
and postconditions, developed for a particular test scenario in order to verify
compliance against a specific requirement. The test case acts as the starting point for
test execution; after applying a set of input values, the application has a definitive
outcome and leaves the system at some end point, also known as the execution
postcondition.
6.2.2 Unit Testing
Unit testing is done to check whether the individual modules of the source code are
working properly, that is, testing each and every unit of the application separately by
the developer in the developer's environment.

6.2.3 Integration Testing

Even if the units of software are working fine individually, there is a need to find out
whether the units, if integrated together, would also work without errors. Integration
testing is the process of testing the connectivity or data transfer between a couple of
unit-tested modules.
6.2.4 Validation Testing
Validation testing is the process of evaluating software during or at the end of the
development process to determine whether it satisfies the specified business
requirements. It ensures that the product actually meets the client's needs, and can also
be defined as demonstrating that the product fulfils its intended use when deployed in
an appropriate environment.
6.2.5 Black Box Testing

Also called behavioral, specification-based, or input-output testing, black box testing
is a software testing method in which testers evaluate the functionality of the software
under test without looking at the internal code structure.
6.2.6 White Box Testing

White box testing is based on the application's internal code structure. In white-box
testing, an internal perspective of the system, as well as programming skills, are used
to design test cases. This testing is usually done at the unit level. It is also known as
"structural" testing.

6.2.7 TEST CASES

Table 6.3.1 Test case for User Login page

1. Verify that the user can log in with a valid username and valid password.
   Input data: Username = Shifa@123, Password = 123456
   Expected result: User can log in to the system
   Actual result: User can log in to the system
   Remarks: Success

2. Verify that the user cannot log in with a valid username and an invalid password.
   Input data: Username = Shifa@123, Password = pass
   Expected result: User cannot log in to the system
   Actual result: User cannot log in to the system
   Remarks: Success

3. Verify that the user cannot log in with an invalid username and a valid password.
   Input data: Username = user, Password = admin
   Expected result: User cannot log in to the system
   Actual result: User cannot log in to the system
   Remarks: Success

4. Verify login when both fields are blank and the Login button is clicked.
   Input data: No input data
   Expected result: User cannot log in to the system
   Actual result: User cannot log in to the system
   Remarks: Success

5. Verify the messages for an invalid login.
   Input data: Username = user, Password = pass
   Expected result: "Invalid username and password" to be displayed
   Actual result: "Invalid username and password" displayed
   Remarks: Success

Table 6.3.2 Test case for Home page

1. Verify the 'Home' button.
   Input data: Click on the 'Home' button
   Expected result: Display content in the home page
   Actual result: Display content in the home page
   Remarks: Success

2. Verify the 'Notification' button.
   Input data: Click on 'Notification'
   Expected result: Display notifications
   Actual result: Display notifications
   Remarks: Success

3. Verify the 'Logout' functionality.
   Input data: Click on 'Logout'
   Expected result: User goes out of his/her account
   Actual result: User goes out of his/her account
   Remarks: Success

Table 6.3.3 Test case for video detection

1. Upload the input video.
   Input data: Click on the 'Monitor' button
   Expected result: Display the predicted result
   Actual result: Display the predicted result
   Remarks: Success

2. Verify the 'Sleep time' button.
   Input data: Click on the 'Sleep time' button
   Expected result: Display the predicted result
   Actual result: Display the predicted result
   Remarks: Success

6.3 SCREENSHOTS

Fig 6.3.1 Monitoring the input video and play music when cry detected

Fig 6.3.2 Display Notifications

4
Fig 6.3.3 Email Notification

Fig 6.3.4 Admin Login Page

Fig 6.3.5 Admin Interface

Fig 6.3.6 Add user Interface

Fig 6.3.7 count of users and user details

Fig 6.3.8 count for sleep time

Fig 6.3.9 Sleep summary

CHAPTER 7

CONCLUSION

We have developed a website that can take care of newborn babies. It detects hunger,
analyses the baby's facial expressions, and notifies the parents; it also generates a
sleeping summary report. The proposed system is a trusted friend and an extra pair of
eyes to help make parenting a little bit easier within those precious first few years. It is
an all-in-one intelligent companion for a baby or toddler. This system will be the
beginning of a new era of babysitting.

REFERENCES

[1] S. B. B. Priyadarshini, A. Bhusan Bagjadab, and B. K. Mishra, "The Role of IoT
and Big Data in Modern Technological Arena: A Comprehensive Study," in Internet
of Things and Big Data Analytics for Smart Generation: Springer, 2019, pp. 13-25.

[2] M. Levy, D. Bhiwapurkar, G. Viswanathan, S. Kavyashree, and P. K. Yadav,
"Smart Cradle for Baby Using FN-M16P Module," Perspectives in Communication,
Embedded-systems and Signal-processing (PiCES), vol. 2, no. 10, pp. 252-254, 2019.

[3] R. Krishnamurthy and J. Cecil, "A next-generation IoT-based collaborative
framework for electronics assembly," The International Journal of Advanced
Manufacturing Technology, vol. 96, no. 1-4, pp. 39-52, 2018.

[4] M. S. Rachana, S. M. Nadig, R. Naveen, N. K. Pooja, and M. T. G. Krishna,
"S-MOM: Smart Mom on the Move," in 2018 2nd International Conference on
Trends in Electronics and Informatics (ICOEI), 2018, pp. 1341-1344.

[5] A. B. E. Lambert, S. E. Parks, and C. K. Shapiro-Mendoza, "National and state
trends in sudden unexpected infant death: 1990-2015," Pediatrics, vol. 141, no. 3,
p. e20173519, 2018.

[6] W. A. Jabbar, M. H. Alsibai, N. S. S. Amran, and S. K. Mahayadin, "Design and
Implementation of IoT-Based Automation System for Smart Home," in 2018
International Symposium on Networks, Computers and Communications (ISNCC),
2018, pp. 1-6: IEEE.

[7] M. P. Joshi and D. C. Mehetre, "IoT Based Smart Cradle System with an Android
App for Baby Monitoring," in 2017 International Conference on Computing,
Communication, Control and Automation (ICCUBEA), 2017.

[8] A. F. Symon, N. Hassan, H. Rashid, I. U. Ahmed, and S. M. T. Reza, "Design and
development of a smart baby monitoring system based on Raspberry Pi and Pi
camera," in 2017 4th International Conference on Advances in Electrical Engineering
(ICAEE), 2017, pp. 117-122.

[9] A. Kaur and A. Jasuja, "Health monitoring based on IoT using Raspberry Pi," in
2017 International Conference on Computing, Communication and Automation
(ICCCA), 2017, pp. 1335-1340: IEEE.

[10] D. N. F. M. Ishak, M. M. A. Jamil, and R. Ambar, "Arduino Based Infant
Monitoring System," in IOP Conference Series: Materials Science and Engineering,
2017, vol. 226, no. 1, p. 012095: IOP Publishing.

[11] B. J. Taylor et al., "International comparison of sudden unexpected death in
infancy rates using a newly proposed set of cause-of-death codes," Archives of
Disease in Childhood, vol. 100, no. 11, pp. 1018-1023, 2015.

[12] S. Brangui, M. E. Kihal, and Y. Salih-Alj, "An enhanced noise cancelling system
for a comprehensive monitoring and control of baby environments," in 2015
International Conference on Electrical and Information Technologies (ICEIT), 2015,
pp. 404-409.

[13] R. Palaskar, S. Pandey, A. Telang, A. Wagh, and R. Kagalkar, "An Automatic
Monitoring and Swing the Baby Cradle for Infant Care," International Journal of
Advanced Research in Computer and Communication Engineering, vol. 4, no. 12,
pp. 187-189, 2015.

[14] C.-T. Chao, C.-W. Wang, J.-S. Chiou, and C.-J. Wang, "An arduino-based
resonant cradle design with infant cries recognition," Sensors, vol. 15, no. 8,
pp. 18934-18949, 2015.

[15] S. P. Patil and M. R. Mhetre, "Intelligent Baby Monitoring System," ITSI
Transactions on Electrical and Electronics Engineering, vol. 2, no. 1, pp. 11-16, 2014.

[16] M. Goyal and D. Kumar, "Automatic E-Baby Cradle Swing based on Baby Cry,"
2013.

[17] E. Saadatian et al., "Low cost infant monitoring and communication system," in
2011 IEEE Colloquium on Humanities, Science and Engineering, 2011, pp. 503-508:
IEEE.

[18] J.-R. C. Chien, "Design of a home care instrument based on embedded system,"
in 2008 IEEE International Conference on Industrial Technology, 2008, pp. 1-6:
IEEE.

APPENDIX

1. DATA PREPROCESSING
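
A representative sketch of the preprocessing stage, reading the image dataset, resizing, normalizing, and stacking into arrays; the folder-per-class layout, image size, and file names are assumptions.

# Read, resize, and normalize the dataset images into NumPy arrays.
import glob
import cv2
import numpy as np

IMG_SIZE = 64
CLASSES = ["hand_open", "hand_closed"]  # assumed folder-per-class layout

images, labels = [], []
for idx, cls in enumerate(CLASSES):
    for path in glob.glob(f"dataset/{cls}/*.jpg"):
        img = cv2.imread(path)
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
        images.append(img.astype("float32") / 255.0)  # scale to [0, 1]
        labels.append(idx)

X = np.array(images)
y = np.array(labels)
np.save("X.npy", X)  # saved arrays feed the training and testing stages
np.save("y.npy", y)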

2. TESTING
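
A representative sketch of the testing stage, evaluating a saved model on a held-out split; the array and model file names are assumptions carried over from the preprocessing sketch.

# Evaluate the trained CNN on a held-out test split.
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import load_model

X = np.load("X.npy")
y = np.load("y.npy")
_, X_test, _, y_test = train_test_split(X, y, test_size=0.2,
                                        random_state=42)

model = load_model("childcare_cnn.h5")
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print(f"test accuracy: {acc:.3f}")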

3. PREDICTION
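
A representative sketch of the prediction stage, preprocessing one frame of an uploaded video and querying the trained model; the video path, model file name, and label names are assumptions.

# Classify one frame from the uploaded video with the trained model.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("childcare_cnn.h5")
LABELS = ["full stomach", "hungry"]  # assumed class order

cap = cv2.VideoCapture("input_video.mp4")
ok, frame = cap.read()
cap.release()

if ok:
    img = cv2.resize(frame, (64, 64)).astype("float32") / 255.0
    probs = model.predict(img[np.newaxis, ...])[0]
    print("prediction:", LABELS[int(np.argmax(probs))])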
