
PROPOSAL FOR SEAMLESS AI:

EMPOWERING INDEPENDENCE: VIRTUAL ASSISTANT FOR THE DIFFERENTLY ABLED

Introduction:-
The proposed project aims to develop a machine learning model that recognizes sign language gestures and converts them into text and audio. This will enable people who cannot speak or hear to convey their feelings and thoughts effectively to those who do not know sign language, empowering them and making them more independent.
This model can significantly improve the lives of people with disabilities. By helping them communicate more effectively with non-signers, it enables greater participation in social, educational, and professional settings, leading to increased independence and empowerment.
In addition, the proposed model can be particularly helpful for adolescents. The ability to communicate effectively can help adolescents with disabilities build stronger relationships with their peers, develop a sense of belonging, and improve their self-esteem.
Overall, the ML model has the potential to revolutionize the way people with disabilities communicate and interact with the world around them. It can help them overcome communication barriers, increase their independence, and improve their quality of life.

Target beneficiaries:-
The target beneficiaries of this project are people who have difficulty communicating with others due to an inability to speak or hear. This includes people who are deaf, hard of hearing, non-verbal, or living with aphasia. The project is designed to provide a platform for these individuals to communicate with others effectively. In the future, additional functionality such as tactile feedback and object detection can be integrated into the model so that it also benefits people who are visually impaired.

Implementation:-
• Data Collection: Collect a large dataset of sign language videos with corresponding text and audio translations. We will make use of publicly available datasets and also create our own dataset by recording sign language videos. The approximate cost for this step is INR 50,000 – INR 60,000.
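
As a rough illustration, the sketch below enumerates recorded clips and their text labels; the directory layout and names are hypothetical assumptions, not part of the proposal.

```python
# Hypothetical dataset layout: data/<sign_label>/<clip>.mp4, where the folder
# name is the text label for every clip inside it.
from pathlib import Path

DATA_ROOT = Path("data")  # assumed dataset root

def list_clips(root: Path) -> list[tuple[Path, str]]:
    """Return (video_path, label) pairs for every .mp4 clip under root."""
    samples = []
    for label_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for clip in sorted(label_dir.glob("*.mp4")):
            samples.append((clip, label_dir.name))
    return samples
```
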
• Data Preprocessing: Preprocess the collected data by segmenting the videos into individual signs, normalizing the lighting conditions, and removing background clutter. The cost of this step is negligible.
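
A minimal preprocessing sketch with OpenCV is given below; it covers lighting normalization and background subtraction, while segmenting individual signs would additionally require motion analysis.

```python
# Sketch: per-frame lighting normalization and background suppression with
# OpenCV (pip install opencv-python).
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200)

def preprocess_frame(frame):
    # Normalize lighting by equalizing the L channel in LAB color space.
    l, a, b = cv2.split(cv2.cvtColor(frame, cv2.COLOR_BGR2LAB))
    frame = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    # Suppress the static background, keeping the moving signer.
    mask = bg_subtractor.apply(frame)
    return cv2.bitwise_and(frame, frame, mask=mask)
```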

• Feature Extraction: Extract features from the preprocessed data using techniques such as optical flow, hand tracking, and pose estimation. The cost of this step is negligible.
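
For hand tracking, one option is MediaPipe; the sketch below reduces each frame to a flat vector of 21 hand-landmark (x, y, z) coordinates per detected hand. This is an assumed approach, not a committed design.

```python
# Sketch: per-frame hand-landmark features with MediaPipe
# (pip install mediapipe opencv-python).
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)

def hand_features(frame):
    """Return one 63-value landmark vector per detected hand."""
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return []
    return [
        [coord for lm in hand.landmark for coord in (lm.x, lm.y, lm.z)]
        for hand in result.multi_hand_landmarks
    ]
```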

• Model Training: Train a deep learning model such as a convolutional neural network (CNN) or a recurrent neural network (RNN) on the extracted features to predict the corresponding text and audio translations. We will use cloud-based services such as Amazon Web Services and Google Cloud Platform to train the model; depending on the size of the dataset and the complexity of the model architecture, this can cost around INR 50,000 – INR 60,000.
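
A minimal training sketch with TensorFlow/Keras follows: an LSTM (a recurrent network) over sequences of per-frame feature vectors. The sequence length, feature size, and number of sign classes are illustrative assumptions.

```python
# Sketch: recurrent classifier over per-frame feature sequences
# (pip install tensorflow).
import tensorflow as tf

SEQ_LEN, FEATURES, NUM_CLASSES = 30, 63, 50  # assumed values

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_sequences, train_labels, epochs=20, validation_split=0.1)
```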

• Model Optimization: Optimize the trained model for low latency and high accuracy using techniques such as pruning, quantization, and compression. The cost of this step is negligible.
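
As one example, post-training quantization with the TensorFlow Lite converter shrinks the model for the low-power target device; this sketch assumes the Keras model from the training step above.

```python
# Sketch: post-training quantization with the TensorFlow Lite converter.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
tflite_model = converter.convert()

with open("sign_model.tflite", "wb") as f:
    f.write(tflite_model)
```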

• Hardware Integration: Integrate the optimized model with the smart glass hardware using a low-power board such as a Raspberry Pi or an Arduino. The cost of this step depends on the hardware components and the complexity of the integration; the expected cost is around INR 50,000 – INR 60,000 for hardware components and development boards.
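
On a Raspberry Pi, the quantized model could be loaded with the lightweight tflite_runtime interpreter, as sketched below; the file and variable names are assumptions carried over from the earlier sketches.

```python
# Sketch: running the quantized model on a Raspberry Pi
# (pip install tflite-runtime numpy).
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="sign_model.tflite")
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

def predict(sequence: np.ndarray) -> np.ndarray:
    """Run one (1, SEQ_LEN, FEATURES) float32 sequence through the model."""
    interpreter.set_tensor(input_detail["index"], sequence.astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(output_detail["index"])[0]
```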

• Real-time Inference: Perform real-time inference on the captured sign language video using the integrated model to generate the corresponding text and audio translations. The cost of this step is negligible.
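
The pieces above could be tied together in a capture-classify-speak loop; the sketch below uses pyttsx3 for offline text-to-speech, and the label list and confidence threshold are placeholder assumptions.

```python
# Sketch: real-time capture, classification, and speech output
# (pip install opencv-python pyttsx3 numpy).
import collections
import cv2
import numpy as np
import pyttsx3

LABELS = ["hello", "thanks", "yes"]  # placeholder; must cover all trained classes
tts = pyttsx3.init()
window = collections.deque(maxlen=30)  # rolling window of per-frame features

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    feats = hand_features(frame)          # from the feature-extraction sketch
    window.append(feats[0] if feats else [0.0] * 63)
    if len(window) == window.maxlen:
        probs = predict(np.array(window)[np.newaxis, ...])
        if probs.max() > 0.8:             # speak only confident predictions
            text = LABELS[int(probs.argmax())]
            print(text)                   # text output
            tts.say(text)                 # audio output
            tts.runAndWait()
            window.clear()
cap.release()
```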

Flowchart:-
(Flowchart figure not reproduced in this text version.)
Device cost:-

REQUIREMENTS              COST
Data Collection           Rs. 50,000 – Rs. 60,000
Model Training            Rs. 50,000 – Rs. 60,000
Hardware Integration      Rs. 50,000 – Rs. 60,000
TOTAL                     Rs. 150,000 – Rs. 180,000

The total expenditure for developing the entire model would range between INR 150,000 and INR 180,000.
This includes the cost of data collection, the cloud servers rented to train the model on a large amount of data, and the microcontroller and other hardware components required to integrate the model with the smart AI glass.

Conclusion:-
The proposed project has the potential to make a significant impact on the lives of people who are deaf or hard of hearing. By enabling them to communicate more effectively, the project will help them lead more independent lives. We request funding and technical support from the government to bring this project into real-world use.
