
VISVESVARAYA TECHNOLOGICAL UNIVERSITY

Belagavi-590018, Karnataka

A PROJECT REPORT
ON
Currency Detector for Visually Impaired People
Submitted in partial fulfilment of the requirements for the award of the degree of

BACHELOR OF ENGINEERING IN
COMPUTER SCIENCE AND ENGINEERING

Submitted by:

MD ADNAAN NASIR 1SB20CS067


PAWAN KUMAR TIWARI 1SB20CS076
PRATIK PATIL 1SB20CS080
SIDHARTH SAMBYAL 1SB20CS100

Under the Guidance of


Dr. Vinola C
Assistant Professor, Department of Computer Science & Engineering

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


SRI SAIRAM COLLEGE OF ENGINEERING ANEKAL, BENGALURU - 562106
ACADEMIC YEAR: 2023-24

SRI SAIRAM COLLEGE OF ENGINEERING
ANEKAL, BENGALURU - 562106
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE

This is to certify that the project work, entitled “Currency Detector for Visually
Impaired People” is a bona fide work carried out by

MD ADNAAN NASIR 1SB20CS067


PAWAN KUMAR TIWARI 1SB20CS076
PRATIK PATIL 1SB20CS080
SIDHARTH SAMBYAL 1SB20CS100

in partial fulfilment of the requirements for the award of the degree of Bachelor of Engineering in Computer Science & Engineering of the Visvesvaraya Technological University, Belagavi
during the academic year 2023-24. It is certified that all the corrections/suggestions
indicated for Internal Assessment have been incorporated in the report. The project
report has been approved as it satisfies the academic requirements.

Signature of the Guide Signature of the HOD Signature of the Principal


Dr. Vinola C Dr. Smitha J A Dr. B Shadaksharappa

Name of examiners: Signature with date


1.

2.

ACKNOWLEDGEMENT

The satisfaction and euphoria that accompany the successful completion of any task would be incomplete without mentioning the people who made it possible. Success is the epitome of hard work and perseverance, but steadfast above all is encouraging guidance.

So, with gratitude, we acknowledge all those whose guidance and encouragement served as a beacon of light and crowned our efforts with success.

We are thankful to our Principal, Dr. B. Shadaksharappa, for his encouragement and support throughout the project work.

We are also thankful to our beloved HOD, Dr. Smitha J. A, for her incessant encouragement and all the help during the project work.

We take this opportunity to thank our Project Coordinator, Dr. Vinola C, Associate
Professor, Dept. of CSE and Prof. Ram Kumar P, Assistant Professor, Dept. of CSE
for their inspirational guidance, valuable suggestions, and for giving us the opportunity to complete the project.

We consider it a privilege and honour to express our sincere gratitude to our guide
Dr. Vinola C, Associate Professor, Dept. of CSE for her valuable guidance
throughout the tenure of this project work, and whose support and encouragement
made this work possible.

It is also a great pleasure to express our deepest gratitude to all the other faculty
members of our department for their cooperation and constructive criticism offered,
which helped us a lot during our project work.

Finally, we would like to thank all our family members and friends whose encouragement
and support was invaluable.

SRI SAIRAM COLLEGE OF ENGINEERING
Anekal, Bengaluru – 562106

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

DECLARATION

We, MD ADNAAN NASIR, PAWAN KUMAR TIWARI, PRATIK PATIL,


SIDHARTH SAMBYAL hereby declare that the entire work titled “Currency
Detector for Visually Impaired People” embodied in this project report has been
carried out by us during the 8th semester of the BE degree at SSCE, Bangalore, under the esteemed guidance of Dr. Vinola C, Associate Professor, Dept. of CSE, Sri Sairam College of Engineering, Bengaluru, affiliated to Visvesvaraya Technological
University, Belagavi. The work embodied in this dissertation work is original and it
has not been submitted in part or full for any other degree in any University.

MD ADNAAN NASIR 1SB20CS067

PAWAN KUMAR TIWARI 1SB20CS076

PRATIK PATIL 1SB20CS080

SIDHARTH SAMBYAL 1SB20CS100

ABSTRACT

The "Currency Detector for Visually Impaired People" project pioneers a revolutionary solution to the challenges faced by visually impaired individuals in
identifying currency notes. Through the integration of advanced image processing,
machine learning, and Optical Character Recognition (OCR) technologies, this
portable device transforms the landscape of financial accessibility for the visually
impaired. Employing a portable camera, the system captures high-resolution images
of currency notes, which are then subjected to comprehensive analysis. Leveraging
OCR, the device accurately extracts textual information such as denomination and
serial numbers, enhancing the precision of currency detection. Advanced image
processing algorithms complement this by discerning crucial features like color,
size, and intricate patterns unique to each currency denomination. At its core lies a
trained machine learning model, continuously refining its understanding through
iterative learning processes. This model facilitates real-time auditory feedback,
announcing the denomination of the currency to the user, granting them immediate
and independent access to financial information. The project prioritizes accessibility
through user-centric design elements, including tactile feedback mechanisms and
intuitive voice prompts. These features enhance usability and cater to diverse user
preferences, ensuring inclusivity across the visually impaired community.
Furthermore, the system maintains affordability by utilizing cost-effective
components and streamlining the user interface. By democratizing access to
financial independence and autonomy, the project fosters a more equitable future for
the visually impaired community. In summary, the "Currency Detector for Visually
Impaired People" project represents a transformative leap towards financial
inclusivity and empowerment. Through the convergence of technology, empathy,
and human-centered design, this innovative solution heralds a more accessible and
equitable era for all.

TABLE OF CONTENTS

CHAPTER NO. TITLE OF THE CHAPTER PAGE NO.

1. SYSTEM ANALYSIS............................................................... 8
1.1 Introduction .................................................................. 8
1.2 Literature Survey .......................................................... 9
1.3 Problem Statement ....................................................... 17
1.4 Objective of the Project ................................................ 17
1.5 Existing System & Its Disadvantages .......................... 18
1.6 Proposed System & Its Advantages.............................. 18
2. SYSTEM SPECIFICATIONS ............................................ 20
2.1 Software Requirements ................................................ 20
2.2 Hardware Requirements ............................................... 20
3. SYSTEM DESIGN ............................................................... 22
3.1 System Architecture ..................................................... 22
3.2 Use Case Diagram ........................................................ 24
3.3 Sequence Diagram ........................................................ 26
4. RESULTS AND DISCUSSION ............................................ 28
5. CONCLUSION ............................................................ 30
6. APPENDIX ............................................................................. 31
7. REFERENCE.......................................................................... 35

CHAPTER 1

1. SYSTEM ANALYSIS

1.1 INTRODUCTION

The initiative aimed at addressing the financial accessibility challenges for individuals with
visual impairments is pioneering. Its primary goal is to empower visually impaired
individuals by providing them with a reliable and efficient tool to independently identify and
differentiate currency notes.

This innovative system leverages cutting-edge technology, including image processing and
machine learning algorithms, to enable real-time currency recognition. A portable device
equipped with a high-resolution camera serves as the core component of the system. Users
can simply point the camera towards a currency note, and the device captures an image for
analysis. The captured image undergoes a sophisticated image processing pipeline to extract
crucial features such as denomination, color, and patterns unique to each currency.

To ensure accuracy and reliability, the initiative involves the training of a machine learning
model using a diverse dataset of currency images. This model is then implemented to
classify and identify different denominations of currency notes. The classification results are
conveyed to the user through audible announcements, providing instant feedback and
allowing visually impaired individuals to confidently manage their finances.

User experience is a key focus of the initiative, with intuitive interfaces integrated into the
device to enhance accessibility. Tactile feedback mechanisms and voice prompts are
integrated to make the system user-friendly and ensure a seamless interaction for individuals
with visual impairments. The goal is not only to provide a practical solution for currency
identification but also to enhance the overall financial independence and inclusion of
visually impaired individuals.

This initiative serves as a testament to the transformative potential of technology in addressing accessibility challenges and improving the lives of marginalized communities. By
pioneering innovative solutions, we can foster a more inclusive and equitable future where
everyone has equal access to financial independence and autonomy.

1.2 Literature Survey

Ref [1]

The project is about a currency detection app for visually impaired people, which uses
image processing and machine learning techniques to recognize and classify Indian banknotes.
The app allows the user to speak a command to open the camera, capture an image of the
banknote, and receive an audio feedback of the denomination. The document describes the
proposed method, which consists of two phases: offline and online.
The offline phase involves constructing a dataset of banknote images, while the online phase
involves preprocessing, segmentation, feature extraction, classification, and matching the
input image with the dataset.
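The matching step of the online phase can be sketched as a nearest-neighbour lookup against the offline dataset. This is a minimal illustration, not the paper's implementation: the `DATASET` dictionary and its feature values below are hypothetical placeholders standing in for descriptors extracted from real banknote images.

```python
import numpy as np

# Hypothetical feature database built in the offline phase:
# one feature vector per known banknote, with its denomination.
DATASET = {
    "10":  np.array([0.81, 0.12, 0.05]),   # illustrative colour/texture features
    "100": np.array([0.20, 0.65, 0.10]),
    "500": np.array([0.15, 0.22, 0.70]),
}

def match_denomination(features: np.ndarray) -> str:
    """Return the denomination whose stored features are nearest
    (by Euclidean distance) to the input image's feature vector."""
    best_label, best_dist = None, float("inf")
    for label, ref in DATASET.items():
        dist = float(np.linalg.norm(features - ref))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

print(match_denomination(np.array([0.17, 0.20, 0.68])))  # closest to "500"
```

In a real system the feature vectors would come from the segmentation and feature-extraction stages, and the database would hold many samples per denomination to cover wear, lighting, and orientation.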
The document also discusses the advantages and disadvantages of the app, and provides some
references and websites for further information. The document aims to help the visually
impaired people in monetary transactions and provide them a clear view of the banknote.
The document is a research paper that describes the development of an Android app that can
assist visually impaired people in finding objects and money, as well as providing voice
services. The app uses computer vision algorithms to detect and recognize objects and money,
and a voice assistant to communicate information to the user.
The app aims to enhance the independence and confidence of the visually impaired, who can
use it to perform daily tasks that require visual acuity. The paper covers topics such as product
detection, money detection, voice assistance, visual impairment, Android Studio, and
applications. The paper also discusses the usage of the app and the results of the tests and
evaluations with the visually impaired.
The paper concludes that the project demonstrates the potential of combining computer vision
and audio technologies to create systems that can help visually impaired people in their daily
lives and improve their quality of life.
The paper presents a currency detector Android application for visually impaired people. The
application uses image processing techniques and TensorFlow to recognize the currency notes
using a mobile camera. The application aims to help visually impaired people to perform
transactions through money without depending on others.
The paper first introduces the problem of currency identification for visually impaired people,
who are partially sighted or completely blind. The paper states that visually impaired people
face many difficulties in their daily life, including differentiating between the notes. The paper

also mentions that banks and other institutions have expensive hardware machines that can
detect fake notes, but these machines are not handy or cost-efficient for personal use.
The paper then reviews the existing literature on currency recognition systems, which use
various methods such as image processing, machine learning, deep learning, and speech
synthesis. The paper compares the advantages and disadvantages of different approaches, and
identifies the research gap in developing a portable, accurate, and user-friendly system for
Indian currency notes.
The paper then proposes a currency detector Android application, which uses the TensorFlow
framework to train a convolutional neural network (CNN) model on a dataset of Indian
currency notes. The paper describes the architecture and parameters of the CNN model, and
the steps involved in preprocessing, training, testing, and deploying the model. The paper also
explains how the application uses the mobile camera to capture the image of the note, and how
it uses text-to-speech (TTS) to announce the result to the user.
The paper then evaluates the performance of the proposed system, and reports the accuracy,
precision, recall, and F1-score of the CNN model on the test dataset. The paper also conducts
a user satisfaction survey with 20 visually impaired people, and analyzes their feedback on the
usability, reliability, and efficiency of the application. The paper claims that the proposed
system achieves high accuracy and user satisfaction, and can work in different lighting
conditions and orientations.
The paper concludes by summarizing the main contributions and findings of the paper, and
suggesting some future enhancements such as adding more features, supporting more
languages, and detecting fake notes. The paper also acknowledges the limitations and
challenges of the proposed system, such as the dependency on internet connection, the need
for regular updates, and the possibility of errors due to noise or occlusion.

Advantages:
Comprehensive Overview: The paragraph provides a comprehensive overview of a currency
detection app for visually impaired people, covering its development, methodology, and
evaluation.

Clear Structure: The paragraph is well-structured, dividing the content into sections such as
the app's purpose, the proposed method, advantages and disadvantages, and conclusions,
facilitating readability.

Disadvantages:

Limited Detail on Disadvantages: While the paragraph mentions the disadvantages of the
proposed system, such as dependency on internet connection and potential errors, it lacks in-
depth exploration or discussion of these drawbacks.

Technical Jargon: The inclusion of technical terms and concepts, such as TensorFlow and
CNN, may make the paragraph challenging for readers who are not familiar with these
technologies.

Ref [2]

In this paper, a mobile system for currency recognition that recognizes Indian currency in
different views and scales is introduced. A dataset for Indian currency is developed on an
Android platform. Following this, an automatic mobile recognition system is applied using a
smartphone on the dataset, employing the scale-invariant feature transform (SIFT) algorithm.
SIFT has been utilized as the most robust and efficient local invariant feature descriptor.
Color, which provides significant information and crucial values in the object description
process and matching tasks, is incorporated into the system. Many objects cannot be
classified correctly without their color features. One of the most important problems faced by
visually impaired people is currency identification, especially for currency notes. In this
system, a simple currency recognition system applied on Indian banknotes is introduced.
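SIFT itself is available in OpenCV via `cv2.SIFT_create()`. As a dependency-light sketch of the colour cue the paper stresses, a normalized colour histogram can serve as a simple descriptor; this is an illustrative stand-in, not the paper's actual pipeline.

```python
import numpy as np

def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalized per-channel colour histogram of an RGB image
    (H x W x 3, uint8). Colour is the cue that separates denominations
    whose layouts look similar in grayscale."""
    hists = []
    for ch in range(3):
        h, _ = np.histogram(image[..., ch], bins=bins, range=(0, 256))
        hists.append(h)
    h = np.concatenate(hists).astype(float)
    return h / h.sum()          # normalize so descriptors are comparable

# A solid green-ish patch, like the dominant tint of a note:
patch = np.zeros((32, 32, 3), dtype=np.uint8)
patch[...] = (20, 200, 40)
desc = color_histogram(patch)
print(desc.shape)  # (24,)
```

Such a histogram would typically be concatenated with local descriptors (e.g. SIFT keypoints) before matching.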

Advantages:

Mobile System Accessibility: The use of a mobile system enhances accessibility, allowing
users to perform currency recognition conveniently on a widely available device—
smartphones.

Dataset Development: The creation of a specific dataset for Indian currency on the Android
platform suggests a tailored approach, optimizing the recognition system for the local currency
and mobile environment.

Disadvantages:

Limited Information on System Performance: The paper lacks details on the performance
metrics and evaluation results of the introduced currency recognition system. Information on
accuracy, precision, and recall would provide a clearer understanding of the system's
effectiveness.

Generalization to Other Currencies: The focus on Indian currency may limit the system's
applicability to other currencies. The paper does not discuss the adaptability of the system to
recognize currencies from different countries.

Ref [3]
In the current world of cheating and other frauds, people find leading their lives very difficult. The situation is even worse for the blind or visually challenged, who face many more challenges in their daily lives, especially while dealing with currency and other money-related issues. To help blind people, a project is being developed that helps them identify the denomination of a currency note, as there is no Braille mark on the note. Identifying a counterfeit or fake note is another difficult task, for both sighted and blind people. So, in addition, a fake note detection system is being incorporated, which protects every citizen from being cheated.

Advantages:

Enhanced Accessibility: The project facilitates greater independence and financial inclusion
for visually impaired individuals by providing them with a reliable tool to identify currency
denominations. This fosters their autonomy in managing finances and reduces dependence on
others for assistance.

Fraud Prevention: The incorporation of a fake note detection system benefits not only
visually impaired individuals. This contributes to maintaining the integrity of the financial
system and safeguards individuals from financial losses.

Disadvantages:

Technical Challenges: Developing a reliable system for both currency denomination recognition and counterfeit detection may pose technical challenges, requiring advanced and accurate technologies to ensure effectiveness.

Affordability and Accessibility: The success of the project may be hindered if the
technology is not affordable or easily accessible. Ensuring widespread availability is crucial
to maximize the impact of the solution.

Ref [4]
In this paper, a mobile system for currency recognition is introduced, capable of recognizing
Indian currency in different views and scales. A dataset for Indian currency is developed on
an Android Platform. Subsequently, an automatic mobile recognition system is applied
using a smartphone on the dataset, employing the scale-invariant feature transform (SIFT)
algorithm. SIFT has been developed to be the most robust and efficient local invariant
feature descriptor. Significant information and important values in the object description
process and matching tasks are provided by color. Many objects cannot be classified
correctly without their color features.

Advantages:

Mobile System Accessibility: The introduction of a mobile system for currency recognition enhances accessibility, allowing users to utilize widely available smartphones for currency identification.

Tailored Dataset Development: The creation of a specific dataset for Indian currency on the
Android platform indicates a tailored approach, optimizing the recognition system for the
local currency and mobile environment.

Disadvantages:
Limited Information on System Performance: The paper lacks details on the performance
metrics and evaluation results of the introduced currency recognition system. Information on
accuracy, precision, and recall would provide a clearer understanding of the system's
effectiveness.

Dependency on Smartphone Technology: While leveraging smartphones for currency
recognition is practical, it also introduces a potential limitation in terms of dependency on the
availability and capabilities of the smartphone, which may vary among users.

Ref [5]

Partial or complete loss of vision creates difficulties in movement and other daily activities. People with such impairments need the objects around them to be recognized to support navigation and avoid obstacles. A system that recognizes objects and provides a voice response for accessibility is proposed as the solution. Object localization and classification are utilized in the simplest way. A deep learning-based approach is employed, utilizing a CNN to perform end-to-end unsupervised object detection. The proposed system aims to serve as an object detector.
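As a minimal, dependency-free illustration of the operation that CNNs such as MobileNet stack (with learned kernels) to detect visual features, a valid-mode 2-D "convolution" (cross-correlation, as deep learning frameworks implement it) can be sketched; this is not the paper's actual model.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2-D cross-correlation: slide the kernel over the image
    and take the elementwise-product sum at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (image[y:y + kh, x:x + kw] * kernel).sum()
    return out

# A hand-crafted kernel that responds to horizontal contrast; in a real
# CNN the kernel values are learned from the currency dataset.
edge_kernel = np.array([[1.0, -1.0]])
img = np.array([[0.0, 0.0, 1.0, 1.0]])
out = conv2d(img, edge_kernel)   # non-zero only where the intensity steps
```

Stacking many such layers, with nonlinearities in between, is what lets a network like MobileNet go from raw pixels to denomination labels end to end.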

Advantages:

Enhanced Accessibility for Visually Impaired Individuals: The proposed system addresses the challenges faced by individuals with visual impairments by providing an object recognition and currency detection system with voice feedback, enhancing their ability to navigate and interact with their surroundings.

Deep Learning-Based Approach: The utilization of a deep learning-based approach, particularly Convolutional Neural Networks (CNN) and MobileNet, ensures advanced and accurate object detection, contributing to the effectiveness of the system.

Disadvantages:

Technical Complexity: Building and implementing a deep learning model using TensorFlow
and Keras may pose technical complexities, and users may require assistance in setting up
and maintaining the system.

Dependency on Pretrained Network: The reliance on a pretrained network (MobileNet) could limit the adaptability of the system to specific scenarios, and it might not perform optimally in all environments or for detecting diverse objects.

Ref [6]

Despite the rapidly expanding use of credit cards and other electronic forms of payment, cash is still widely used for everyday transactions because of its convenience. However, visually impaired people may struggle to tell currency notes apart. Currency Recognition Systems (CRS) can help blind and visually impaired people who face difficulties in monetary transactions. In this paper, a Currency Recognition System based on Oriented FAST and Rotated BRIEF (ORB) features and the YOLOv3 algorithm is proposed. The proposed system is applied to Indian paper currency, including six kinds of currency notes. In the proposed work, a system is developed to detect Indian notes. First, the input image is taken, pre-processed, and converted into a grayscale image. After pre-processing, the Sobel algorithm is applied to extract the inner as well as the outer edges of the image. Clustering is done using the YOLOv3 algorithm, which clusters the features one by one. After that, the input image is recognized as a 200, 500, or 2000 note: its features are compared and classified as 200, 500, 2000, or none with the help of the YOLOv3 algorithm.
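The grayscale-plus-Sobel edge-extraction step can be sketched in pure NumPy; in practice `cv2.Sobel` would be used, so this is an illustrative reimplementation rather than the paper's code.

```python
import numpy as np

# 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(gray: np.ndarray) -> np.ndarray:
    """Gradient-magnitude map of a grayscale image: strong values mark
    the inner and outer edges of the note."""
    h, w = gray.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx = (patch * SOBEL_X).sum()
            gy = (patch * SOBEL_Y).sum()
            out[y, x] = (gx ** 2 + gy ** 2) ** 0.5
    return out

# A vertical step edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = sobel_edges(img)
print(edges[4, 3])  # strong response at the boundary: 1020.0
```

The resulting edge map would then feed the clustering/classification stage described above.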

Advantages:
Addressing Currency Identification Challenges: The proposed Currency Recognition
System (CRS) specifically targets the challenges faced by visually impaired individuals in
identifying different currency papers, providing a solution for more inclusive monetary
transactions

Comprehensive Image Processing: The system employs a series of image processing techniques, including grayscale conversion, Sobel algorithm application, and YOLO V3 clustering, enhancing the robustness and accuracy of currency detection.

Disadvantages:

Algorithm Complexity: The utilization of advanced algorithms, while beneficial for accuracy, may introduce complexity in terms of system implementation, maintenance, and potential challenges for users unfamiliar with technical details.

Dependency on Image Quality: The effectiveness of the proposed system may be contingent on the quality of input images. Poor image quality or variations in lighting conditions could impact the accuracy of currency detection.

Ref [7]

Money related transactions are an important part of our day-to-day lives. Along with
technology, the banking sector is also getting modernized and explored. In spite of the
widespread usage of ATMs, Credit/Debit Cards, and other digital modes of payment such as
Google Pay, Paytm, and PhonePe, money is still widely used for most daily transactions due
to its convenience. Currency recognition or bank-note recognition is a process of identifying
the denominational value of a currency. It is a simple and straightforward task for normal
human beings, but if visually challenged people are considered, currency recognition becomes
a challenging task. Visually handicapped people have a difficult time distinguishing between
different cash denominations. Even though unique symbols are embossed on different
currencies in India, the task is still too difficult and time-consuming for the blind. This brings
a deep need for automatic currency recognition systems. So, our paper is aimed at studying
the systems in order to help the visually challenged or impaired people; so that they can
differentiate between various types of Indian currencies through the implementation of image
processing techniques. The study aims to investigate different techniques for recognizing
Indian rupee banknotes. The proposed work extracts different and distinctive properties of Indian currency notes, a few of them being the central number, the RBI logo, the colour band, and the special symbols or marks for the visually impaired, and applies algorithms designed to detect each specific feature. Through this work, visually impaired people will be capable of recognizing different types of Indian currency during their monetary transactions, so that they can lead their lives independently, both socially and financially.

Advantages:

Focus on Independence and Inclusivity: The proposed work seeks to empower visually
impaired people by enabling them to independently recognize different Indian currency
denominations during their monetary transactions, contributing to both social and financial
independence.

Utilization of Advanced Technologies: The mention of image processing techniques and algorithms demonstrates a commitment to utilizing advanced technologies for currency recognition, potentially leading to a more accurate and efficient system.

Disadvantages:

Technical Implementation Challenges: The paper mentions the extraction of various features
and the application of algorithms, which might pose challenges in terms of technical
implementation, especially for users who are not familiar with image processing techniques.

Dependence on Feature Extraction: The accuracy of the proposed system may be contingent
on the successful extraction of distinctive features, and variations in currency conditions or
quality could impact the recognition performance.

1.3 Problem Statement

The "Currency Detector for Visually Impaired People" project addresses the critical challenge
faced by individuals with visual impairments in independently identifying and distinguishing
currency notes. Visually impaired individuals often encounter difficulties in discerning
denominations, colors, and patterns on currency, impeding their ability to manage finances
autonomously. This project aims to bridge this accessibility gap by developing a portable
device equipped with image processing and machine learning capabilities. The device's
purpose is to capture and analyze currency images, providing real-time auditory feedback on
denominations, empowering visually impaired individuals to confidently and independently
engage in financial transactions and activities.

1.4 Objective of the Project

The primary objective of the "Currency Detector for Visually Impaired People" project is to
develop a portable and cost-effective device that enables visually impaired individuals to
independently recognize and distinguish currency notes. Leveraging advanced image
processing and machine learning, the system aims to provide real-time auditory feedback on
denominations, enhancing financial inclusivity. The project seeks to create a user-friendly
and affordable solution to empower individuals with visual impairments, promoting
autonomy in managing their financial transactions and fostering a more inclusive society.

1.5 Existing System & Its Disadvantages

The existing systems for currency recognition typically rely on smartphone applications or
specialized devices that may not fully cater to the needs of visually impaired individuals.
While some smartphone apps offer currency recognition features, they often require an
internet connection for processing, which can be inconvenient in certain situations.
Additionally, these apps may lack robustness and accuracy, leading to unreliable
identification of currency denominations. Specialized devices designed for currency
recognition may be expensive and complex to operate, further limiting accessibility for
visually impaired users. Moreover, these devices may not offer the same level of real-time
feedback and user interaction as the proposed system, potentially hindering the user
experience and independence of visually impaired individuals in managing their finances.

1.6 Proposed System & Its Advantages

The proposed system, "Currency Detector for Visually Impaired People," offers several
advantages over existing solutions. Firstly, its standalone device design means it doesn't
require internet connectivity or reliance on smartphones, providing greater independence for
visually impaired users. Secondly, the incorporation of advanced image processing and
machine learning algorithms ensures real-time and accurate identification of currency notes,
improving efficiency and reliability. Additionally, the user-friendly interface with tactile
feedback and voice prompts enhances accessibility and usability for visually impaired
individuals. Furthermore, the system's focus on affordability aims to make it accessible to a
wider range of users, addressing financial barriers to access. Overall, this innovative solution
empowers visually impaired individuals to confidently manage their finances, promoting
independence and inclusion in financial transactions.

CHAPTER 2
2. SYSTEM REQUIREMENT SPECIFICATION

2.1 Software Requirements

1. Cardano blockchain: Provides a comprehensive and innovative solution to address the challenges faced by visually impaired individuals in identifying currency denominations.
2. Image Processing Software: Software tools such as OpenCV (Open Source Computer
Vision Library) or TensorFlow for implementing image processing algorithms.
3. Machine Learning Framework: Frameworks like TensorFlow or PyTorch for developing
and training machine learning models for currency recognition.
4. Programming Language: Proficiency in programming languages such as Python, C++,
or Java for software development.
5. Audio Feedback System: Software for generating audible feedback on currency
denominations, such as text-to-speech libraries or audio processing tools.
6. User Interface Development: Tools for developing a user-friendly interface with
tactile feedback and voice prompts, such as GUI frameworks or accessibility libraries.
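For item 5, a minimal sketch of audible feedback, assuming the offline `pyttsx3` text-to-speech library (an assumption; the report does not name a specific TTS tool) and falling back to printed output where no speech engine is available:

```python
def announce(denomination: str) -> str:
    """Speak the detected denomination if a TTS engine is available,
    otherwise fall back to printing. Returns the spoken phrase."""
    phrase = f"This is a {denomination} rupee note"
    try:
        import pyttsx3                      # offline text-to-speech engine
        engine = pyttsx3.init()
        engine.say(phrase)
        engine.runAndWait()
    except Exception:                       # no TTS backend on this system
        print(phrase)
    return phrase

announce("500")
```

The classifier's output label would be passed straight to `announce`, closing the loop from image capture to auditory feedback.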

2.2 Hardware Requirements

1. Camera: A high-resolution camera or image sensor for capturing images of currency notes.
2. Processor: Sufficiently powerful processor for running image processing and machine learning algorithms in real-time.
3. Memory: Adequate RAM and storage space for storing images, trained models, and application data.
4. Speaker: An audio output device for providing audible feedback to the user.
5. Microphone (optional): A microphone for capturing voice commands or additional user input, depending on the system design.
6. Battery or Power Source: A reliable power source, such as a rechargeable battery, to ensure portability and continuous operation of the device.

CHAPTER 3
3. SYSTEM DESIGN

3.1 System Architecture


Figure 1 shows the block diagram of a system that uses a camera to help blind people
identify Indian rupee notes. At a high level, the system works as follows:

1. A camera captures an image of a rupee note.

2. The image is pre-processed to remove noise and improve the quality of the image.

3. The image is then segmented and features are extracted from it. These features may include
the color of the note, the size of the note, and the presence of certain geometric shapes.

4. The extracted features are used to match the note to a database of known rupee notes.
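The noise-removal part of step 2 above can be sketched in plain Java. The example below applies a simple 3x3 mean filter to a grayscale image modeled as a 2D array; it is an illustrative stand-in for the smoothing a real implementation would perform with a library such as OpenCV, and the pixel values are invented for demonstration.

```java
// MeanFilter.java -- illustrative 3x3 mean filter for the noise-reduction step.
// Border pixels are left unchanged to keep the sketch short.
public class MeanFilter {

    public static int[][] apply(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (y == 0 || x == 0 || y == h - 1 || x == w - 1) {
                    out[y][x] = img[y][x]; // keep border pixels as-is
                    continue;
                }
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++)      // average the 3x3 neighbourhood
                    for (int dx = -1; dx <= 1; dx++)
                        sum += img[y + dy][x + dx];
                out[y][x] = sum / 9;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] noisy = {
            {10, 10, 10, 10},
            {10, 90, 10, 10},  // one bright noise pixel
            {10, 10, 10, 10},
            {10, 10, 10, 10}
        };
        System.out.println(apply(noisy)[1][1]); // the spike is pulled toward the local mean
    }
}
```

A median filter would suppress impulse noise of this kind even more strongly; the mean filter is shown only because it is the simplest smoothing operation.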

Each stage is described in more detail below:

1. Image Acquisition:
• Camera options: different types of cameras could be used, such as a simple webcam, a
smartphone camera, or even a dedicated IR camera for better low-light performance.
• Lighting control: Consistent lighting is crucial for accurate image capture. Using
built-in LEDs or external lighting may be necessary.
2. Image Preprocessing:
• Noise reduction: Techniques like filtering and smoothing can remove background
noise and improve image clarity.
• Cropping and alignment: The note might be partially captured or tilted. Algorithms
can crop and align the image for further processing.
• Colour correction: Lighting variations can affect colour accuracy. Colour correction
algorithms can adjust for consistent representation.
3. Image Segmentation and Feature Extraction:
• Segmentation: This separates the note from the background and identifies specific
regions of interest, like the watermark, security thread, or prominent landmarks.
• Feature extraction: Relevant visual features are extracted from each region. These
might include colour values, texture patterns, geometric shapes, or specific text
elements.
4. Feature Matching and Recognition:
• Database of features: A pre-trained database containing representative features for
each rupee denomination is crucial. This database is created during the system's
training phase.

• Machine learning algorithms: Algorithms like K-Nearest Neighbors, Support Vector
Machines, or Convolutional Neural Networks (CNNs) can be used to compare
extracted features with the database and identify the most likely denomination match.
5. Output and User Interaction:
• Voice output: The system delivers a clear and concise voice message to the user,
announcing the identified denomination.
• Additional functionalities: Advanced systems might offer features like counting
multiple notes, providing additional security checks (like detecting counterfeit notes),
or integrating with voice assistants for further user interaction.
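The feature-matching stage described above can be sketched as a minimal nearest-neighbour classifier in plain Java. The reference feature vectors and denominations below are placeholders invented for the example, not values from an actual trained database; a real system would learn such features during its training phase.

```java
// NoteMatcher.java -- nearest-neighbour matching of an extracted feature vector
// against a reference table of denomination features (placeholder values).
public class NoteMatcher {

    static final double[][] REF_FEATURES = {
        {0.9, 0.1, 0.2},   // hypothetical features for a 10-rupee note
        {0.2, 0.8, 0.3},   // hypothetical features for a 100-rupee note
        {0.1, 0.2, 0.9}    // hypothetical features for a 500-rupee note
    };
    static final int[] DENOMINATIONS = {10, 100, 500};

    /** Returns the denomination whose reference vector is closest (squared Euclidean). */
    public static int match(double[] features) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < REF_FEATURES.length; i++) {
            double d = 0;
            for (int j = 0; j < features.length; j++) {
                double diff = features[j] - REF_FEATURES[i][j];
                d += diff * diff; // accumulate squared distance to reference i
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return DENOMINATIONS[best];
    }

    public static void main(String[] args) {
        // a noisy observation near the 100-rupee reference vector
        System.out.println(match(new double[]{0.25, 0.75, 0.35})); // prints 100
    }
}
```

A K-Nearest Neighbors classifier with K > 1, an SVM, or a CNN (as mentioned above) would be more robust in practice; the single-neighbour version keeps the idea visible in a few lines.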
Challenges and Considerations:
• Accuracy and reliability: Maintaining high accuracy and minimizing
misidentification is crucial for user trust and confidence.
• Accessibility and usability: The system should be user-friendly for people with
various levels of vision impairment and technical skills.
• Cost and portability: Designing an affordable and portable solution ensures wider
accessibility and adoption.
Future advancements:
• Haptic feedback: Integrating tactile feedback for texture recognition might offer
additional information and sensory cues.
• Multimodal approach: Combining camera inputs with other sensors like near-field
communication (NFC) tags on notes could provide redundant and robust
identification.

FIGURE:1 (System Architecture)

3.2 Use case diagram


Figure 2 shows the use case flow of the system as a flowchart of the video-processing
pipeline. The main steps are as follows:

1. Camera Capture:
The process starts with capturing an object using a camera. This could be any video source,
not necessarily just a physical camera.

2. Object Detection:
Next, the system attempts to detect the presence of an object within the captured scene. This
likely involves image recognition algorithms to differentiate the object of interest from the
background.

3. Object Pattern Detection:


If an object is detected, the system then tries to identify specific patterns within the object.
This could involve analyzing the object's shape, texture, or other visual features to understand
its characteristics.
4. Feature Extraction:

From the detected object and its patterns, the system extracts specific features. These features
could be numerical values or descriptive attributes that capture the essence of the object's
visual properties.

5. Object Detection (Cloud Storage Database):

The extracted features are compared against a database stored in the cloud.
Machine-learning algorithms perform this comparison to determine whether the object
matches any previously known object in the database.

6. Matching: Yes/No Branch:


The comparison leads to a yes/no decision. If the features match a known object in the
database, the process proceeds to the next step. If not, the object remains unidentified.

7. Producing Audio Output of Recognized Text:


If the object is recognized, the system generates an audio output describing the object. This
could be a simple voice announcement or a more detailed description depending on the
system's capabilities.

FIGURE:2 (Use case diagram)
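The feature-extraction step in the flow above can be illustrated with a toy example that reduces an RGB patch to a three-value colour feature vector (each channel's share of the total intensity). This representation is a deliberate simplification chosen for illustration; the actual system would combine colour with texture, shape, and text features.

```java
// ColorHistogram.java -- toy feature extraction: each pixel is {r, g, b} and the
// feature vector is each channel's share of the patch's total intensity.
public class ColorHistogram {

    public static double[] channelShares(int[][] pixels) {
        double r = 0, g = 0, b = 0;
        for (int[] p : pixels) {
            r += p[0];
            g += p[1];
            b += p[2];
        }
        double total = r + g + b;
        return new double[]{r / total, g / total, b / total};
    }

    public static void main(String[] args) {
        // a predominantly green dummy patch: the green share dominates
        int[][] patch = {{0, 200, 0}, {10, 180, 10}};
        double[] f = channelShares(patch);
        System.out.printf("r=%.3f g=%.3f b=%.3f%n", f[0], f[1], f[2]);
    }
}
```

Vectors like this are what the matching stage would compare against the cloud database to reach the yes/no decision in step 6.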

3.3 Sequence Diagram

FIGURE:3 (Sequence Diagram)

CHAPTER 4

RESULTS AND SAMPLE OUTPUT

Fig:

CHAPTER 5
CONCLUSION

In conclusion, the "Currency Detector for Visually Impaired People" project marks a
significant technological advancement aimed at improving the lives of individuals with
visual impairments. Equipped with advanced image processing and machine learning
capabilities, this project addresses the challenges encountered by visually impaired
individuals in managing their finances independently. The innovative solution overcomes the
limitations of existing systems by providing real-time, accurate, and affordable currency
recognition.

A key strength of this project lies in its user-friendly design, which prioritizes accessibility
for visually impaired users. The device features intuitive interfaces with tactile feedback and
voice prompts, ensuring ease of use and empowering users to navigate financial transactions
confidently. By eliminating the need for manual assistance or reliance on internet-connected
smartphones, the device promotes greater independence and autonomy for visually impaired
individuals in managing their finances.

Furthermore, the project emphasizes affordability, making the device accessible to users
regardless of their financial means. This commitment to inclusivity aligns with the broader
goal of fostering financial independence and empowerment among visually impaired
individuals, contributing to a more equitable society.

Additionally, the "Currency Detector for Visually Impaired People" project holds promise
for making a meaningful impact beyond individual users. By raising awareness of the
challenges faced by visually impaired individuals in accessing financial services, the project
advocates for greater inclusivity and accessibility in the design of future technologies.

In summary, through its innovative technology, user-centric design, and commitment to
inclusivity, the "Currency Detector for Visually Impaired People" project represents a
significant step towards enhancing the lives of visually impaired individuals and promoting
greater financial independence and inclusion for all.

APPENDIX

SAMPLE CODE
Android App:

package com.example.muhammadali.note;

import android.annotation.SuppressLint;
import android.content.Context;
import android.content.Intent;
import android.os.Build;
import android.os.Bundle;
import android.os.Handler;
import android.speech.tts.TextToSpeech;
import android.view.GestureDetector;
import android.view.Menu;
import android.view.MotionEvent;
import android.widget.Toast;

import java.util.Locale;

import androidx.appcompat.app.AppCompatActivity;

import com.example.muhammadali.note.camera.CameraActivity;

public class MainActivity extends AppCompatActivity {

    private TextToSpeech myTTS;
    private Handler handler;              // handler for posting results back to the UI thread
    private final int CAM_CODE = 875;     // activity result request code
    private Context context;              // global context instance
    private GestureDetector detector;     // gesture detector for handling taps

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        context = this;
        handler = new Handler();
        new Thread(this::initializeTextToSpeech).start(); // initialise TTS off the UI thread
        addGestures();                                    // initialise gesture handling
    }

    @SuppressLint("ClickableViewAccessibility")
    private void addGestures() {
        detector = new GestureDetector(context, new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onDoubleTap(MotionEvent e) {
                return true; // must return true to keep receiving events
            }
        });
        detector.setOnDoubleTapListener(new GestureDetector.OnDoubleTapListener() {
            @Override
            public boolean onSingleTapConfirmed(MotionEvent motionEvent) {
                // a single tap launches the camera for note detection
                startActivityForResult(new Intent(context, CameraActivity.class), CAM_CODE);
                return true;
            }

            @Override
            public boolean onDoubleTap(MotionEvent motionEvent) {
                return true;
            }

            @Override
            public boolean onDoubleTapEvent(MotionEvent motionEvent) {
                // on a double tap, repeat the prompt if the TTS engine is free
                // (the original listing is truncated at this point; repeating the
                // prompt is the assumed completion)
                if (!myTTS.isSpeaking())
                    speak("Tap on screen for note detection");
                return true;
            }
        });
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        return detector.onTouchEvent(event); // route touch events to the gesture detector
    }

    private void initializeTextToSpeech() {
        myTTS = new TextToSpeech(this, i -> {
            if (myTTS.getEngines().size() == 0) {
                handler.post(() -> Toast.makeText(MainActivity.this,
                        "There is no TTS engine on your device",
                        Toast.LENGTH_LONG).show());
            } else {
                myTTS.setLanguage(Locale.getDefault());
                myTTS.setSpeechRate(1.0f); // speech rate must be positive; 1.0 is normal
                speak("I am ready, tap on screen for note detection");
            }
        });
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (myTTS != null && !myTTS.isSpeaking())
            speak("I am ready, tap on screen for note detection");
    }

    private void speak(String message) {
        if (Build.VERSION.SDK_INT >= 24) {
            myTTS.speak(message, TextToSpeech.QUEUE_FLUSH, null, null);
        } else if (!myTTS.isSpeaking()) {
            myTTS.speak(message, TextToSpeech.QUEUE_FLUSH, null);
        }
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.menu_main, menu);
        return true;
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        // the request code (not the result code) identifies the camera activity
        if (requestCode == CAM_CODE && data != null) {
            String yourMSG = data.getStringExtra("MSG"); // recognized-note message
            System.out.println(yourMSG);
            speak("Tap for note detection");
        }
    }
}

BIBLIOGRAPHY

1. Rahamath Nisha M. and A. S. R. Sulthana, Department of Information Technology,
"Currency Detection App for Visually Impaired".
2. Chandrashekhar Mankar, Anand Agrawal, Vinita Tiwari, Nikita Shrinath, Himanshu
Jamwal, and Priya Diwnale, "Currency Detector Android Application for Visually
Impaired People".
3. Sri Sai Krishnaa R. C., Mukesh Kanna G., Sarath Kumar D., and Shimona S.,
Computer Science and Engineering, Agni College of Technology, Chennai - 600 130,
Tamil Nadu, India, "Detection and Voice Assistance for Visually Impaired".
4. Snehal Saraf, Vrushali Sindhikar, Ankita Sonawane, and Shamali Thakare,
Information Technology, MCOERC, Maharashtra, India, "Currency Recognition System
for Visually Impaired".
5. Yogitha R., Kalaiarasi G., Aishwarya R., Maheswari M., and Selvi M., Department of
Computer Science and Engineering, Sathyabama Institute of Science and Technology,
"Detecting Currency Notes for Visually Challenged People Using Machine Learning".
6. ChenHan Yuan, "Personalized End-to-End Mandarin Speech Synthesis using Small-sized
Corpus".
7. Ram Kumar Karsh, "Image Authentication based on Q-FMT via Blind Geometric
Correction".
8. Jinjiang Li, "Saliency Consistency-Based Image Re-Colorization for Color
Blindness".
9. Ibrahim Batuhan Akkaya, "Edge Sensitive Unsupervised Image-to-Image Translation".
