In [9], the paper presents a hardware architecture for object detection using an AdaBoost learning algorithm with Haar-like features as weak classifiers. The proposed partially parallel execution model exploits the cascade structure of the classifiers and assigns more resources to the most frequently used classifiers.

The 3D-printed intelligent cane has a microcontroller with multiple sensors and a Bluetooth module that analyzes the environment and guides the visually impaired user. The mobile application acts as an interface between the cap, cane, and user, offering virtual navigation to support visually impaired individuals in their movements. This system has the potential to significantly improve the independence and mobility of visually impaired people. However, a potential drawback is that the setup may require a significant amount of hardware, which could be challenging to implement.
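The cascade evaluation that [9] accelerates can be sketched in a few lines of Python. This is a minimal illustration of the idea only, not the paper's hardware design: each stage is a weighted vote of weak classifiers (thresholded Haar-like feature responses), and a window is rejected at the first stage whose score falls below the stage threshold. The feature values, thresholds, and weights below are invented for illustration.

```python
def weak_classifier(feature_value, threshold, polarity):
    """Haar-like weak classifier: votes +1 or -1 by thresholding a feature."""
    return 1 if polarity * feature_value < polarity * threshold else -1

def cascade_detect(feature_values, stages):
    """Run one window through the cascade; return (accepted, stages_evaluated)."""
    for i, (weak_params, alphas, stage_threshold) in enumerate(stages):
        # Stage score: AdaBoost-weighted sum of weak classifier votes.
        score = sum(alpha * weak_classifier(feature_values[j], t, p)
                    for alpha, (j, t, p) in zip(alphas, weak_params))
        if score < stage_threshold:
            return False, i + 1          # early rejection: later stages never run
    return True, len(stages)

# Toy two-stage cascade; each weak classifier is (feature_index, threshold, polarity).
stages = [
    ([(0, 0.5, 1)], [1.0], 0.0),                     # stage 1: one weak classifier
    ([(1, 0.3, 1), (2, 0.7, 1)], [0.6, 0.4], 0.0),   # stage 2: two weak classifiers
]

print(cascade_detect([0.2, 0.1, 0.4], stages))  # (True, 2): passes both stages
print(cascade_detect([0.9, 0.1, 0.4], stages))  # (False, 1): rejected at stage 1
```

Because most windows are rejected in the first stage or two, the early stages dominate the workload; this skew is exactly what motivates assigning more hardware resources to the frequently used classifiers in [9].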
3. Image Feature Extraction: In this stage, we will use methods to extract image features by analyzing the pixels in the image.

4. Image Classification: The image classification methods will be applied to distinguish between contaminated and safe areas based on the extracted features.

5. Outcome: This step involves presenting the final result of the detection process.

System Flow Chart:

The proposed system aims to provide real-time object detection for visually impaired individuals through an Android app that captures frames, as shown in Fig2. The frames are then sent to a networked server running on a laptop, which performs all the necessary computations. This approach leverages the processing power of the laptop to ensure that the system can identify objects accurately in real time. The networked server uses a pre-trained Single Shot Detector (SSD) model, trained on the Common Objects in Context (COCO) dataset, to identify the output class. Once the object class has been identified, it is evaluated using an accuracy metric to ensure that the system can accurately identify a wide range of objects.
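The server-side class-identification step described above can be sketched as a small post-processing routine: a pre-trained SSD network emits (class id, confidence, bounding box) tuples, and the server keeps only the detections above a confidence threshold and maps the class ids to COCO label names before the result is announced to the user. This is a hedged sketch, not the paper's actual code; the label subset and detection values below are illustrative.

```python
# Small subset of the COCO label map, for illustration only.
COCO_LABELS = {1: "person", 3: "car", 44: "bottle", 62: "chair"}

def identify_objects(detections, conf_threshold=0.5):
    """Filter raw SSD detections and attach human-readable COCO labels."""
    results = []
    for class_id, confidence, box in detections:
        if confidence >= conf_threshold and class_id in COCO_LABELS:
            results.append((COCO_LABELS[class_id], round(confidence, 2), box))
    return results

# Example raw detections: (class_id, confidence, (x1, y1, x2, y2)).
raw = [(1, 0.91, (10, 20, 110, 220)),
       (3, 0.32, (200, 40, 380, 160)),   # below threshold: dropped
       (62, 0.74, (50, 300, 180, 420))]

print(identify_objects(raw))
# -> [('person', 0.91, (10, 20, 110, 220)), ('chair', 0.74, (50, 300, 180, 420))]
```

The returned label strings are what the system would pass to the audio-feedback stage (headphones or speakers) for announcement.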
5. Determine the bias class of each feature.

6. Generate the feature map and move on to the forward-pass input layer.

7. Evaluate the convolution kernels in a feature pattern.

8. Create the sub-sample layer and feature value.

9. Backpropagate the input deviation of the kth neuron in the output layer.

10. Finally, present the chosen feature and the classification results.

V. RESULTS AND DISCUSSION

Fig3: Performance analysis of the model

According to the results shown in Fig3, it is evident that the proposed blind assistance system performs better than the currently existing models. The accuracy of the currently existing models ranges from 60% to 80%, while our proposed system promises an accuracy greater than 80%. Moreover, our system demonstrates an increase in recall from 65% to 85%. These results suggest that the proposed blind assistance system has the potential to be a more reliable and effective solution for assisting visually impaired individuals in navigating their surroundings. With further development and refinement, this technology could greatly improve the quality of life for those who are visually impaired by increasing their independence and mobility.

Conclusion:

The primary objective of the proposed system is to assist visually impaired individuals in perceiving their surroundings, enabling them to navigate independently and avoid obstacles. The system is designed to offer a practical and efficient solution that swiftly and accurately detects objects in the user's immediate vicinity, whether indoors or outdoors. It can identify various objects and provide relevant information through headphones or speakers. The system's effectiveness in detecting objects is evaluated in three different environments: indoor, outdoor, and beyond 10 meters from the camera. Overall, the system can detect and provide audio information about objects in the user's immediate surroundings, promoting their mobility and self-reliance.

References:

[1] R. R. Varghese, P. M. Jacob, M. Shaji, A. R, E. S. John, and S. B. Philip, "An Intelligent Voice Assistance System for Visually Impaired using Deep Learning," 2022 International Conference on Decision Aid Sciences and Applications (DASA), Chiangrai, Thailand, 2022, pp. 449-453, doi: 10.1109/DASA54658.2022.9765171.
[2] S. C. Jakka, Y. V. Sai, J. A and V. A. M. A, "Blind Assistance System using TensorFlow," 2022 3rd International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 2022, pp. 1505-1511, doi: 10.1109/ICESC54411.2022.9885356.
[3] S. Durgadevi, K. Thirupurasundari, C. Komathi and S. M. Balaji, "Smart Machine Learning System for Blind Assistance," 2020 International Conference on Power, Energy, Control and Transmission Systems (ICPECTS), Chennai, India, 2020, pp. 1-4, doi: 10.1109/ICPECTS49113.2020.9337031.
[4] X. Hu, A. Song, Z. Wei, and H. Zeng, "StereoPilot: A Wearable Target Location System for Blind and Visually Impaired Using Spatial Audio Rendering," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 30, pp. 1621-1630, 2022, doi: 10.1109/TNSRE.2022.3182661.
[5] Ziad O. Abu-Faraj, Paul Ibrahim, Elie Jabbour, and Anthony Ghaoui, "Design and Development of a Prototype Rehabilitative Shoes and Spectacles for the Blind," IEEE Int. Conf. BioMedical Engineering and Informatics, 2012, pp. 795-799.
[6] Giva Andriana Mutiara, Gita Indah Hapsari and Ramanta Rijalul, "Smart Guide Extension for Blind Cane," IEEE Int. Conf. Information and Communication Technologies, 2016.
[7] G. Balakrishnan, G. Sainarayanan, R. Nagarajan, and Sazali Yaacob, "A Stereo Image Processing System for Visually Impaired," Int. Journal of Computer, Electrical, Automation, Control, and Information Engineering, vol. 2, no. 8, 2008, pp. 2794-2803.
[8] Rui Jiang and Qian Lin Li, "Let Blind People See: Real-Time Visual Recognition with Results Converted to 3D Audio," Proc. International Conference on Computer Vision, 2015.
[9] M. Hiromoto, H. Sugano, and R. Miyamoto, "Partially Parallel Architecture for AdaBoost-Based Detection With Haar-Like Features," IEEE Trans. Circuits and Systems for Video Technology, vol. 19, Jan 2009, pp. 41-52.
[10] L.-B. Chen, J.-P. Su, M.-C. Chen, W.-J. Chang, C.-H. Yang and C.-Y. Sie, "An Implementation of an Intelligent Assistance System for Visually Impaired/Blind People," 2019 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 2019, pp. 1-2, doi: 10.1109/ICCE.2019.8661943.
[11] N. Ghatwary, A. Abouzeina, A. Kantoush, B. Eltawil, M. Ramadan and M. Yasser, "Intelligent Assistance System for Visually Impaired/Blind People (ISVB)," 2022 5th International Conference on Communications, Signal Processing, and their Applications (ICCSPA), Cairo, Egypt, 2022, pp. 1-7, doi: 10.1109/ICCSPA55860.2022.10019201.
[12] Y.-S. Lin, W.-C. Chen and T. P.-C. Chen, "Tensor-Centric Processor Architecture for Applications in Advanced Driver Assistance Systems," 2021 International Symposium on VLSI Design, Automation and Test (VLSI-DAT), Hsinchu, Taiwan, 2021, pp. 1-3, doi: 10.1109/VLSI-DAT52063.2021.9427310.
[13] K. Chaccour and G. Badr, "Novel indoor navigation system for visually impaired and blind people," 2015 International Conference on Applied Research in Computer Science and Engineering (ICAR), Beirut, Lebanon, 2015, pp. 1-5, doi: 10.1109/ARCSE.2015.7338143.
[14] Conference on VLSI Design and 2016 15th International Conference on Embedded Systems (VLSID), Kolkata, India, 2016, pp. 421-426, doi: 10.1109/VLSID.2016.11.