VIRTUAL INTELLIGENCE VOICE ASSISTANT SYSTEM
Submitted by
of
BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY
MAY 2023
ANNA UNIVERSITY: CHENNAI 600 025
BONAFIDE CERTIFICATE
SIGNATURE SIGNATURE
SIGNATURE SIGNATURE
DECLARATION
BALASUBRAMANIAM G
Date:
Place: Chennai
It is certified that this project has been prepared and submitted under my guidance.
ACKNOWLEDGEMENT
ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES IX
LIST OF FIGURES X
LIST OF ABBREVIATIONS XI
1. INTRODUCTION 1
2. LITERATURE SURVEY 7
3. SYSTEM DESIGN 11
3.1.1 CONCEPTUAL ARCHITECTURE 11
3.1.2 FEATURES 11
3.1.3 BLOCK DIAGRAM FOR PROPOSED SYSTEM 12
4. REQUIREMENT SPECIFICATION 22
5. SYSTEM IMPLEMENTATION 24
5.1 CODING 25
6. SYSTEM TESTING 39
6.3 MAINTENANCE 42
7. CONCLUSION 43
7.1 CONCLUSION 43
8. REFERENCES 46
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
CHAPTER 1
INTRODUCTION
Virtual reality, augmented reality, voice interfaces, IoT, and other emerging
technologies are altering people's interactions with the world and redefining digital
experiences. Voice control is an essential leap in human-machine interfaces made
feasible by advances in artificial intelligence. Today, we can delegate tasks to our
machines: we can converse with them through virtual assistants, or train them to
reason like people by using technologies such as Artificial Intelligence, Machine
Learning, and Neural Networks. Voice has made a big comeback. Apple's Siri,
Google Assistant, Microsoft's Cortana, and Amazon's Alexa are examples of
personal assistants, and their rise has been driven largely by the widespread use of
smartphones. Voice assistants make use of technologies such as voice recognition
and speech synthesis.
Voice assistant capabilities and upgrades are always evolving to deliver improved
performance to users. We created our desktop-based voice assistant using Python
modules and libraries so that our personal voice assistant could function easily and
smoothly on the desktop. The essential idea behind our project is that the user
makes a request to the voice assistant through the device's microphone to complete
a task, and their command is then converted into text. The text request is then
routed to processing, which provides a text response as well as the requested work.
Together with fundamental day-to-day functionality, we are attempting to
incorporate face detection for security purposes in our voice assistant in order to
make it more flexible and personal. Our application employs minimal system
resources, lowering the cost of system requirements while also reducing the threat
to your system, because it does not directly connect with servers.
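The microphone-to-text-to-action pipeline described above can be sketched as a small dispatch loop over recognised text. The handler names and keywords below are illustrative assumptions, not the project's actual command set:

```python
# Minimal sketch of the command pipeline: transcribed text is routed
# to the first handler whose keyword appears in it. The handlers and
# keywords here are illustrative assumptions.

def handle_time(_cmd):
    from datetime import datetime
    return datetime.now().strftime("the time is %H:%M")

def handle_joke(_cmd):
    return "why did the computer sneeze? it had a virus"

HANDLERS = {
    "time": handle_time,
    "joke": handle_joke,
}

def dispatch(text):
    """Route a transcribed command to the first matching handler."""
    text = text.lower()
    for keyword, handler in HANDLERS.items():
        if keyword in text:
            return handler(text)
    return "sorry, I did not understand that"

print(dispatch("tell me a joke"))
```

A real assistant would feed `dispatch` the transcript produced by a speech recognition step instead of a hard-coded string.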
In the current scenario, technologies have advanced to the point where they can
perform many tasks as effectively as, or even more effectively than, we can. By
building this project, I realized that the use of AI in every field is decreasing human
effort and saving time. Because the voice assistant uses Artificial Intelligence, the
results it provides are highly accurate and efficient. The assistant helps reduce
human effort and saves time while performing tasks; it removes the need for typing
entirely and behaves like another individual to whom we talk and give instructions.
The assistant is no less capable than a human assistant, and is arguably more
effective and efficient at performing tasks. The libraries and packages used to build
this assistant focus on time complexity and reduce execution time.
We are familiar with many existing voice assistants, such as Alexa, Siri, Google
Assistant, and Cortana, which use language processing and voice recognition. They
listen to the command given by the user and perform the requested function in a
very efficient and effective manner. Because these voice assistants use Artificial
Intelligence, the results they provide are highly accurate and efficient. These
assistants help reduce human effort and save time while performing tasks; they
remove the need for typing entirely and behave like another individual to whom we
talk and give instructions. They are no less capable than a human assistant, and are
arguably more effective and efficient at performing tasks. The algorithms used to
build these assistants focus on time complexity and reduce execution time.
However, to use these assistants one must have an account (such as a Google
account for Google Assistant or a Microsoft account for Cortana) and an internet
connection, because these assistants work only with internet connectivity. They are
integrated into many devices, such as phones, laptops, and speakers.
The objectives of developing a voice assistant can vary depending on the specific
application and purpose of the project. However, some common objectives include:
1. Improving user experience: A primary objective of developing a voice assistant
is to improve the user experience by providing a natural and intuitive way for users
to interact with technology.
2. Personalization: Voice assistants can use machine learning techniques to learn
from a user's behavior, preferences, and previous interactions to provide
personalized and efficient services.
3. Automation: Voice assistants can automate tasks, such as scheduling
appointments, playing music, and controlling smart home devices, which can save
users time and effort.
4. Accessibility: Voice assistants can be helpful for people with mobility or vision
impairments who may have difficulty using traditional interfaces.
5. Innovation: Voice assistants are becoming increasingly popular and are being
integrated into more devices and applications. Developing a voice assistant project
can be a way to explore new use cases and create innovative solutions.
6. Learning opportunity: Developing a voice assistant project can be a great way to
learn about natural language processing, machine learning, and artificial
intelligence. It can also be an opportunity to work with different libraries and APIs.
Overall, the common goal is to provide a useful and innovative tool that enhances
the user's experience and improves efficiency.
Currently, the project aims to provide Windows users with a virtual assistant that
not only aids in daily routine tasks such as searching the web, extracting weather
data, and vocabulary help, but also helps automate various activities. In the long
run, we aim to develop a complete server assistant by automating the entire server
management process - deployment, backups, auto-scaling, logging, and monitoring -
and making it smart enough to act as a replacement for a general server
administrator. As a personal assistant, the aide assists the end user with day-to-day
activities such as general human conversation, searching queries in various search
engines such as Google, Bing, or Yahoo, searching for videos, retrieving images,
live weather conditions, word meanings, searching for medicine details, health
recommendations based on symptoms, and reminding the user about scheduled
events and tasks. The user's statements and commands are analysed with the help of
machine learning to give an optimal solution.
1.4 SCOPE OF THE PROJECT
The scope of a voice assistant project can vary depending on the specific application
and purpose of the project. However, some common areas of scope for a voice
assistant project may include:
1. User interface design: This includes designing a user-friendly interface that allows
users to interact with the voice assistant using natural language commands.
3. Natural language processing: The voice assistant must be able to understand the
meaning of the user's spoken words, which requires the use of natural language
processing techniques.
4. Dialog management: The voice assistant must be able to manage the conversation
flow between the user and the system to provide a seamless and efficient experience.
5. Personalization: A voice assistant can use machine learning techniques to learn from
a user's behavior, preferences, and previous interactions to provide personalized and
efficient services.
6. Integration with external systems: A voice assistant project may require integration
with external systems, such as smart home devices, calendars, and email systems.
7. Text-to-speech: The voice assistant must be able to convert its responses into
audible speech using a text-to-speech engine.
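As a sketch of the text-to-speech item above: the snippet below speaks a response with pyttsx3 when a speech engine is available, and otherwise just records the text. The fallback log is an assumption for illustration; it is not part of the project's code.

```python
# Hedged sketch of a text-to-speech step. If pyttsx3 (and a speech
# engine) is available, the text is spoken aloud; otherwise the text
# is only collected in `spoken_log` so callers can still observe it.
spoken_log = []

try:
    import pyttsx3
    _engine = pyttsx3.init()
except Exception:          # no TTS engine in this environment
    _engine = None

def speak(text):
    spoken_log.append(text)        # always record what was said
    if _engine is not None:
        _engine.say(text)
        _engine.runAndWait()

speak("hello, I am your assistant")
```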
Overall, the scope of a voice assistant project can be quite broad, requiring
expertise in several areas, including speech recognition, natural language
processing, and machine learning. The scope may also be influenced by the specific
application and purpose of the project, such as home automation or personal
assistance. Presently, the assistant is being developed to play various roles as an
automation tool and virtual assistant.
There shall be proper documentation available to make further development easy,
and we aim to release our virtual assistant as open-source software, where
modifications and contributions by the community are warmly welcomed.
CHAPTER 2
LITERATURE SURVEY
user commands and returning information that is pertinent to the command [1].
The review conducted by Dr. Kshama V. Kulhalli analyzed the most well-known
voice assistants, including Google Assistant, Apple's Siri, and Microsoft's Cortana.
The study concluded that Google Assistant's responses are the most reliable, and
that it was very good at understanding voice fluctuations [5].
2.2 FEASIBILITY STUDY
A feasibility study is an important initial step in any project to determine whether
the project is viable and feasible to undertake. Here are some key factors to consider
for a feasibility study for a voice assistant project:
1. Technical feasibility: The first aspect to consider is whether the technology exists
or can be developed to implement the voice assistant project. The project may require
expertise in speech recognition, natural language processing, and machine learning
techniques.
2. Market feasibility: It is important to assess the market demand for a voice assistant
and whether the project will meet the needs and expectations of the target market.
Researching the competition and potential user base can help determine the potential
success of the project.
Overall, a feasibility study is an essential step in determining the viability and potential
success of a voice assistant project. Conducting a thorough feasibility study can help
identify potential challenges, risks, and opportunities associated with the project and
help make informed decisions about whether to proceed with the project.
This study showed how the voice recognition system worked in an integrated voice
based delivery system for the purpose of delivering instruction. An added importance
of the study was that the voice system was an independent speech recognition system.
At the time this study was conducted, there did not exist a reasonably priced speech
recognition system that interfaced with both graphics and authoring software which
allowed any student to speak to the system without training the system to recognize
the individual student's voice. This feature increased the usefulness and flexibility of
the system.
CHAPTER 3
SYSTEM DESIGN
The Speech Recognition library has many built-in components that enable the
assistant to understand the user's command; the response is communicated back to
the user in voice using text-to-speech capabilities. This is the approach suggested
here for implementing a personal voice assistant. The underlying algorithms
transcribe the user's spoken instruction into text as soon as the assistant hears it.
3.1.1 Conceptual Architecture
2. Surfing the web based on the user's spoken parameters, delivering the desired
result through audio, and printing the result on the screen at the same time.
3.1.3 BLOCK DIAGRAM FOR PROPOSED SYSTEM
A voice assistant system can be developed using various programming languages
and tools depending on the specific platform and features required. Here's a general
overview of the components and architecture of a voice assistant system:
1. Wake word detection: The system listens for a wake word, such as "Hey Siri" or
"Alexa", to activate the voice assistant.
2. Speech recognition: The system converts the user's spoken command into text using
automatic speech recognition (ASR) technology.
3. Natural language processing: The system processes the text to identify the user's
intent and extract relevant information from the command.
4. Action execution: The system performs the requested action, which could involve
sending a message, playing music, setting a reminder, or controlling a smart home
device.
5. Response generation: The system generates a response, either by speaking the
response aloud or displaying it on a screen.
6. Integration with third-party services: The system can integrate with third-party
services to perform more complex actions, such as ordering food or making a
purchase.
The development of a voice assistant system typically involves the use of machine
learning techniques for the speech recognition and natural language processing
components. Additionally, the system needs to be optimized for the specific platform
it will run on, such as a smart speaker or a smartphone, and the user experience should
be carefully designed to ensure that the system is easy to use and understand.
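The wake-word step (component 1 above) can be sketched as a simple text check on the transcript; the wake phrase used here is an illustrative assumption:

```python
# Sketch of the wake-word step: only transcripts that begin with the
# wake phrase are passed on as commands. The phrase "hey assistant"
# is an illustrative assumption, not the project's actual wake word.

WAKE = "hey assistant"

def extract_command(transcript):
    """Return the command after the wake word, or None if absent."""
    text = transcript.lower().strip()
    if text.startswith(WAKE):
        return text[len(WAKE):].strip() or None
    return None

print(extract_command("Hey assistant play music"))   # -> play music
print(extract_command("play music"))                 # -> None
```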
3.2 SYSTEM ARCHITECTURE
A Level 0 Data Flow Diagram (DFD) for a voice assistant system shows the overall
flow of data between the system and the user. Here is an example of a Level 0 DFD
for a voice assistant system.
In this DFD, the user inputs a voice command or request, which is processed by the
voice assistant system. The system then generates an appropriate output, which is
presented to the user. This Level 0 DFD does not show the internal components of the
voice assistant system, such as speech recognition, natural language processing, or
external system integration. It provides a high-level view of the overall flow of data
between the user and the voice assistant system.
User Input: This is the starting point of the process, where the user speaks a command
or request to the voice assistant system. The user input may come from a variety of
sources, such as a smartphone, smart speaker, or other voice-enabled device.
Voice Assistant: The voice assistant component processes the user input using various
technologies, such as speech recognition, natural language processing, and machine
learning. The voice assistant system interprets the user's intent and generates an
appropriate response or action.
Output: The output component generates a response or action based on the user's input
and the system's interpretation of the request. The output may take various forms, such
as a spoken response, a text message, or a command to an external system.
3.2.2 LEVEL-1 DFD
In this Level 1 DFD, the system is broken down into more detailed components and
processes that work together to provide the voice assistant functionality. Here is a brief
explanation of each component:
User Interface: This is the interface between the user and the voice assistant system,
where the user interacts with the system by speaking voice commands.
Speech Input: The speech input component captures the user's voice commands and
converts them into digital audio signals that can be processed by the system.
Speech Recognition: The speech recognition component analyzes the audio signals to
recognize the user's words and convert them into text format.
Text-to-Speech: The text-to-speech component generates an audible response to the
user's request, which is played back through an audio output device.
Audio Output: This is the final output stage, where the system plays back the generated
response to the user through an audio output device, such as a speaker or headphones.
Overall, the Level 1 DFD provides a more detailed view of the voice assistant system
and its components than the Level 0 DFD. It shows how the different components of
the system work together to process user requests and generate appropriate responses.
In this Level 2 DFD, the system is further broken down into more detailed processes
and components. Here's a brief explanation of each component:
User Interface: This is the same as in the Level 1 DFD, where the user interacts with
the voice assistant system by speaking voice commands.
Speech Input: This is the same as in the Level 1 DFD, where the system captures the
user's voice commands and converts them into digital audio signals.
Speech Recognition: This is the same as in the Level 1 DFD, where the system
analyzes the audio signals to recognize the user's words and convert them into text
format.
Natural Language Processing: This is the same as in the Level 1 DFD, where the
system analyzes the transcribed text to understand the user's intent and extract relevant
information.
Weather Service, News Service, Music Service, Reminders Service: These are
examples of external services that the voice assistant system may integrate with to
provide the user with the requested information or action.
Response Generation: This component generates an appropriate response to the user's
request based on the user's intent and the information retrieved from external services,
if any.
Text-to-Speech Generation: This component generates an audible response to the
user's request, which is played back through an audio output device.
Audio Output: This is the same as in the Level 1 DFD, where the system plays back
the generated response to the user through an audio output device.
External Services: This component represents any external services that the voice
assistant system may integrate with, such as APIs for weather, news, or music services.
Overall, the Level 2 DFD provides an even more detailed view of the voice assistant
system and its components than the Level 1 DFD. It shows how the different
components of the system work together to process user requests and generate
appropriate responses, and how the system may integrate with external services to
provide additional functionality.
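The routing from a recognised intent to an external service shown in the Level 2 DFD can be sketched as a dispatcher over service functions. The services below are local stubs standing in for real weather and news APIs; their names and return values are assumptions for illustration:

```python
# Sketch of the intent -> external-service routing from the Level 2
# DFD. The service functions are local stubs standing in for real
# API calls; a deployment would replace their bodies with requests
# to actual weather/news endpoints.

def weather_service(city):
    return f"stub forecast for {city}: sunny"   # real code would call an API

def news_service(_topic):
    return "stub headline: assistant ships"

SERVICES = {"weather": weather_service, "news": news_service}

def fulfil(intent, argument):
    """Route a recognised intent to its backing service."""
    service = SERVICES.get(intent)
    if service is None:
        return "no service available for that request"
    return service(argument)

print(fulfil("weather", "Chennai"))
```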
3.3 MODULE DESIGN
OS
The Python os module offers functions for interacting with the operating system in
various ways. os is included among Python's basic utility modules and provides a
portable way of using operating-system-dependent functionality.
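A small illustration of the kind of os calls the assistant relies on (listing a directory and building a portable path); the file name is an arbitrary example:

```python
import os
import tempfile

# Create a scratch directory, list its contents, and build a portable
# path -- the same os calls the assistant uses to find and start
# music files. The file name "song.mp3" is an arbitrary example.
workdir = tempfile.mkdtemp()
open(os.path.join(workdir, "song.mp3"), "w").close()

files = os.listdir(workdir)
first = os.path.join(workdir, files[0])
print(files)                       # -> ['song.mp3']
print(os.path.basename(first))     # -> song.mp3
```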
PyAudio
PyAudio is a set of Python bindings for PortAudio, a cross-platform audio I/O
library that communicates with sound drivers.
Text-to-Speech feature
The term "text-to-speech" (TTS) describes a feature that allows machines to read
text aloud. A TTS engine converts written text into a phonemic representation,
which is then converted into waveforms that can be output as sound. TTS systems
with different languages, accents, and specialist vocabularies are available through
third-party publishers.
Speech-to-text conversion
Speech recognition software is used to translate spoken input into written output. It
decodes the speech and converts it into text in an understandable way.
Context Extraction
Context extraction is the process of automatically extracting structured information
from unstructured and/or semi-structured machine-readable texts. This activity
typically involves using natural language processing to analyse texts written in
human languages. Recent advances in multimodal document processing, such as
automated annotation and content extraction from images, audio, and video, can be
seen as outgrowths of context extraction research.
Written Output
The assistant interprets the voice command, executes the action, and then displays
the voice request as written output in the terminal.
3.4 IMPLEMENTATION METHODOLOGY
Using sapi5 and pyttsx3, we enable our application to use system speech. pyttsx3 is
a text-to-speech conversion library in Python. It is compatible with Python 2 and 3
and works offline, unlike competing tools. The Speech Application Programming
Interface, also known as SAPI, is an API created by Microsoft to enable the use of
speech synthesis and recognition within Windows applications. All of the program's
capabilities are then defined in the main function. The assistant requests input from
the user and continuously listens for instructions. The listening duration can be
adjusted to the needs of the individual. If the assistant is unable to understand an
instruction properly, it will repeatedly request the same instruction from the user.
The user can choose either a male or a female voice for the assistant, depending on
their preference. The assistant's latest release includes capabilities for checking
weather forecasts, sending and receiving emails, searching Wikipedia, opening
applications, checking the time, taking notes, and displaying notes. Google,
YouTube, and other applications can be opened and closed.
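The male/female voice selection described above can be sketched as follows. pyttsx3 exposes the installed voices through engine.getProperty('voices'); the assumption that slot 0 is a male voice and slot 1 a female voice matches typical SAPI5 installs but is not guaranteed:

```python
# Sketch of voice selection. The mapping 0 = male, 1 = female is an
# assumption about typical SAPI5 installs, not a pyttsx3 guarantee.

def voice_index(preference):
    """Map a spoken preference to a voice slot (0 = male, 1 = female)."""
    return 1 if "female" in preference.lower() else 0

try:
    import pyttsx3
    engine = pyttsx3.init("sapi5")          # Windows speech API
    voices = engine.getProperty("voices")
    engine.setProperty("voice", voices[voice_index("female voice please")].id)
except Exception:
    pass   # no SAPI5 engine outside Windows; the helper still works

print(voice_index("female voice please"))   # -> 1
```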
Voice assistant implementation typically involves a combination of software
development methodologies, such as agile or iterative methodologies, and machine
learning techniques. Here is a general methodology that can be used for implementing
a voice assistant system
2. Data collection and preprocessing: Collect and preprocess data for machine
learning, such as speech data and text data, to train and test the natural language
processing and speech recognition models.
3. Model selection and training: Select appropriate machine learning models, such as
deep neural networks, and train them using the preprocessed data.
4. Integration with external APIs: Integrate the voice assistant system with external
APIs and services to enhance its functionality and provide access to additional data
sources.
5. Speech recognition and natural language processing: Develop algorithms and
models for speech recognition and natural language processing, which convert
spoken or written input into structured data that can be used to generate responses.
The above methodology can be adapted and modified based on the specific
requirements and constraints of the voice assistant system being developed. It is
important to involve users in the development process and gather feedback throughout
the development lifecycle to ensure the system meets their needs and expectations.
CHAPTER 4
REQUIREMENT SPECIFICATION
Internet service
4.2 SOFTWARE REQUIREMENT
Software requirements refer to the specific functional and non-functional needs
of a software system or application. In a project report, the section on software
requirements should provide a clear and comprehensive description of the
requirements that must be met by the software being developed.
Windows 10 or higher
Pycharm
Python
Pyautogui
CHAPTER 5
SYSTEM IMPLEMENTATION
The following will describe the implementation of the proposed voice cloning
system, including the data collection, model training, and evaluation processes.
Data Collection: The first step in building a voice cloning model is to collect data
from the target speaker. The data collection process involves recording a large amount
of high-quality audio samples of the person's voice. The samples must be diverse and
cover various speaking styles, emotions, and accents. The data must also be recorded
in a quiet environment with minimal background noise. The collected data is then
cleaned and preprocessed to prepare it for the next step.
Model Training: The next step is to use the collected data to train a deep learning
model. In this project, we used a neural network-based approach, which is capable of
learning the complex patterns and nuances of the target speaker's voice. The model
consists of an encoder and a decoder, which work together to learn the speaker's voice
characteristics and generate synthetic speech. During the training process, the model
is optimized to minimize the difference between the generated speech and the original
recordings.
Deployment: Once the model has been trained and evaluated, it can be deployed to
produce synthetic speech. The model can be integrated into various applications, such
as virtual assistants, text-to-speech engines, and personalized voice assistants. The
deployment process involves optimizing the model for real-time performance and
ensuring its compatibility with the target platform.
5.1 CODING
import pyttsx3
import datetime
import speech_recognition as sr
import wikipedia
import webbrowser
import os
import psutil
import pywhatkit
import pyautogui
import smtplib
import pyjokes
import wolframalpha
import requests
from bs4 import BeautifulSoup
engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[0].id)
def speak(audio):
    engine.say(audio)
    engine.runAndWait()
def wishme():
    hour = int(datetime.datetime.now().hour)
    if hour >= 0 and hour < 12:
        speak("good morning sir")
    elif hour >= 12 and hour < 18:
        speak("good afternoon sir")
    else:
        speak("good evening sir")
server.close()
if __name__ == "__main__":
    wishme()
    while True:
        query = takecommand().lower()
        if 'wikipedia' in query:
            speak("searching in wikipedia")
            query = query.replace("wikipedia", "")
            results = wikipedia.summary(query, sentences=2)
            speak("according to wikipedia")
            speak(results)
            print(results)
        elif 'open amazon' in query:
            speak("opening sir")
            webbrowser.open("amazon.com")
        elif 'open gmail' in query:
            speak("opening sir")
            webbrowser.open("gmail.com")
        elif 'play music' in query:
            speak("playing music sir")
            musicdir = "A:\\MUSIC"
            songs = os.listdir(musicdir)
            print(songs)
            os.startfile(os.path.join(musicdir, songs[0]))
            songs = os.listdir(musicdir)
            print(songs)
            os.startfile(os.path.join(musicdir, songs[19]))
        elif 'open fps' in query:
            codepath2 = r"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Fraps\Fraps.lnk"
            os.startfile(codepath2)
            pywhatkit.playonyt(result)
            speak("playing video sir")
            speak("increasing volume sir")
            pyautogui.press("volumeup")
            speak("I am done sir, the screenshot is saved in our main folder")
            query = query.replace("open", "")
            query = query.replace("app", "")
            pyautogui.press("win")
            pyautogui.typewrite(query)
            pyautogui.sleep(1)
            pyautogui.press("enter")
            try:
                speak("what should I say?")
                content = takecommand()
                to = "balajisarask@gmail.com"
                sendemail(to, content)
                speak("sir, Email has been sent successfully")
            except Exception as e:
                speak("sorry email cannot be sent")
                print(e)
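The sendemail helper called above is not shown in this excerpt. A common smtplib-based implementation looks roughly like the sketch below; the host, port, and sender address are placeholder assumptions, not the project's actual configuration:

```python
import smtplib

# Hedged sketch of a sendemail() helper like the one called above.
# Host, port, sender, and credentials are placeholders; real code
# would read them from configuration rather than hard-coding them.

def build_message(sender, to, content):
    """Compose a minimal RFC 822-style message body."""
    return f"From: {sender}\r\nTo: {to}\r\n\r\n{content}"

def sendemail(to, content, sender="assistant@example.com",
              host="smtp.example.com", port=587, password=None):
    message = build_message(sender, to, content)
    server = smtplib.SMTP(host, port)   # connect to the mail server
    server.starttls()                   # upgrade to an encrypted channel
    if password:
        server.login(sender, password)
    server.sendmail(sender, to, message)
    server.close()

print(build_message("a@example.com", "b@example.com", "hi"))
```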
5.2.2 SEARCHING YOUTUBE
5.2.4 SEARCHING AMAZON
5.2.6 PERFORMS ARITHMETIC CALCULATION
CHAPTER 6
SYSTEM TESTING
6.2 SOFTWARE TESTING
Software testing is an essential part of the software development life cycle (SDLC).
It involves evaluating the functionality and performance of a software application to
identify any defects or bugs that may hinder its smooth operation. The testing process
helps in ensuring that the software meets the requirements and specifications outlined
in the project plan.
6.2.2 Integration Testing
Integration testing is a type of software testing that verifies that different
components of a system work together correctly. This type of testing is typically
conducted after unit testing and before system testing. The goal of integration testing
is to identify any issues or defects that arise from the interaction between different
components of the system.
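As a small illustration of integration testing in this project's language, the snippet below uses Python's unittest to check that a toy command parser and response formatter work together; both functions are illustrative stand-ins, not the project's code:

```python
import unittest

# Two toy components standing in for units of the assistant.
def parse_command(text):
    """Toy parser standing in for the assistant's query handling."""
    return "wikipedia" if "wikipedia" in text.lower() else "unknown"

def respond(intent):
    """Toy responder that formats a reply for a parsed intent."""
    return "according to " + intent if intent != "unknown" else "sorry"

class IntegrationTest(unittest.TestCase):
    """Check that the parser and responder work together end to end."""

    def test_known_command(self):
        reply = respond(parse_command("search Wikipedia for Python"))
        self.assertEqual(reply, "according to wikipedia")

    def test_unknown_command(self):
        self.assertEqual(respond(parse_command("gibberish")), "sorry")

suite = unittest.TestLoader().loadTestsFromTestCase(IntegrationTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("integration tests passed:", result.wasSuccessful())
```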
6.3 MAINTENANCE
Maintaining a voice assistant system is critical for ensuring its ongoing performance
and reliability. Here are some key aspects of voice assistant maintenance:
2. Updates and upgrades: Keeping the system up to date with the latest software
updates and upgrades is critical for maintaining its performance and security. This
includes updating the voice recognition software, operating system, and any
third-party components.
3. Data management: Managing the data used by the voice assistant system is
important for maintaining its accuracy and efficiency. This includes regularly
cleaning up old or outdated data, and ensuring that the system is properly trained
on new data.
4. Testing: Regular testing of the voice assistant system is important to ensure that it
is functioning properly and to identify any issues or errors that may arise. This
includes testing the system's response time, accuracy, and reliability.
5. User feedback: Gathering feedback from users is important for identifying areas
where the system can be improved. This can include feedback on the system's
accuracy, speed, and overall user experience.
CHAPTER 7
CONCLUSION
7.1 CONCLUSION
The personal desktop-based voice assistant, built with the open-source editor Visual
Studio Code as the implementation tool, has been addressed in terms of its
purposes, methodology, and implementation specifics. All generations, as well as
those with certain infirmities or in special circumstances, will benefit from this
project. The desktop voice assistant is simple to use and lessens the need for
physical labour to do a variety of activities.
Moreover, this assistant can perform many tasks with just a voice command,
including sending messages to the user's mobile phone, automating YouTube, and
getting information from Google and Wikipedia. The current voice assistant
system's capabilities are restricted to working online and on desktops. The voice
assistant system is modular in design, making it possible to add new capabilities
without affecting existing system functionality.
Usability and performance testing are critical steps in the development process to
ensure that the system is functioning as intended and providing a satisfactory user
experience. Ongoing maintenance is also essential for ensuring the system's continued
performance and reliability. As voice assistant technology continues to evolve, there
are endless possibilities for its use in various industries and applications. It will be
exciting to see how this technology continues to develop and enhance our lives in the
years to come.
• Make the voice assistant learn more on its own and develop new skills.
1. Natural language processing: While current voice assistant systems have made
significant strides in natural language processing, there is still room for
improvement. Future enhancements could include more advanced natural language
processing capabilities, such as better understanding of context and tone, and the
ability to process more complex commands.
4. Integration with other technologies: Voice assistant systems could potentially be
integrated with other emerging technologies, such as augmented reality or virtual
reality, to create more immersive and interactive experiences.
5. Improved security and privacy: With the increasing use of voice assistants for
sensitive tasks such as online banking and personal communication, there will be
a need for even stronger security and privacy protections to prevent unauthorized
access to user data.
Overall, the future of voice assistant technology is exciting and full of possibilities.
As the technology continues to advance, we can expect to see even more innovative
and intuitive voice assistant systems that enhance our daily lives in meaningful ways.
CHAPTER 8
REFERENCES
1. Harshit Agrawal, Nivedita Singh, Gaurav Kumar, Dr. Diwakar Yagyasen, Mr.
Surya Vikram Singh, "Voice Assistant Using Python", an international open-access,
peer-reviewed, refereed journal. Unique Paper ID: 152099, Volume 8, Issue 2,
pp. 419-423.
4. Tulshan, Amrita & Dhage, Sudhir. (2019). "Survey on Virtual Assistant: Google
Assistant, Siri, Cortana, Alexa", 4th International Symposium SIRS 2018,
Bangalore, India, September 19-22, 2018, Revised Selected Papers.
doi:10.1007/978-981-13-5758-9_17.
7. Sangpal, R., Gawand, T., Vaykar, S., and Madhavi, N. (2019). "Jarvis: An
interpretation of AIML with integration of gTTS and Python." 2019 2nd
International Conference on Intelligent Computing, Instrumentation and Control
Technologies (ICICICT), Vol. 1, pp. 486-489.
8. Steen, J. and Wilroth, M. (2021). "Adaptive voice control system using AI."
11. Vora, J., Yadav, D., Jain, R., and Gupta, J. (2021). "Jarvis: A PC voice
assistant."
12. asirian et al. (2017); Malodia et al. (2021); Vora et al. (2021); Tibola and
Tarouco (2013); Sangpal et al. (2019); Raja (2020); Beirl et al. (2019); Terzopoulos
and Satratzemi (2019); Alotto et al. (2020); Steen and Wilroth (2021); Canbek and
Mutlu (2016).
15. Sutar Shekhar, P. Sameer, Kamad Neha, Prof. Devkate Laxman, “An Intelligent
Voice Assistant Using Android Platform”, IJARCSMS, ISSN: 232-7782, 2017.
16. Rishabh Shah, Siddhant Lahoti, Prof. Lavanya K., "An Intelligent Chatbot
using Natural Language Processing", International Journal of Engineering
Research, Vol. 6, pp. 281-286, 2017.