Desktop Assistance Using AI
On
BACHELOR OF TECHNOLOGY
* Python
* pyttsx3
Python | Text to Speech by using pyttsx3
How cool is it to build your own personal assistant like Alexa or Siri? It's not very complicated
and can be achieved easily in Python. Personal digital assistants are capturing a lot of attention
lately. Chatbots are common on most commercial websites. With growing advancements in
artificial intelligence, training machines to tackle day-to-day tasks is the norm.
Voice-based personal assistants have gained a lot of popularity in this era of smart homes and
smart devices. These personal assistants can be easily configured to perform many of your
regular tasks simply by giving voice commands. Google has popularized voice-based search,
which is a boon for many, such as senior citizens who are not comfortable using the keypad/keyboard.
This article will walk you through the steps to quickly develop a voice-based desktop assistant,
Minchu (meaning Flash), that you can deploy on any device. The prerequisite for developing this
application is knowledge of Python.
To build any voice-based assistant you need two main functions: one for listening to your
commands and another to respond to them. Along with these two core functions, you
need the customized instructions that you will feed your assistant.
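These two core functions can be sketched in a few lines. This is only an illustrative skeleton: the names `listen` and `respond`, the canned commands, and the simulated input are assumptions for the example, not the project's final implementation (which uses a real microphone and speech recognition).

```python
import datetime

def listen(simulated_input):
    # Stand-in for microphone input; the real assistant replaces this
    # with a speech-recognition call on recorded audio.
    return simulated_input.lower().strip()

def respond(command):
    # Map a recognized command to a textual response (the "instructions"
    # the assistant is fed).
    if "time" in command:
        now = datetime.datetime.now().strftime("%H:%M:%S")
        return f"Sir, the time is {now}"
    if "who made you" in command:
        return "I have been created by my programmer."
    return "Say that again, please."

print(respond(listen("What is the TIME now?")))
```

The real assistant simply runs this listen-then-respond pair in a loop, with speech output instead of `print`.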
The first step is to install and import all the necessary libraries. Use pip install to install the
libraries before importing them. Following are some of the key libraries used in this program:
The playsound package is used to give voice to the answer; it allows Python to play
MP3 files.
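The installation step can be done from the command line; for example (the PyPI package names below are assumed from the imports used later in this document):

```shell
pip install pyttsx3 SpeechRecognition wikipedia playsound
```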
Fig. 1.3: Block diagram

This feasibility study examined the possibility of using an independent voice recognition system
as the input device during a training delivery requirement. The intent was to determine whether
the voice recognition system could be incorporated into a training delivery system designed to
the voice recognition system could be incorporated into a training delivery system designed to
train students how to use the Communications Electronics Operating Instructions manual, a tool
used for communicating over the radio network during military operations. This study showed
how the voice recognition system worked in an integrated voice based delivery system for the
purpose of delivering instruction. An added importance of the study was that the voice system
was an independent speech recognition system. At the time this study was conducted, there did
not exist a reasonably priced speech recognition system that interfaced with both graphics and
authoring software which allowed any student to speak to the system without training the system
to recognize the individual student's voice. This feature increased the usefulness and flexibility of
the system. The methodology for this feasibility study was a development and evaluation model.
This required a market analysis, development of the voice system and instructional courseware,
testing the system using a sample population from the Armor School at Ft. Knox, Kentucky, and
making required alterations. The data collection approach was multifaceted. There were surveys
to be completed by each subject: a student profile survey, a pretest, a posttest, and an opinion
survey about how well the instruction met expectations. Data was also collected concerning how
often the recognition system recognized, did not recognize, or misrecognized the voice of each
subject. The information gathered was analyzed to determine how well the voice recognition
system performs in a training delivery application. The findings of this feasibility study indicated
that an effective voice based training delivery system could be developed by integrating an IBM
clone personal computer with a graphics board and supporting software, signal processing board
and supporting software for audio output and input, and instructional authoring software.
Training was delivered successfully since all students completed the course, 85% performed
better on the posttest than on the pretest, and the mean gain scores more than satisfied the
expected criterion for the training course. The misrecognition factor was 12%. An important
finding of this study is that the misrecognition factor did not affect the students' opinion of how
well the voice system operated or the students' learning gain.
METHODOLOGY/PLANNING OF WORK
This is the part where I tell you the basic requirements for this project. You'll need Python 3.6.
We'll be using the pyttsx3 package, which is a text-to-speech library for Python. The main reason
we use it is that it works offline. Another basic requirement of this project is
Python's SpeechRecognition library. There are other requirements for the project, which are
listed below; we'll understand them as we go ahead. An incomplete description of the college is
often conveyed as well, since all of the college's terms and conditions are not known to students.
The overall system design consists of the following phases:
Our desktop assistant is named NOVA, and we have developed it for under 5000. It has an
interface with two buttons: Start and Exit. As soon as we launch the application, it tells us to
wait until it has opened; we then click Start to run it. Once running, our assistant NOVA asks,
"How can I help you?". The user then gives a voice command to the assistant. If the user gives
the voice command "Describe yourself", NOVA responds with:
It provides information regarding the weather and news; it can play music, search for topics
on Wikipedia, set up an alarm, and display the current date and time.
The user can collect information through this application.
It reduces both manpower and time. Thanks to NLP support, the user can ask queries in a very
natural way; there is no need to phrase queries in a strict, specific format. The user should only
be aware of the general rules of the English language. The goal is to provide people a quick and
easy way to have their questions answered.
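The date-and-time feature mentioned above needs nothing beyond Python's standard library. A minimal sketch (the exact format strings are an assumption; any `strftime` format would do):

```python
import datetime

# Build the strings the assistant would speak for "current date and time".
now = datetime.datetime.now()
date_str = now.strftime("%A, %d %B %Y")  # weekday, day, month, year
time_str = now.strftime("%H:%M:%S")      # 24-hour clock
print(f"Today is {date_str}; the time is {time_str}")
```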
Source Code: Python voice desktop assistant
import pyttsx3
import speech_recognition as sr
import datetime
import wikipedia
import webbrowser

# set up the text-to-speech engine -------------------------------------------
engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
rate = engine.getProperty('rate')
engine.setProperty('rate', rate - 20)  # slow the speech down slightly
engine.setProperty('voice', voices[0].id)

# ------- function 1 (computer speaks) ----------------------------------------
def speak(audio):
    engine.say(audio)
    engine.runAndWait()

# ----------------------- (computer wishes me) --------------------------------
def wishMe():
    hour = datetime.datetime.now().hour
    if 0 <= hour < 12:
        speak('Good morning aru sir!')
    elif 12 <= hour < 18:
        speak('Good afternoon aru sir!')
    else:
        speak('Good evening sir!')
    speak('I am Phoenix! What can I do for you?')

# ----------------------- (computer takes your voice) -------------------------
def takeCommand():
    # takes microphone input from the user and returns the string output
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print('Listening..')
        r.pause_threshold = 1
        audio = r.listen(source)
    try:
        print('Recognizing...!!')
        query = r.recognize_google(audio, language='en-in')
        print(f"User said: {query}\n")
    except Exception:
        print("Say again please...")
        return "None"
    return query

if __name__ == "__main__":
    wishMe()  # first it will wish you, then take your voice
    while True:
        query = takeCommand().lower()
        # logic for tasks ------------------------------------------------------
        if 'wikipedia' in query:
            speak('Searching Wikipedia...')
            query = query.replace("wikipedia", "")
            results = wikipedia.summary(query, sentences=2)
            speak("According to Wikipedia")
            print(results)
            speak(results)
        elif "who made you" in query:
            speak("I have been created by, programmer ARU.")
        elif 'open youtube' in query:
            webbrowser.open("youtube.com")
        elif 'the time' in query:
            strTime = datetime.datetime.now().strftime("%H:%M:%S")
            speak(f"Sir, the time is, {strTime}")
        elif 'which college i study from' in query:
            speak("Sir, you study at Rajshree Institute of Management and Technology")
        elif 'rajat' in query:
            speak("Sir, he is your fast friend, and also naughty")
        elif "thank you" in query:
            speak("You're welcome, sir")
CONCLUSION
Through this voice assistant, we have automated various services using a single-line
command. It eases most of the user's tasks, such as searching the web, retrieving weather
forecast details, vocabulary help and medical-related queries. We aim to make this project a
complete server assistant and make it smart enough to act as a replacement for general server
administration. Future plans include integrating Jarvis with mobile using React Native to
provide a synchronised experience between the two connected devices. Further, in the long run,
Jarvis is planned to feature auto deployment supporting Elastic Beanstalk, file backups, and all
operations which a general server administrator does. The functionality would be seamless
enough to replace the server administrator with Jarvis.
BIBLIOGRAPHY
We owe a debt of sincere gratitude and respect to our guide and mentor Ronak Jain, Professor, AITR,
Indore, for his sagacious guidance, vigilant supervision and valuable critical appreciation throughout this
project work. We express profound gratitude and heartfelt thanks to Dr. Kamal Kumar Sethi, HOD IT,
AITR, Indore, for his support, suggestions and inspiration for carrying out this project. We thank
Dr. S C Sharma, Director, AITR, Indore, for the support and guidance received whenever needed.