Phase 3: AI-Powered Voice Assistant
AI ASSISTANT
ABSTRACT:
In today's digital landscape, the ubiquitous presence of virtual assistants like Siri, Alexa, and
Google Assistant underscores the transformative power of artificial intelligence (AI) in everyday
life. However, while these platforms offer convenience and utility, they often lack the depth of
personalization necessary to fully cater to individual needs and preferences.
INTRODUCTION:
The proliferation of AI-powered virtual assistants has reshaped the way humans interact with
technology, blurring the lines between the digital and physical realms. While these assistants
have undoubtedly simplified tasks and provided access to a wealth of information, their generic
nature often falls short of meeting the diverse and nuanced needs of individual users.
As personalized AI assistants rely on the collection and analysis of sensitive user data, ensuring
privacy and data security is of paramount importance. Robust encryption, anonymization
techniques, and stringent access controls should be implemented to safeguard user information
from unauthorized access or misuse.
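As one illustration of these safeguards, user identifiers can be pseudonymized with a keyed hash before they are stored or used for training. The sketch below uses only Python's standard library; the secret value and helper name are hypothetical, and a real deployment would keep the secret in a key store rather than in source code.

```python
import hmac
import hashlib

# Hypothetical server-side secret; in practice this lives in a key store,
# never in source code.
SECRET_KEY = b"example-secret-key"

def anonymize_user_id(user_id: str) -> str:
    """Pseudonymize an identifier with HMAC-SHA256 so that stored logs
    and training data never contain the raw value."""
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

token = anonymize_user_id("alice@example.com")
print(token != "alice@example.com", len(token))  # stable opaque token, 64 hex chars
```

The same input always maps to the same token, so personalization still works per user, while the raw identifier is never written to disk.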
SCALABILITY AND EXTENSIBILITY:
HARDWARE REQUIREMENTS:
1. Processing Power: Sufficient CPU or GPU power to handle natural language processing
(NLP) tasks and machine learning algorithms efficiently.
2. Memory: Adequate RAM to store and manipulate large datasets, as well as run multiple
processes concurrently.
3. Storage: Sizable storage space for storing training data, models, and user preferences.
4. Connectivity: Stable internet connection for accessing cloud-based services and data retrieval.
SOFTWARE REQUIREMENTS:
1. Operating System: Compatibility with the operating systems you intend to use, such as
Windows, macOS, Linux, Android, or iOS.
3. Natural Language Processing (NLP) Libraries: Integration with NLP libraries like NLTK,
spaCy, or Hugging Face's Transformers for language understanding and generation.
4. Speech Recognition: Integration with speech recognition APIs or libraries such as Google
Speech-to-Text or Mozilla DeepSpeech for accurately transcribing spoken commands.
DEVELOPMENT STEPS:
2. Choose a Platform or Framework: Depending on your technical skills and preferences, you
might choose platforms like Dialogflow, Wit.ai, or build from scratch using libraries like
TensorFlow or PyTorch.
3. Data Collection: Gather data relevant to your assistant's tasks. This could be text data for
natural language processing tasks, images for computer vision, or any other type of data your
assistant will work with.
4. Training Data: Annotate and label your data for training. This step is crucial for supervised
learning.
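Once a platform is chosen, the core of such an assistant is mapping a transcribed utterance to an action. The sketch below shows a minimal keyword-based router; the phrases and intent names are illustrative placeholders, and a framework like Dialogflow or a trained classifier would replace this table in practice.

```python
def route_command(query: str) -> str:
    """Map a transcribed utterance to an intent via simple keyword matching."""
    query = query.lower()
    # ordered list so more specific phrases are checked before generic ones
    intents = [
        ("open youtube", "open_youtube"),
        ("wikipedia", "search_wikipedia"),
        ("play music", "play_music"),
        ("weather", "get_weather"),
    ]
    for phrase, intent in intents:
        if phrase in query:
            return intent
    return "unknown"

print(route_command("Jarvis, open YouTube please"))  # open_youtube
print(route_command("tell me a story"))              # unknown
```

The main loop of the assistant shown later in this document works the same way, with a chain of `if`/`elif` keyword checks instead of a lookup table.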
CONCLUSION:
Building a personalized AI assistant represents a significant evolution in human-computer
interaction, offering users a bespoke and tailored experience that enhances productivity,
efficiency, and overall well-being. By prioritizing customization, user-centric design, multimodal
inputs, contextual understanding, privacy, and scalability, personalized AI assistants have the
potential to revolutionize the way individuals interact with technology.
FLOW CHART:
CODE IMPLEMENTATION:
import subprocess
import wolframalpha
import pyttsx3
import tkinter
import json
import random
import operator
import speech_recognition as sr
import datetime
import wikipedia
import webbrowser
import os
import winshell
import pyjokes
import feedparser
import smtplib
import ctypes
import time
import requests
import shutil
from urllib.request import urlopen

engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)
assname = "Jarvis"  # default assistant name
def speak(audio):
    engine.say(audio)
    engine.runAndWait()

def wishMe():
    hour = int(datetime.datetime.now().hour)
    if 0 <= hour < 12:
        speak("Good Morning!")
    elif 12 <= hour < 18:
        speak("Good Afternoon!")
    else:
        speak("Good Evening!")
    speak(assname)

def username():
    speak("What should I call you?")
    uname = takeCommand()
    speak("Welcome Mister")
    speak(uname)
    columns = shutil.get_terminal_size().columns
    print("#####################".center(columns))
    print("#####################".center(columns))

def takeCommand():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Listening...")
        r.pause_threshold = 1
        audio = r.listen(source)
    try:
        print("Recognizing...")
        # transcribe the captured audio with Google's free web API
        query = r.recognize_google(audio, language='en-in')
        print("User said:", query)
    except Exception as e:
        print(e)
        return "None"
    return query
def sendEmail(to, content):
    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.ehlo()
    server.starttls()
    server.login('your Email id', 'your Email password')
    server.sendmail('your Email id', to, content)
    server.close()
if __name__ == '__main__':
    clear = lambda: os.system('cls')
    clear()
    wishMe()
    username()

    while True:
        query = takeCommand().lower()

        # recognition of command
        if 'wikipedia' in query:
            speak('Searching Wikipedia...')
            query = query.replace("wikipedia", "")
            results = wikipedia.summary(query, sentences=3)
            speak("According to Wikipedia")
            print(results)
            speak(results)

        elif 'open youtube' in query:
            webbrowser.open("youtube.com")

        elif 'open google' in query:
            webbrowser.open("google.com")

        elif 'open stackoverflow' in query:
            webbrowser.open("stackoverflow.com")
        elif 'play music' in query:
            # music_dir = "G:\\Song"
            music_dir = "C:\\Users\\GAURAV\\Music"
            songs = os.listdir(music_dir)
            print(songs)
            os.startfile(os.path.join(music_dir, random.choice(songs)))

        elif 'open opera' in query:
            codePath = r"C:\Users\GAURAV\AppData\Local\Programs\Opera\launcher.exe"
            os.startfile(codePath)
        elif 'email to gaurav' in query:
            try:
                speak("What should I say?")
                content = takeCommand()
                to = "Receiver email address"  # replace with the real address
                sendEmail(to, content)
                speak("Email has been sent!")
            except Exception as e:
                print(e)

        elif 'send email' in query:
            try:
                speak("What should I say?")
                content = takeCommand()
                speak("Whom should I send it to? Please type the address.")
                to = input()
                sendEmail(to, content)
                speak("Email has been sent!")
            except Exception as e:
                print(e)
        elif 'change my name to' in query:
            assname = query.replace("change my name to", "")

        elif 'change name' in query:
            assname = takeCommand()
            speak(assname)

        elif 'exit' in query:
            exit()

        elif 'joke' in query:
            speak(pyjokes.get_joke())

        elif 'calculate' in query:
            client = wolframalpha.Client(app_id)  # app_id: your Wolfram Alpha App ID
            indx = query.lower().split().index('calculate')
            res = client.query(' '.join(query.split()[indx + 1:]))
            answer = next(res.results).text
            speak("The answer is " + answer)

        elif 'search' in query:
            webbrowser.open(query)

        elif 'presentation' in query:
            os.startfile(power)  # power: path to the presentation file
        elif 'change background' in query:
            # SPI_SETDESKWALLPAPER = 20
            ctypes.windll.user32.SystemParametersInfoW(20,
                                                       0,
                                                       "Location of wallpaper",
                                                       0)

        elif 'open bluestacks' in query:
            appli = r"C:\ProgramData\BlueStacks\Client\Bluestacks.exe"
            os.startfile(appli)
        elif 'news' in query:
            try:
                jsonObj = urlopen('https://newsapi.org/v1/articles?source=the-times-of-india'
                                  '&sortBy=top&apiKey=' + news_api_key)  # news_api_key: your NewsAPI key
                data = json.load(jsonObj)
                i = 1
                for item in data['articles']:
                    print(str(i) + '. ' + item['title'])
                    print(item['description'] + '\n')
                    i += 1
            except Exception as e:
                print(str(e))
        elif 'lock window' in query:
            ctypes.windll.user32.LockWorkStation()

        elif 'shutdown system' in query:
            subprocess.call(['shutdown', '/p', '/f'])

        elif "don't listen" in query:
            speak("for how much time you want to stop jarvis from listening commands")
            a = int(takeCommand())
            print(a)
            time.sleep(a)

        elif 'where is' in query:
            location = query.replace("where is", "")
            speak(location)
            webbrowser.open("https://www.google.com/maps/place/" + location)

        elif 'restart' in query:
            subprocess.call(["shutdown", "/r"])

        elif 'hibernate' in query:
            speak("Hibernating")
            subprocess.call(["shutdown", "/h"])

        elif 'log off' in query:
            subprocess.call(["shutdown", "/l"])
        elif 'write a note' in query:
            speak("What should I write?")
            note = takeCommand()
            with open('jarvis.txt', 'w') as file:
                speak("Should I include date and time?")
                snfm = takeCommand()
                if 'yes' in snfm.lower():
                    strTime = datetime.datetime.now().strftime("%H:%M:%S")
                    file.write(strTime)
                    file.write(" :- ")
                    file.write(note)
                else:
                    file.write(note)

        elif 'show note' in query:
            speak("Showing Notes")
            with open('jarvis.txt', 'r') as file:
                contents = file.read()
                print(contents)
                speak(contents)
        elif 'update' in query:
            speak("After downloading file please replace this file with the downloaded one")
            url = "URL of the updated script"  # replace with the real download link
            r = requests.get(url, stream=True)
            total_length = int(r.headers.get('content-length'))
            with open('Voice.py', 'wb') as Pypdf:
                # download in 1 KB chunks; expected chunks = (total_length / 1024) + 1
                for ch in r.iter_content(chunk_size=1024):
                    if ch:
                        Pypdf.write(ch)
        elif 'jarvis' in query:
            wishMe()
            speak(assname)

        elif 'weather' in query:
            api_key = "OpenWeatherMap Api key"  # replace with your key
            base_url = "http://api.openweathermap.org/data/2.5/weather?"
            speak("City name")
            city_name = takeCommand()
            complete_url = base_url + "appid=" + api_key + "&q=" + city_name
            response = requests.get(complete_url)
            x = response.json()
            if x["cod"] != "404":
                y = x["main"]
                current_temperature = y["temp"]
                current_pressure = y["pressure"]
                current_humidity = y["humidity"]
                z = x["weather"]
                weather_description = z[0]["description"]
                speak("Temperature in kelvin is " + str(current_temperature) +
                      ", humidity in percentage is " + str(current_humidity) +
                      ", description " + str(weather_description))
            else:
                speak("City not found")

        elif 'send message' in query:
            # requires a configured Twilio client, e.g.:
            # from twilio.rest import Client
            # client = Client(account_sid, auth_token)
            message = client.messages \
                .create(
                    body=takeCommand(),
                    from_='Sender No',
                    to='Receiver No')
            print(message.sid)
        elif "wikipedia" in query:
            # note: unreachable while the Wikipedia-summary branch above also
            # matches 'wikipedia' first
            webbrowser.open("wikipedia.com")

        elif 'who are you' in query:
            speak(assname)

        elif 'what is' in query or 'who is' in query:
            client = wolframalpha.Client("API_ID")  # replace with your Wolfram Alpha App ID
            res = client.query(query)
            try:
                print(next(res.results).text)
                speak(next(res.results).text)
            except StopIteration:
                speak("I'm not sure about that, maybe you should give me some time")
OUTPUT:
Listening…
Recognizing…
#####################
#####################
Listening…
Recognizing…
(‘As the history majors among you here today know all too well, when people in power invent their own
facts and attack those who question them, it can mark the beginning of the end of a free society. That is
not hyperbole. It is what authoritarian regimes throughout history have done. They attempt to control
reality. Not just our laws and our rights and our budgets, but our thoughts and beliefs.’, ‘Hillary Clinton’)
Listening…
Recognizing…
Gaurav is an Indian and Nepalese male name. The name literally means pride.
Listening…
Recognizing…
AI-powered voice assistants have become integral to modern technology, significantly enhancing
user interactions with devices and services. These assistants leverage advanced natural language
processing (NLP) and machine learning (ML) algorithms to understand and respond to user queries,
offering convenience and efficiency in tasks such as information retrieval, smart home control, and
personal organization. As AI technology continues to evolve, voice assistants are becoming more
intuitive, accurate, and capable of handling complex interactions, making them indispensable in various
domains, including healthcare, education, and customer service.
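As a toy illustration of the retrieval step such systems perform, the sketch below matches a user query against a few canned question-answer pairs by word overlap. All the data here is invented for the example, and production assistants use trained language models rather than this kind of lookup.

```python
def best_answer(query, qa_pairs):
    """Return the answer whose question shares the most words with the query."""
    q_words = set(query.lower().split())

    def overlap(item):
        # number of words the stored question shares with the user's query
        return len(q_words & set(item[0].lower().split()))

    question, answer = max(qa_pairs, key=overlap)
    return answer

faq = [
    ("what time is it", "It is 10 o'clock."),
    ("what is the weather today", "It is sunny."),
    ("play some music", "Playing your playlist."),
]

print(best_answer("how is the weather", faq))  # It is sunny.
```

Even this crude measure picks the weather entry, because three of the query's words appear in that stored question; real NLP pipelines replace word overlap with learned semantic similarity.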
1. Deeper Personalization: Imagine voice assistants that anticipate your needs and preferences.
They might suggest recipes based on what's in your fridge or adjust the thermostat based on your usual
routine.
2. Multi-Modal Interactions: Voice assistants will likely integrate with other smart home devices,
allowing for more complex interactions. For instance, you might ask your assistant to "turn on the movie
lights and start a rom-com on Netflix."
3. Voice Search Revolution: Voice search is expected to become the dominant search method, with voice
assistants acting as our primary gateway to information. This will change how businesses reach
customers, with voice search optimization becoming crucial.
4. Enhanced Accessibility: Voice assistants are a boon for people with visual impairments or physical
limitations. As AI gets better at understanding natural language, voice assistants will become even more
helpful for everyone.
6. Voice Biometrics and Security: Voice recognition can become a secure way to identify yourself and
access personal information. This could streamline tasks like online banking or medical record access.
Challenges Remain:
Despite the exciting possibilities, there are challenges to address. Biases in training data can lead to unfair
or discriminatory behavior by voice assistants. Additionally, security and privacy concerns around data
collection need to be thoughtfully addressed.
Overall, the future of AI-powered voice assistants is bright. They have the potential to make our lives
easier, more efficient, and even more personalized.