VOICE RECOGNITION SYSTEM

USING PYTHON
A Project work submitted in partial fulfillment of the requirements for
the degree of
Bachelor of Computer Applications
To the
Periyar University, Salem-11
By
YUGESH M
[C21UG155CAP054]

PG & RESEARCH DEPARTMENT OF COMPUTER SCIENCE
DON BOSCO COLLEGE
(Affiliated to Periyar University, Salem-11)
DHARMAPURI - 636 809

MAR/APR – 2024
CERTIFICATE
This is to certify that the project work on "VOICE RECOGNITION", submitted by

YUGESH M (Reg. No: C21UG155CAP054) in partial fulfilment of the degree of

Bachelor of Computer Applications as per the syllabus of Periyar University, Salem,

during the year 2024, represents independent work of the candidate and has not

formed part of the award of any other degree or diploma.

Place: Dharmapuri

Date:

Head of the Department Guide Signature

Date of viva-voce exam……………………


DECLARATION

I hereby declare that the project entitled "VOICE RECOGNITION SYSTEM


USING PYTHON" submitted to the Periyar University, Salem in partial fulfillment of the
requirements for the award of the degree of Bachelor of Computer Applications is a
record of original work done by me under the supervision and guidance of Mr. A.
PRAKASH, M.C.A., M.Phil., Assistant Professor, PG & Research Department of
Computer Science, Don Bosco College, Dharmapuri, and that it has not previously formed
the basis of the award of any Degree, Diploma, Associateship, Fellowship or any other
similar title of any University or Institution.

Place : Dharmapuri Signature of candidate


Date : YUGESH M
ACKNOWLEDGEMENT

I thank the almighty God for the blessings that have been showered upon me to
complete the project successfully. I express my heartfelt thanks to my parents who have
encouraged me in all ways to do my project.

I wish to express my thanks to Rev. Fr. Dr. Rabert Ramesh Babu SDB, Secretary and
Rector, Don Bosco College, Dharmapuri, for his constant encouragement.

I render my special thanks to Rev. Fr. Dr. J. Angelo Joseph SDB, Principal, and
Rev.Fr. Dr. S. Bharathi Bernadsha SDB, Vice-Principal, from Don Bosco College,
Dharmapuri, for their support and constant encouragement.

I am highly thankful to Mr. S. Vadamalai, M.C.A., M.Phil., B.Ed., Head, UG & PG


Research Department of Computer Science, Don Bosco College, Dharmapuri, for his
valuable guidance, encouragement, and keen interest in the completion of my project work.

I also take the opportunity to express my gratitude to my Guide Mr. A. PRAKASH,


M.C.A., M.Phil., Assistant Professor, UG & PG Research Department of Computer Science,
Don Bosco College, Dharmapuri, for his outstanding guidance evinced in executing my
project work.

I also take the opportunity to express my gratitude to my Class In charge Mr. S.


Michael Bosco Duraisamy, M.C.A., M.Phil., Ph.D. Assistant Professor, UG & PG Research
Department of Computer Science, Don Bosco College, Dharmapuri, for his support evinced
in executing my project work.

I sincerely thank my department staff members of Don Bosco College, Dharmapuri,


for their guidance evinced in executing my project.

[YUGESH M]
ABSTRACT

VOICE ASSISTANT

A voice assistant is an artificial intelligence-driven software application that uses


natural language processing (NLP) and speech recognition technology to understand
and respond to spoken commands or queries from users. These assistants are
designed to perform various tasks, such as providing information, controlling smart
home devices, setting reminders, playing music, sending messages, making phone
calls, and much more. Popular voice assistants include Amazon's Alexa, Apple's Siri,
Google Assistant, and Microsoft's Cortana. They are integrated into smartphones,
smart speakers, and other internet-connected devices, offering users hands-free
interaction and convenience in performing everyday tasks.
CONTENTS

Chapter No Title of the Content

1 INTRODUCTION

1.1 ORGANIZATION PROFILE


1.2 SYSTEM SPECIFICATION
1.2.1 HARDWARE CONFIGURATION
1.2.2 SOFTWARE CONFIGURATION

2 SYSTEM STUDY

2.1 EXISTING SYSTEM


2.1.1 DESCRIPTION
2.1.2 DRAWBACKS
2.2 PROPOSED SYSTEM
2.2.1 DESCRIPTION
2.2.2 FEATURES

3 SYSTEM DESIGN AND DEVELOPMENT

3.1 FILE DESIGN


3.2 INPUT DESIGN
3.3 OUTPUT DESIGN
3.4 CODE DESIGN
3.5 DATABASE DESIGN
3.6 SYSTEM DEVELOPMENT
3.6.1 DESCRIPTION OF MODULES
4 TESTING AND IMPLEMENTATION

5 CONCLUSION

6 BIBLIOGRAPHY

7 APPENDICES
A. DATA FLOW DIAGRAM
B. TABLE STRUCTURE
C. SAMPLE CODING
D. SAMPLE INPUT
E. SAMPLE OUTPUT
CHAPTER I
INTRODUCTION
Introduction:

Voice assistants are AI-powered digital assistants that respond to voice
commands and perform various tasks or provide information based on those commands. They
utilize natural language processing (NLP) and speech recognition technologies to understand
and interpret human speech.

These assistants can be found in various devices, including smartphones, smart speakers,
smart TVs, and even in cars. They are designed to simplify tasks and enhance user experience
by providing hands-free interaction with devices and services.

Popular voice assistants include Amazon's Alexa, Apple's Siri, Google Assistant, and
Microsoft's Cortana. These assistants can perform a wide range of functions, such as setting
reminders, answering questions, playing music, controlling smart home devices, sending
messages, and providing personalized recommendations.

As technology continues to advance, voice assistants are becoming more sophisticated and
capable of understanding complex commands and engaging in more natural conversations
with users. They are increasingly integrated into our daily lives, offering convenience and
efficiency in various aspects of our interactions with technology.
1.1 ORGANIZATION PROFILE
1.2. SYSTEM SPECIFICATION

1.2.1. HARDWARE CONFIGURATION


 Processor : Intel Core i3
 RAM : 16 GB
 Hard disk capacity : SSD 1 TB
 System type : 64-bit operating system

1.2.2. SOFTWARE SPECIFICATION


 Operating System : Windows 10 Pro
 Front-end : Python and PyScript
CHAPTER II
SYSTEM STUDY
2.0. SYSTEM STUDY
2.1. Existing System

2.1.1. Drawbacks

Misunderstandings: They may misinterpret commands or questions, particularly in noisy
environments or with accents.
Limited Context Understanding: Voice assistants struggle with understanding context,
leading to misinterpretations of follow-up questions or commands.
Ambiguity Handling: Complex or ambiguous requests can confuse voice assistants, leading
to inaccurate responses or actions.
Privacy Concerns: Since they need to listen continuously for activation, there are concerns
about privacy and potential misuse of recorded conversations.
Data Security Risks: Recorded conversations and personal data could be vulnerable to
hacking or unauthorized access, raising security risks.
Dependency on Internet Connection: Most voice assistants require an internet connection
to function, limiting their usability in offline situations.
Lack of Emotional Intelligence: Voice assistants lack emotional intelligence, making
interactions feel impersonal and less empathetic.
Limited Language Support: They may not support all languages or dialects, limiting
accessibility for non-English speakers.
Inability to Handle Complex Tasks: While great for simple tasks, voice assistants struggle
with more complex tasks that require nuanced decision-making.
Accessibility Issues: People with speech impediments or disabilities may face difficulties
using voice assistants effectively.
Difficulty with Names and Pronunciations: Voice assistants may have trouble pronouncing
or understanding uncommon names or words.
Inability to Differentiate Voices: They may not be able to distinguish between different
users, leading to mixed-up preferences or information.
Hardware Limitations: The quality of microphones and speakers in devices can affect the
accuracy and reliability of voice recognition.
Intrusiveness: Some users may find constant listening and the need for activation phrases
intrusive or uncomfortable.
Integration Limitations: Integration with third-party apps and services may be limited,
restricting the range of tasks they can perform.
Potential for Bias: Voice recognition algorithms may exhibit bias, leading to disparities in
accuracy and performance across demographics.
Maintenance Challenges: Voice assistants require regular updates and maintenance to
improve performance and security, which can be inconvenient for users.
2.2. Proposed System:

2.2.1. Description

A voice assistant is a type of digital assistant that uses speech recognition, natural
language processing, and artificial intelligence to interpret spoken commands or questions and
perform tasks or provide information accordingly. These assistants are typically integrated into
devices such as smartphones, smart speakers, and other smart home devices, allowing users to interact
with technology through voice commands rather than traditional input methods like typing or tapping.

2.2.2 Features

 Accessibility

 Task Automation

 Natural Language Processing

 Personalization

 Integration

 Continuous Improvement

 Privacy and Security

 Accuracy and Reliability


CHAPTER III
SYSTEM DESIGN AND DEVELOPMENT
3.1 FILE DESIGN

3.2 INPUT DESIGN


3.4 CODE DESIGN

3.6 SYSTEM DEVELOPMENT

System development in Python typically involves creating software systems or
applications using the Python programming language. Here is a general outline of the process:

Define Requirements: Understand the purpose and goals of the system. Identify user
requirements and expectations.

Design: Create a system architecture and design based on the requirements. Decide on the
overall structure of the system, break the system down into modules or components, and
design the user interface if applicable.

Set Up the Development Environment: Install Python and any necessary libraries or
frameworks. Choose an integrated development environment (IDE) or a text editor.

Coding: Implement the system logic based on the design. Write Python code for each module
or component and follow coding best practices to ensure maintainability.

Testing: Develop and execute test cases to verify the functionality of the system. Perform
unit testing for individual components, conduct integration testing to ensure different
modules work together, and address and fix any bugs or issues discovered during testing.

Documentation: Create documentation for the code, including comments, docstrings, and any
necessary user or developer documentation. Document the system architecture and design
choices.

Version Control: Use a version control system (e.g., Git) to track changes in the code base.
Collaborate with a team if applicable.

Deployment: Package the application for deployment. Choose deployment options, whether
on-premises or on a cloud platform, and set up any necessary databases, servers, or
infrastructure.

Maintenance and Updates: Monitor and maintain the system after deployment. Address any
issues that arise in a timely manner, and plan for and implement updates or new features as
needed.

Security Considerations: Implement security measures to protect the system from
vulnerabilities. Regularly update dependencies and libraries to patch security issues.

Python supports a variety of frameworks and libraries that can help in different aspects of
system development. Some popular ones include Django and Flask for web development,
Tkinter for GUI applications, and NumPy or Pandas for data-related tasks. The exact steps
and tools used may vary based on the specific requirements and nature of the system being
developed.
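As an illustrative sketch only (not the project's actual file layout), the assistant could be
broken into modules and tied together by a small dispatcher; the helper modules assumed
here mirror the files listed in Appendix D (Date.py, Wikipedia.py, Webside.py):

import Date
import Wikipedia
import Webside

def handle(command):
    # Dispatch one textual command to the matching helper module
    command = command.lower()
    if "date" in command:
        return Date.date()
    if "wikipedia" in command:
        return Wikipedia.tell_me_about(command.replace("wikipedia", "").strip())
    if "open" in command:
        Webside.website_opener(command.replace("open", "").strip())
        return "Opening website"
    return "Sorry, I did not understand that."

if __name__ == '__main__':
    print(handle("what is the date"))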

3.6.1 DESCRIPTION OF MODULES

Modules List:

Speech Recognition:

o Purpose: This module enables your voice assistant to recognize spoken language and
convert it into text.
o Functionality: It processes audio input (speech) and provides the corresponding
textual representation.
o Example Use: Used in voice assistants like Alexa, Siri, and Google Assistant.
o Installation: You can install it using the following command:

pip install SpeechRecognition
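A minimal usage sketch (assuming a working microphone and an internet connection for the
free Google recognizer) looks like this:

import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    print("Listening...")
    audio = recognizer.listen(source)        # record one phrase from the microphone

try:
    # send the recorded audio to Google's free speech recognition service
    text = recognizer.recognize_google(audio, language="en-in")
    print("You said:", text)
except sr.UnknownValueError:
    print("Sorry, the speech was not understood.")
except sr.RequestError as error:
    print("Recognition service error:", error)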


pyttsx3:

o Purpose: This module facilitates text-to-speech conversion.


o Functionality: It generates speech from text, allowing your assistant to communicate
verbally.
o Example Use: When your assistant needs to respond audibly to user queries.
o Installation: pip install pyttsx3
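A minimal text-to-speech sketch (the available voices depend on the operating system) looks
like this:

import pyttsx3

engine = pyttsx3.init()                     # on Windows this uses the SAPI5 driver
voices = engine.getProperty("voices")
engine.setProperty("voice", voices[0].id)   # select the first installed voice
engine.setProperty("rate", 180)             # speaking speed in words per minute
engine.say("Hello, I am your personal assistant.")
engine.runAndWait()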

WolframAlpha:

o Purpose: Used for computing expert-level answers using Wolfram’s algorithms and
AI technology.
o Functionality: Provides detailed information and answers based on user queries.
o Installation: pip install wolframalpha
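A minimal sketch is shown below; "YOUR_APP_ID" is a placeholder and must be replaced with a
real Wolfram|Alpha App ID:

import wolframalpha

client = wolframalpha.Client("YOUR_APP_ID")        # placeholder App ID
result = client.query("distance between earth and moon")
print(next(result.results).text)                   # first plain-text answer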

Tkinter:

o Purpose: A built-in module for creating graphical user interfaces (GUIs).


o Functionality: Allows you to design interactive interfaces for your voice assistant.
o Example Use: Building buttons, input fields, and other UI elements.
o Note: Tkinter comes pre-installed with Python.
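A minimal GUI sketch with a single button is shown below; the take_command callback is a
placeholder that would normally call the speech recognition routine:

import tkinter as tk

def take_command():
    # placeholder: connect this to the speech recognition routine
    print("Listening...")

root = tk.Tk()
root.title("Voice Assistant")
tk.Label(root, text="Click the button and speak").pack(pady=10)
tk.Button(root, text="Speak", command=take_command).pack(pady=10)
root.mainloop()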

Wikipedia:

o Purpose: Fetches information from Wikipedia or performs Wikipedia searches.


o Functionality: Useful for providing context or answering factual questions.
o Installation: pip install wikipedia
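A minimal sketch that fetches a three-sentence summary looks like this:

import wikipedia

summary = wikipedia.summary("Python (programming language)", sentences=3)
print(summary)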

Web Browser:

o Purpose: Enables web searches.


o Functionality: Opens a web browser and performs searches based on user requests.
o Note: This module is built-in with Python.
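A minimal sketch that opens a Google search for a query in a new browser tab looks like this:

import webbrowser
from urllib.parse import quote_plus

query = "voice recognition in python"
webbrowser.open_new_tab("https://www.google.com/search?q=" + quote_plus(query))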
CHAPTER IV

TESTING AND IMPLEMENTATION


TESTING AND IMPLEMENTATION

Testing and implementing a voice recognition system involves several steps to

ensure the functionality, reliability, and security of the system. Below are the key phases for
testing and implementation:

Testing:

Unit Testing:

Test individual components (modules, functions, classes) of the system.

Verify that each unit performs its intended functionality. A minimal example is sketched below.
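For example, a minimal unit test (a sketch using Python's built-in unittest module, assuming
the Date.py helper from Appendix D is importable as Date) could look like this:

import unittest
import Date          # the Date.py helper from Appendix D

class TestDateHelper(unittest.TestCase):
    def test_date_returns_string(self):
        # date() should return a non-empty string such as "Mar 05 2024"
        result = Date.date()
        self.assertIsInstance(result, str)
        self.assertTrue(len(result) > 0)

if __name__ == '__main__':
    unittest.main()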

Integration Testing:

Test the interaction between different components to ensure they work


together as expected. Verify that data flows correctly between modules.

Functional Testing:

Verify that the system meets the specified functional requirements. Test
different use cases, including recognizing spoken commands, answering queries, and
opening applications or websites.

User Interface (UI) Testing:

Ensure that the user interface is intuitive and user-friendly. Verify that all UI elements
are responsive and functional.

Performance Testing:

Test the system's performance under various conditions (e.g., high load).

Check response times for critical operations.

Security Testing:

Assess the system's security measures to protect user data. Test for
vulnerabilities and implement measures to secure the system.
Usability Testing:

Conduct usability tests to ensure the system is easy to use for all users.
Gather feedback on the user experience.

Compatibility Testing:

Ensure the system works correctly on different browsers and devices. Verify
compatibility with various operating systems.

Regression Testing:

Re-run previous tests after making updates to ensure existing functionality is


not affected.

Implementation:

Backend Development:

Dudu is an open-source personal assistant that you can customize and
extend. It supports features like text-to-speech, automation, and offline functionality.
Frontend Development:

Develop the user interface for user interaction. Ensure a
responsive design that works well on various devices.

Integration:

Integrate the frontend and backend components. Verify that data is correctly
transferred between the user interface and the backend.

Security Implementation:

Implement security measures, including encryption of sensitive data and


proper user authentication. Set up access controls based on user roles.

Deployment:

Deploy the system to a staging environment for final testing. Once satisfied,
deploy to the production environment.
Monitoring and Support:

Set up monitoring tools to track system performance. Provide ongoing support


and address any issues that arise post-deployment.

Feedback and Iteration:

Gather feedback from users and stakeholders. Iterate on the system based on
feedback and continuously improve it.

Remember to involve end-users and stakeholders throughout the testing and


implementation phases to ensure that the system meets their needs and expectations.
Regular communication and collaboration are crucial for the success of the voice
recognition system.
CHAPTER V

CONCLUSION

CONCLUSION

In conclusion, a well-designed voice assistant is a crucial tool for modern


human-computer interaction. It not only improves the efficiency of everyday tasks but also
contributes to a better overall user experience and more effective, hands-free use of devices
and services. Implementing such a system can positively impact users, fostering a more
streamlined and user-centric computing environment.
CHAPTER VI

BIBLIOGRAPHY
6.0. BIBLIOGRAPHY

 The Python Language Reference: a printed edition of the official Python language
reference manual by Guido van Rossum.
 A concise desktop reference for Python 3.5 and 2.7 (with mention of 3.6
features).
 Covers the language itself, built-in types and functions, the standard library,
and crucial third-party extensions such as Numeric, Tkinter, twisted.internet,
and Cheetah.
 A concise reference book, awarded as one of the best regular expression books
in the years 2020 and 2021, written in a very friendly manner to teach and
practice regular expressions easily and efficiently.

WEBSITE

1. www.geeksforgeeks.org

2. www.w3schools.com

3. www.learnpython.org
CHAPTER VII

APPENDICES
APPENDICES

A. DATA FLOW DIAGRAM

DFD INTRODUCTION

DFD is the abbreviation for Data Flow Diagram. The flow of data of a system or a
process is represented by DFD. It also gives insight into the inputs and outputs of each entity
and the process itself. DFD does not have control flow and no loops or decision rules are
present. Specific operations depending on the type of data can be explained by a flowchart.
Data Flow Diagram can be represented in several ways. The DFD belongs to structured-
analysis modeling tools. Data Flow diagrams are extremely popular because they help us to
visualize the major steps and data involved in software- system processes.

The Data Flow Diagram has four components:

1. Process
Input to output transformation in a system takes place because of the process
function. The symbols of a process are a rectangle with rounded corners, an oval,
a rectangle, or a circle. The process is named in one word, a short phrase, or a
sentence to express its essence.

2. Data Flow

Data flow describes the information transferred between different parts of
the system. The arrow symbol is the symbol of data flow. A relatable name
should be given to the flow to determine the information which is being moved.
Data flow also represents material along with information that is being moved.
Material shifts are modeled in systems that are not merely informative. A given
flow should only transfer a single type of information. The direction of flow is
represented by the arrow, which can also be bi-directional.
3. Warehouse

The data is stored in the warehouse for later use. Two horizontal lines
represent the symbol of the warehouse. The warehouse is not restricted to being
a data file; rather, it can be anything such as a folder with documents, an optical
disc, or a filing cabinet. The data warehouse can be viewed independently of its
implementation. When data flows from the warehouse it is considered as data
reading, and when data flows to the warehouse it is called data entry or data
updating.

4. Terminator

The Terminator is an external entity that stands outside of the system and
communicates with the system. It can be, for example, organizations like banks,
groups of people like customers or different departments of the same organization,
which is not a part of the model system and is an external entity. Modeled systems
also communicate with terminators.

Rules for creating DFD

1. The name of the entity should be easy and understandable without any extra
assistance (like comments).

2. The processes should be numbered or put in an ordered list to be referred to
easily.

3. The DFD should maintain consistency across all the DFD levels.

4. A single DFD can have a maximum of nine processes and a minimum of three
processes.
Levels of DFD
DFDs use hierarchy to maintain transparency; thus multilevel DFDs can be created.
Levels of DFD are as follows:

1. 0-level DFD

2. 1-level DFD

3. 2-level DFD

Advantages of DFD

1. It helps us to understand the functioning and the limits of a system.

2. It is a graphical representation which is very easy to understand as it helps
visualize contents.

3. A Data Flow Diagram represents a detailed and well-explained diagram of
system components.

4. It is used as a part of a system documentation file.

5. Both technical and non-technical people can understand Data Flow
Diagrams because they are quite easy to understand.

Disadvantages of DFD
1. At times a DFD can confuse the programmers regarding the system.
2. A Data Flow Diagram takes a long time to be generated, and many times,
due to this reason, analysts are denied permission to work on it.
DATA FLOW DIAGRAM
C. SAMPLE CODING

import speech_recognition as sr

import pyttsx3

import datetime

import wikipedia

import webbrowser

import os

import time

import subprocess

from ecapture import ecapture as ec

import wolframalpha

import json

import requests

import smtplib

import config

print('Loading your AI personal assistant - dudu')

EMAIL_DIC = {
    'my self': 'yugeshbca001@gmail.com',
    'my official email': 'yugeshbca001@gmail.com',
    'my second email': 'yugeshbca001@gmail.com',
    'my official mail': 'yugeshbca001@gmail.com',
    'my second mail': 'yugeshbca001@gmail.com'
}

# initialise the text-to-speech engine and pick the first installed voice
engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[0].id)


def speak(text):
    engine.say(text)
    engine.runAndWait()


def wishMe():
    # greet the user according to the current hour
    hour = datetime.datetime.now().hour
    if hour >= 0 and hour < 12:
        speak("Hello, Good Morning")
        print("Hello, Good Morning")
    elif hour >= 12 and hour < 18:
        speak("Hello, Good Afternoon")
        print("Hello, Good Afternoon")
    else:
        speak("Hello, Good Evening")
        print("Hello, Good Evening")


def takeCommand():
    # listen on the microphone and return the recognised text
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Listening...")
        audio = r.listen(source)

        try:
            statement = r.recognize_google(audio, language='en-in')
            print(f"user said: {statement}\n")
        except Exception as e:
            speak("Pardon me, please say that again")
            return "None"
        return statement


speak("Loading your AI personal assistant dudu")
wishMe()

if __name__ == '__main__':
    while True:
        speak("Tell me how can I help you now?")
        statement = takeCommand().lower()
        if statement == "none":
            # nothing was recognised, listen again
            continue

        if "good bye" in statement or "ok bye" in statement or "stop" in statement:
            speak('your personal assistant dudu is shutting down, Good bye')
            print('your personal assistant dudu is shutting down, Good bye')
            break

        if 'wikipedia' in statement:
            speak('Searching Wikipedia...')
            statement = statement.replace("wikipedia", "")
            results = wikipedia.summary(statement, sentences=3)
            speak("According to Wikipedia")
            print(results)
            speak(results)

        elif 'open youtube' in statement:
            webbrowser.open_new_tab("https://www.youtube.com")
            speak("youtube is open now")
            time.sleep(5)

        elif 'open google' in statement:
            webbrowser.open_new_tab("https://www.google.com")
            speak("Google chrome is open now")
            time.sleep(5)

        elif "weather" in statement:
            api_key = "8ef61edcf1c576d65d836254e11ea420"
            base_url = "https://api.openweathermap.org/data/2.5/weather?"
            speak("what's the city name")
            city_name = takeCommand()
            complete_url = base_url + "appid=" + api_key + "&q=" + city_name
            response = requests.get(complete_url)
            x = response.json()
            if x["cod"] != "404":
                y = x["main"]
                current_temperature = y["temp"]
                current_humidiy = y["humidity"]
                z = x["weather"]
                weather_description = z[0]["description"]
                speak(" Temperature in kelvin unit is " +
                      str(current_temperature) +
                      "\n humidity in percentage is " +
                      str(current_humidiy) +
                      "\n description " +
                      str(weather_description))
                print(" Temperature in kelvin unit = " +
                      str(current_temperature) +
                      "\n humidity (in percentage) = " +
                      str(current_humidiy) +
                      "\n description = " +
                      str(weather_description))
            else:
                speak(" City Not Found ")

        elif 'time' in statement:
            strTime = datetime.datetime.now().strftime("%H:%M:%S")
            speak(f"the time is {strTime}")

        elif 'who are you' in statement or 'what can you do' in statement:
            speak('I am G-one version 1 point O, your personal assistant. I am programmed to do minor tasks like '
                  'opening youtube, google chrome, gmail and stackoverflow, predict time, take a photo, '
                  'search wikipedia, predict weather in different cities, get top headline news from '
                  'times of india and you can ask me computational or geographical questions too!')

        elif "who made you" in statement or "who created you" in statement or "who discovered you" in statement:
            speak("I was built by Mirthula")
            print("I was built by Mirthula")

        elif "open stackoverflow" in statement:
            webbrowser.open_new_tab("https://stackoverflow.com/login")
            speak("Here is stackoverflow")

        elif 'news' in statement:
            news = webbrowser.open_new_tab("https://timesofindia.indiatimes.com/home/headlines")
            speak('Here are some headlines from the Times of India, Happy reading')
            time.sleep(6)

        elif "camera" in statement or "take a photo" in statement:
            ec.capture(0, "robo camera", "img.jpg")

        elif 'search' in statement:
            statement = statement.replace("search", "")
            webbrowser.open_new_tab(statement)
            time.sleep(5)

        elif 'ask' in statement:
            speak('I can answer to computational and geographical questions, what question do you want to ask now')
            question = takeCommand()
            app_id = "R2K75H-7ELALHR35X"
            client = wolframalpha.Client(app_id)
            res = client.query(question)
            answer = next(res.results).text
            speak(answer)
            print(answer)

        elif "log off" in statement or "sign out" in statement:
            speak("Ok, your pc will log off in 10 sec, make sure you exit from all applications")
            subprocess.call(["shutdown", "/l"])
            time.sleep(3)
D. SAMPLE INPUT

Date.py

import datetime

def date():
    """
    Just return date as string
    :return: date if success, False if fail
    """
    try:
        date = datetime.datetime.now().strftime("%b %d %Y")
    except Exception as e:
        print(e)
        date = False
    return date


def time():
    """
    Just return time as string
    :return: time if success, False if fail
    """
    try:
        time = datetime.datetime.now().strftime("%H:%M:%S")
    except Exception as e:
        print(e)
        time = False
    return time

google_search.py

from selenium import webdriver


from selenium.webdriver.common.keys import Keys
import re, pyttsx3

def speak(text):
    engine = pyttsx3.init('sapi5')
    voices = engine.getProperty('voices')
    engine.setProperty('voice', voices[0].id)   # select the first installed voice
    engine.setProperty('rate', 180)             # speaking rate in words per minute
    engine.say(text)
    engine.runAndWait()


def google_search(command):
    reg_ex = re.search('search google for (.*)', command)
    search_for = command.split("for", 1)[1]
    url = 'https://www.google.com/'
    if reg_ex:
        subgoogle = reg_ex.group(1)
        url = url + 'r/' + subgoogle
        speak("Okay sir!")
        speak(f"Searching for {subgoogle}")
    driver = webdriver.Chrome(
        executable_path='driver/chromedriver.exe')
    driver.get('https://www.google.com')
    search = driver.find_element_by_name('q')
    search.send_keys(str(search_for))
    search.send_keys(Keys.RETURN)

Wikipedia.py

import wikipedia
import re

def tell_me_about(topic):
    try:
        # info = str(ny.content[:500].encode('utf-8'))
        # res = re.sub('[^a-zA-Z.\d\s]', '', info)[1:]
        res = wikipedia.summary(topic, sentences=3)
        return res
    except Exception as e:
        print(e)
        return False
Youtube.py

import webbrowser, urllib, re


import urllib.parse
import urllib.request

domain = input("Enter the song name: ")


song = urllib.parse.urlencode({"search_query" : domain})
print("Song" + song)

# fetch the ?v=query_string


result = urllib.request.urlopen("http://www.youtube.com/results?" + song)
print(result)

# make the url of the first result song


search_results = re.findall(r'href=\"\/watch\?v=(.{11})',
                            result.read().decode())
print(search_results)

# make the final url of the song: select the very first result from the youtube results
url = "http://www.youtube.com/watch?v=" + search_results[0]

# play the song using webBrowser module which opens the browser
# webbrowser.open(url, new = 1)
webbrowser.open_new(url)

Webside.py

import webbrowser

def website_opener(domain):
    try:
        url = 'https://www.' + domain
        webbrowser.open(url)
        return True
    except Exception as e:
        print(e)
        return False
E. SAMPLE OUTPUT

Display.py
Future Enhancement

A voice assistant is a digital assistant that utilizes voice recognition, language


processing algorithms, and voice synthesis. Its purpose is to listen to specific voice
commands and respond by providing relevant information or performing requested functions.

Future:

Command Execution:
Voice assistants can perform a wide range of tasks, including:

 Opening apps
 Sending messages
 Making calls
 Playing music
 Checking the weather
 Controlling smart devices
 Setting timers and alarms
 Providing general information
