
ADVANCED INTELLIGENT PERSONAL ASSISTANT

A
MAJOR PROJECT REPORT

Submitted in partial fulfillment of the requirements


for the award of the degree of

MASTER OF COMPUTER APPLICATIONS
Submitted to

RAJIV GANDHI PROUDYOGIKI VISHWAVIDYALAYA,


BHOPAL (M.P.)

Submitted by
Bhagwat Singh Lowanshi 0115CA201010
Anjali Singh 0115CA201002

Under the Guidance of

PROF. Dev Nagar

DEPARTMENT OF MASTER OF COMPUTER APPLICATIONS


NRI INSTITUTE OF INFORMATION SCIENCE & TECHNOLOGY BHOPAL
June – 2022
NRI INSTITUTE OF INFORMATION SCIENCE &
TECHNOLOGY BHOPAL
DEPARTMENT OF MASTER OF COMPUTER APPLICATIONS

DECLARATION

We hereby declare that the project entitled “Advanced Intelligent Personal Assistant” is
our own work, conducted under the supervision of Prof. Dev Nagar, Department of Master of
Computer Applications, NRI Institute of Information Science & Technology, Bhopal.

We further declare that, to the best of our knowledge, this report does not contain any part of
work that has been submitted for the award of any degree, either in this institute or in any other
institute, without proper citation.

Bhagwat Singh 0115CA201010


Anjali Singh 0115CA201002

NRI INSTITUTE OF INFORMATION SCIENCE &


TECHNOLOGY BHOPAL
DEPARTMENT OF MASTER OF COMPUTER APPLICATIONS

CERTIFICATE

This is to certify that the work embodied in this project entitled “Advanced
Intelligent Personal Assistant”, being submitted by Mr. Bhagwat Singh
Lowanshi (0115CA201010) and Ms. Anjali Singh (0115CA201002) in partial
fulfillment of the requirements for the award of the degree of Master of
Computer Applications of Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal
(M.P.), is a record of a bonafide piece of work carried out by them under our
supervision and guidance in the Department of Master of Computer Applications,
NRI Institute of Information Science and Technology, Bhopal (M.P.).

Guided By:
Prof. Dev Nagar
Assistant Professor
Department of Master of Computer Applications
NIIST, Bhopal

Approved By:
Prof. Shatendra Kumar Dubey
Department of Master of Computer Applications
NIIST, Bhopal

Director / Principal
NIIST, Bhopal
ACKNOWLEDGEMENT
The completion of this project work was made possible by the continued and dedicated
efforts and guidance of a large number of faculty and staff members of the Institute. We
acknowledge our gratitude to all of them. The acknowledgement would, however, be
incomplete without the following specific mention: we express our profound sense of
gratitude to our project guide, Mr. Dev Prasad, and project coordinator, Mr. Dev
Prasad, for their continuous encouragement and guidance during the project period.

SUBMITTED BY:
Anjali Singh (0115CA201002)
Bhagwat Singh Lowanshi (0115CA201010)

ABSTRACT
In this modern era, day-to-day life has become smarter and interlinked with technology.
We already know voice assistants such as Google Assistant and Siri. Our voice
assistant system can act as a basic medical prescriber, daily schedule reminder,
note writer, calculator and search tool. The project works on voice input,
gives output through voice and displays the corresponding text on the screen. The main
aim of our voice assistant is to make people's work easier and to give instant, computed
results. The assistant takes voice input through a microphone (Bluetooth or
wired), converts the voice into a computer-understandable form, and gives the
solutions and answers the user asked for. The assistant connects to the World Wide Web
to provide the results the user has requested. Natural Language Processing algorithms
help computers engage in communication using natural human language in many forms.

TABLE OF CONTENTS

Declaration
Acknowledgement
Certificate
Abstract
Chapter – 1
Introduction
1.1 Introduction
1.2 Objective of system
1.3 Scope of system
1.4 Proposed system
1.5 Existing system
1.6 Module description

Chapter - 2
2.1 User requirement
2.2 System requirement
2.3 Technologies used
2.4 Data Flow diagram
2.5 Activity diagram
2.6 Architecture diagram
2.7 E-R diagram
2.8 Phases for implementation

Chapter – 3
System Testing
3.1 Coding
3.2 Test Procedure
3.2.1 Unit testing
3.2.2 Integration testing
3.2.3 System testing
3.2.4 Regression testing
3.3 Test cases
3.3.1 Generic test cases for notepad
3.3.2 Test case for new file
3.3.3 Test case for undo file option
3.3.4 Test case for cut option
3.3.5 Test case for paste option
3.3.6 Test case for save option
3.3.7 Test case for exit option

Chapter – 4
Results and Applications
4.1 Applications
4.1.1 Applications of advance notepad
4.2 Results
4.2.1 Snapshots

Chapter – 5
5.1 Conclusion
5.2 Future scope

References
CHAPTER 1
INTRODUCTION
1.1 Introduction

Today, the development of artificial intelligence (AI) systems that can support
natural human-machine interaction (through voice, communication, gestures,
facial expressions, etc.) is gaining popularity. One of the most studied and
popular directions of interaction is based on the machine's understanding of natural
human language. It is no longer the human who learns to communicate with the
machine; the machine learns to communicate with the human, exploring his
actions, habits and behaviour and trying to become his personalized assistant.
Virtual assistants are software programs that help ease your day-to-day tasks,
such as showing weather reports, creating reminders and making shopping lists.
They can take commands via text (online chatbots) or by voice. Voice-based
intelligent assistants need an invoking word, or wake word, to activate the
listener, followed by the command. There are many virtual assistants, such as
Apple's Siri, Amazon's Alexa and Microsoft's Cortana. This system is designed to
be used efficiently on desktops. Personal assistant software improves user
productivity by managing the user's routine tasks and by providing information
from online sources. This project was started on the premise that there is a
sufficient amount of openly available data and information on the web that can be
utilized to build a virtual assistant capable of making intelligent decisions for
routine user activities.

Keywords: Intelligent Personal Assistant, Python, AI, Digital Assistant, Virtual Assistant.
1.2 Objective of the System:
Voice control, also called voice assistance, is a user interface that allows hands-
free operation of a digital device. Voice control does not require an internet
connection to work: communication is one-way (person to device) and all
processing is done locally. Voice control is an assistive technology that is built
into most mobile operating systems. Typically, the feature is not turned on by
default; the user has to look for it in the settings and turn it on. The technology
is designed to work with Siri, Alexa, Google Assistant and Cortana and can
perform many of the same tasks. Popular commands for voice control on a mobile
phone include: swipe right, scroll down, turn up volume, mute sound, open
_______ (app), go back, take screenshot, rotate to landscape and make emergency call.
Voice control uses natural language processing and speech synthesis to provide aid
to users. Some operating systems allow the user to customize commands.
1.3 Scope of the System:
Each developer of an intelligent assistant applies specific methods and
approaches for development, which in turn affect the final product.
One assistant can synthesize speech with higher quality, another can perform tasks more
accurately and without additional explanations and corrections, and
others can perform a narrower range of tasks but do so most accurately and as the user
wants. Obviously, there is no universal assistant that performs all tasks
equally well. The set of characteristics an assistant has depends entirely on
which area the developer has paid more attention to. Since all such systems are based on
machine learning methods and are trained on huge amounts of data
collected from various sources, an important role is
played by the source of this data, be it search systems, various information sources
or social networks. The amount of information from different sources determines
the nature of the resulting assistant. Despite the different
approaches to learning and the different algorithms and techniques, the principles of
building such systems remain approximately the same. Figure 1 shows the
technologies used to create intelligent systems that interact with a human
through natural language. The main technologies are voice activation, automatic
speech recognition, Text-to-Speech, voice biometrics, dialogue management, natural
language understanding and named entity recognition.
1.4 Proposed System:
The proposed system is a significant improvement over the existing system. Many new
functions have been added, and they are very useful for the user.
The system saves the user's valuable time and gives the user more control to
change the user interface of the system. It also provides strong
online support: for example, we can perform automation in cars, homes, etc., and we can
send messages and emails to friends. Working with this system is very easy because
of the added functionality.

Chapter – 2
2.1 System Analysis
System analysis is a process of collecting and interpreting facts, identifying
problems, and decomposing a system into its components. System analysis
is conducted for the purpose of studying a system or its parts in order to identify its
objectives. It is a problem-solving technique that improves the system and ensures
that all components of the system work efficiently to accomplish their purpose.
The objective of the system analysis activity is to develop a structured system
specification for the proposed system. The structured system specification should
describe what the proposed system would do, independent of the technology
that will be used to implement these requirements. The essential model
may itself consist of multiple models, modeling different aspects of the system. The
data flow diagrams may model the data and their relationships, and the state
transition diagram may model the time-dependent behavior of the system. The
essential model thus consists of the following:
• Context diagram
• Data flow diagrams
• Process specification for elementary bubbles
• Data dictionary for the flows and stores on the DFDs

Chapter 3
SOFTWARE AND HARDWARE REQUIREMENT SPECIFICATION
3.1 Software Requirement
Python is a general-purpose, interpreted, interactive, object-oriented, high-level
programming language. It was created by Guido van Rossum during 1985-1990. Like Perl,
Python source code is available under the GNU General Public License (GPL).

Why learn Python? Python is designed to be highly readable. It uses English keywords
frequently where other languages use punctuation, and it has fewer syntactical
constructions than other languages. Some key advantages of Python:

Python is interpreted - Python is processed at runtime by the interpreter. You do not
need to compile your program before executing it. This is similar to Perl and PHP.
Python is interactive - You can sit at a Python prompt and interact with the interpreter
directly to write your programs.
Python is object-oriented - Python supports the object-oriented style of programming,
which encapsulates code within objects.
Python is a beginner's language - Python is a great language for beginner-level
programmers and supports the development of a wide range of applications, from simple
text processing to web browsers to games.

How do you build your own AI personal assistant using Python? An AI personal
assistant is a piece of software that understands verbal or written commands and
completes tasks assigned by the user. It is an example of weak AI: it can
only execute and perform the tasks designed by the user. With the Python programming
language, a script can be used to build your own personal AI assistant, like Apple's
Siri, Microsoft's Cortana or Google Assistant, to perform tasks designed by the users.
Now, let's write a script for our personal voice assistant using Python.

Skills:

The implemented voice assistant can perform the following tasks: it can open YouTube,
Gmail, Google Chrome and Stack Overflow; tell the current time; take a photo; search
Wikipedia to extract the required data; predict the weather in different cities; get top
headline news from the Times of India; and answer computational and geographical
questions too. The queries handled by the voice assistant can be adapted to the
user's needs.

Packages required:

To build a personal voice assistant it is necessary to install the following packages in
your system using the pip command.

1) SpeechRecognition - Speech recognition is an important feature used in home
automation and in artificial intelligence devices. The main function of this library is
to try to understand whatever the human speaks and convert the speech to text.

2) pyttsx3 - pyttsx3 is a text-to-speech conversion library in Python. This package
supports text-to-speech engines on Mac OS X, Windows and Linux.

3) wikipedia - Wikipedia is a multilingual online encyclopedia used by many people,
from freshmen to professors, who want to gain information on a particular topic. This
Python package extracts the required data from Wikipedia.

4) ecapture - This module is used to capture images from your camera.

5) datetime - This is a built-in Python module for working with dates and times.

6) os - This standard-library module provides functions for interacting with the
operating system.

7) time - The time module helps us to display and handle time.

8) webbrowser - This built-in Python module opens pages in the web browser.

9) subprocess - This standard-library module is used to run system commands, for
example to log off or restart your PC.

10) json - The json module is used for storing and exchanging data.

11) requests - The requests module is used to send all types of HTTP requests. It
accepts a URL as a parameter and gives access to the given URL.

12) wolframalpha - Wolfram Alpha is an API which can compute expert-level answers
using Wolfram's algorithms, knowledge base and AI technology. It is made possible by
the Wolfram Language.
13) Python version - We used the Python language to build our project, working with
Python 3.7.4.
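All of the third-party packages above can be installed with pip. A typical install command (assuming pip is wired to your Python 3 interpreter; the built-in modules datetime, os, time, webbrowser, subprocess and json need no installation) would be:

```shell
# Third-party packages used by the assistant (names as published on PyPI).
pip install SpeechRecognition pyttsx3 wikipedia ecapture wolframalpha requests

# SpeechRecognition needs PyAudio for microphone input.
pip install PyAudio
```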

3.2 Specification Requirement

External Interfaces - This interface will be the actual interface through which the user
will communicate with the application and perform the desired tasks.

Implementation:

Import the following libraries:

import speech_recognition as sr
import pyttsx3
import datetime
import wikipedia
import webbrowser
import os
import time
import subprocess
from ecapture import ecapture as ec
import wolframalpha
import json
import requests

Setting up the speech engine:

The pyttsx3 engine is initialised and stored in a variable named engine. sapi5 is the
Microsoft text-to-speech engine used for speech output. The voice id can be set to
either 0 or 1: 0 indicates a male voice and 1 indicates a female voice.

print('Loading your AI personal assistant - G One')

engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)

Now define a function speak which converts text to speech. The speak function takes the
text as its argument and passes it to the engine.

runAndWait: this method blocks while processing all currently queued commands. It
invokes callbacks for engine notifications appropriately and returns when all commands
queued before this call are emptied from the queue.

def speak(text):
    engine.say(text)
    engine.runAndWait()

Initiate a function to greet the user:

Define a function wishMe for the AI assistant to greet the user. The now().hour
attribute extracts the hour from the current time. If the hour is at least 0 and less
than 12, the voice assistant wishes you "Good Morning". If the hour is at least 12 and
less than 18, it wishes you "Good Afternoon". Otherwise it voices the message
"Good Evening".

def wishMe():
    hour = datetime.datetime.now().hour
    if hour >= 0 and hour < 12:
        speak("Hello, Good Morning")
        print("Hello, Good Morning")
    elif hour >= 12 and hour < 18:
        speak("Hello, Good Afternoon")
        print("Hello, Good Afternoon")
    else:
        speak("Hello, Good Evening")
        print("Hello, Good Evening")

Setting up the command function for your AI assistant:

Define a function takeCommand for the AI assistant to understand and accept human
language. The microphone captures the human speech and the recognizer converts the
speech to text. Exception handling is used to handle run-time errors, and the
recognize_google function uses Google's speech recognition service to recognize the
audio.

def takeCommand():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Listening...")
        audio = r.listen(source)

    try:
        statement = r.recognize_google(audio, language='en-in')
        print(f"user said: {statement}\n")
    except Exception:
        speak("Sorry, please say that again")
        return "None"
    return statement

speak("Loading your AI personal assistant G-One")
wishMe()

The Main function:

The main function starts from here; the commands given by the human are stored in the
variable statement. (Note that takeCommand() returns the string "None" on a failed
recognition, which becomes "none" after lower().)

if __name__ == '__main__':
    while True:
        speak("Tell me how can I help you now?")
        statement = takeCommand().lower()
        if statement == "none":
            continue

If any of the following trigger words is present in the statement given by the user, the
virtual assistant speaks the farewell message below and shuts down.

if "good bye" in statement or "ok bye" in statement or "stop" in statement:
    speak('your personal assistant G-one is shutting down, Good bye')
    print('your personal assistant G-one is shutting down, Good bye')
    break

Skill 1 - Fetching data from Wikipedia:

The following commands help to extract information from Wikipedia. The
wikipedia.summary() function takes two arguments: the statement given by the user and
the number of sentences to extract from Wikipedia; the result is stored in the variable
results.

if 'wikipedia' in statement:
    speak('Searching Wikipedia...')
    statement = statement.replace("wikipedia", "")
    results = wikipedia.summary(statement, sentences=3)
    speak("According to Wikipedia")
    print(results)
    speak(results)

Skill 2 - Accessing the web browser: Google Chrome, Gmail and YouTube:

The webbrowser module opens pages in the browser. The open_new_tab function accepts as
a parameter the URL that needs to be accessed. The Python time.sleep function is used
to add a delay in the execution of a program; we can use it to halt the execution of
the program for a given time in seconds.

elif 'open youtube' in statement:
    webbrowser.open_new_tab("https://www.youtube.com")
    speak("youtube is open now")
    time.sleep(5)

elif 'open google' in statement:
    webbrowser.open_new_tab("https://www.google.com")
    speak("Google chrome is open now")
    time.sleep(5)

elif 'open gmail' in statement:
    webbrowser.open_new_tab("https://gmail.com")
    speak("Google Mail open now")
    time.sleep(5)

Skill 3 - Telling the time:

The current time is obtained from the datetime.now() function, formatted to show the
hour, minute and second, and stored in a variable named strTime.

elif 'time' in statement:
    strTime = datetime.datetime.now().strftime("%H:%M:%S")
    speak(f"the time is {strTime}")

Skill 4 - Fetching the latest news:

If the user wants to know the latest news, the voice assistant is programmed to fetch
the top headline news from the Times of India by using the webbrowser function.

elif 'news' in statement:
    news = webbrowser.open_new_tab("https://timesofindia.indiatimes.com/home/headlines")
    speak('Here are some headlines from the Times of India, Happy reading')
    time.sleep(6)

elif 'covid' in statement:
    covid = webbrowser.open_new_tab("https://www.covid19india.org/")
    speak('Here are the live updates of covid from the covid nineteen website, Stay home stay safe')
    time.sleep(6)

Skill 5 - Capturing a photo:

The ec.capture() function is used to capture images from your camera. It accepts three
parameters.

Camera index - the first connected webcam is indicated by index 0, the next webcam by
index 1, and so on.
Window name - this can be a variable or a string. If you don't wish to see the window,
pass False.
Save name - a name can be given to the image; if you don't want to save the image,
pass False.

elif "camera" in statement or "take a photo" in statement:
    ec.capture(0, "robo camera", "img.jpg")

Skill 6 - Searching data from the web:

You can search for required data in the web browser by passing the user's statement
(command) to the open_new_tab() function.

User: Hey G-One, please search images of butterfly.
The voice assistant opens a Google window and fetches butterfly images from the web.

elif 'search' in statement:
    statement = statement.replace("search", "")
    webbrowser.open_new_tab(statement)
    time.sleep(5)

Skill 7 - Setting your AI assistant to answer geographical and computational questions:

Here we use a third-party API, the Wolfram Alpha API, to answer computational and
geographical questions. It is made possible by the Wolfram Language. The client
variable is an instance of the wolframalpha Client class, and the res variable stores
the response given by Wolfram Alpha.

elif 'ask' in statement:
    speak('I can answer computational and geographical questions. What question do you want to ask now?')
    question = takeCommand()
    app_id = "R2K75H-7ELALHR35X"
    client = wolframalpha.Client(app_id)
    res = client.query(question)
    answer = next(res.results).text
    speak(answer)
    print(answer)

To access the Wolfram Alpha API, a unique App ID is required, which can be generated
as follows:

1. Go to the official Wolfram Alpha page and create an account if you do not already
have one.

2. Sign in using your Wolfram ID.

3. You will now see the homepage of the website. Head to the account section in the top
right corner, where you see your email. In the drop-down menu, select the My Apps (API)
option.

4. In the window that appears, click the Get APP_ID button.

5. In the dialog box that follows, give a suitable name and description and click the
App ID button. A unique App ID will be generated; using this App ID you can access the
Wolfram Alpha API.

Human: Hey G-One, what is the capital of California?
G-One: Sacramento, United States of America

Skill 8 - Extra features:

It would be interesting to program your AI assistant to answer questions such as what
it can do and who created it, wouldn't it?

elif 'who are you' in statement or 'what can you do' in statement:
    speak('I am G-One version 1 point O, your personal assistant. I am programmed to do minor tasks like '
          'opening youtube, google chrome, gmail and other applications, predict time, take a photo, '
          'search wikipedia, predict weather in different cities, get top headline news from Times of India, '
          'and you can ask me computational or geographical questions too!')

elif 'where is your home' in statement or 'where are you living' in statement:
    speak('Currently I am living in your PC, but I will definitely find a rental house in the cloud. '
          'A7 Group can rent a house for me in different cities')

elif "who made you" in statement or "who created you" in statement or "who discovered you" in statement:
    speak("I was built by A7 Group")
    print("I was built by A7 Group")

Skill 9 - Forecasting the weather:

To program your AI assistant to report the weather, we need to generate an API key from
OpenWeatherMap. OpenWeatherMap is an online service that provides weather data. By
generating an API ID on the official website, you can use the APP_ID to make your voice
assistant report the weather of any place whenever required. The modules needed for this
weather detection are json and requests.

The city_name variable takes the command given by the human using the takeCommand()
function.

The get method of the requests module returns a response object, and the json method of
the response object converts the JSON-format data into Python objects. The variable x
contains a nested dictionary; the value of 'cod' is checked against 404 to determine
whether the city was found. Values such as temperature and humidity are stored under
the 'main' key, in the variable y.

elif "weather" in statement:
    api_key = "8ef61edcf1c576d65d836254e11ea420"
    base_url = "https://api.openweathermap.org/data/2.5/weather?"
    speak("whats the city name")
    city_name = takeCommand()
    complete_url = base_url + "appid=" + api_key + "&q=" + city_name
    response = requests.get(complete_url)
    x = response.json()
    if x["cod"] != "404":
        y = x["main"]
        current_temperature = y["temp"]
        current_humidity = y["humidity"]
        z = x["weather"]
        weather_description = z[0]["description"]
        speak("Temperature in kelvin unit is " +
              str(current_temperature) +
              "\n humidity in percentage is " +
              str(current_humidity) +
              "\n description " +
              str(weather_description))
        print("Temperature in kelvin unit = " +
              str(current_temperature) +
              "\n humidity (in percentage) = " +
              str(current_humidity) +
              "\n description = " +
              str(weather_description))
    else:
        speak("City Not Found")

Human: Hey G-One, I want to get the weather data.
G-One: What is the city name?
Human: Himachal Pradesh
G-One: Temperature in kelvin unit is 301.09, humidity in percentage is 52 and
description is light rain.

Skill 10 - Logging off your PC:

elif "log off" in statement or "sign out" in statement:
    speak("Ok, your pc will log off in 10 seconds; make sure you exit from all applications")
    subprocess.call(["shutdown", "/l"])
    time.sleep(3)

The subprocess.call() function here runs the Windows system command to log off the
current user. This lets your AI assistant log off your PC automatically.

3.3 Software Product Features

1. Description - The system will maintain the login information of its users to enter
the software.

2. Validation checks:

- The administrator needs to log in with a unique id and password.

- Contact numbers should have a maximum of 10 digits.

- All the details must be filled in.

- Email addresses should be in the proper format.
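These checks can be sketched in Python. The helper names below are illustrative assumptions (the report does not give an implementation); the rules follow the list above:

```python
import re

# Illustrative validators for the checks listed above
# (hypothetical helpers, not part of the assistant's code).

def valid_contact(number: str) -> bool:
    # Contact number: digits only, a maximum of 10 of them.
    return number.isdigit() and len(number) <= 10

def valid_email(address: str) -> bool:
    # A simple "proper format" check: name@domain.tld
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

def all_fields_filled(fields: dict) -> bool:
    # Every detail must be filled in (no blank values).
    return all(str(v).strip() for v in fields.values())
```

For example, valid_contact("9876543210") passes, while an 11-digit number fails.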

3. Sequencing information - Login information should be filled in before the user is
allowed in.

4. Error handling - If the user does not fill in valid information, the system displays
an error message and requests the user to enter valid information.

5. Performance required.

6. Security - The system should be protected from unauthorized access: a valid username
and password are required so that no one else can access it.

7. Maintainability - The system should be designed in a maintainable manner, so it can
be easily modified.

3.4 SYSTEM HARDWARE REQUIREMENTS

• Operating System: Windows 7/10, 64-bit or above

• Processor: Dual Core or better

• RAM: 1 GB (minimum) to 4 GB

• Hard Disk: 1 TB

• Internet: feasible internet connection

Chapter 4 - System Design

System design involves the transformation of the user implementation model into a
software design. The design specification of the proposed system consists of the
following:

 Database schema
 Sequence diagram
 Flow chart

Logical Database

Data Design
Data Model: A database model is a type of data model that determines the logical
structure of a database and fundamentally determines the manner in which data can be
stored, organized and manipulated.
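As a concrete sketch of such a logical structure, the login details mentioned in the software product features could be kept in a relational table. The schema below is an illustrative assumption (the report does not specify one), using Python's built-in sqlite3:

```python
import sqlite3

# Minimal sketch of a logical database for user login information.
# Table and column names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        user_id   INTEGER PRIMARY KEY,
        username  TEXT NOT NULL UNIQUE,
        password  TEXT NOT NULL,   -- store a hash in practice, never plaintext
        email     TEXT,
        contact   TEXT CHECK (length(contact) <= 10)
    )
""")
conn.execute(
    "INSERT INTO users (username, password, email, contact) VALUES (?, ?, ?, ?)",
    ("admin", "hashed-secret", "admin@example.com", "9876543210"),
)
row = conn.execute("SELECT username FROM users").fetchone()
print(row[0])  # prints: admin
```

The CHECK constraint mirrors the 10-digit contact-number rule from the validation checks.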

Scope and Feasibility
This activity is also known as the feasibility study. It begins with a request from the
user for a new system. It involves the following:
 Identify the responsible user for the system
 Clarify the user request
 Identify deficiencies in the current system
 Establish goals and objectives for the new system
 Determine the feasibility of the new system
 Prepare a project charter that will be used to guide the remainder of the project

Implementation

This activity includes programming, testing and integration of modules into a
progressively more complete system. Implementation is the process of collecting all the
required parts and assembling them into the final product.

System Testing

Software testing is the process of verifying the correctness of software by considering
all of its attributes (reliability, scalability, portability, reusability, usability)
and evaluating the execution of software components to find bugs, errors or defects.

Software testing provides an independent and objective view of the software and gives
assurance of the software's fitness. It involves testing all components under the
required services to confirm whether they satisfy the specified requirements. The
process also provides the client with information about the quality of the software.

Testing is mandatory because it would be a dangerous situation if the software failed
at any time due to lack of testing. So, without testing, software cannot be deployed to
the end user.

Types of Software Testing

Various types of testing are available to test an application or software. The types
used for this project are unit testing, integration testing, system testing and
regression testing.

Test Generation

This activity generates a set of test data which can be used to test the new system
before accepting it. In the test generation phase, all the parts that are to be tested
are brought together to ensure that the system does not produce any errors. If there
are errors, we remove them, and the system then goes forward for acceptance.
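The unit-testing phase can be illustrated with Python's built-in unittest module. As a sketch, assume the greeting logic of wishMe() is factored into a helper that returns the greeting instead of speaking it (greeting_for is a hypothetical refactor, not code from the report):

```python
import unittest

def greeting_for(hour: int) -> str:
    # Hypothetical, testable refactor of the branching inside wishMe():
    # no speech engine needed, just the chosen greeting.
    if 0 <= hour < 12:
        return "Good Morning"
    elif 12 <= hour < 18:
        return "Good Afternoon"
    else:
        return "Good Evening"

class GreetingTest(unittest.TestCase):
    def test_morning(self):
        self.assertEqual(greeting_for(9), "Good Morning")

    def test_afternoon(self):
        self.assertEqual(greeting_for(13), "Good Afternoon")

    def test_evening(self):
        self.assertEqual(greeting_for(22), "Good Evening")
```

Run with `python -m unittest` to execute the three cases.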

CONCLUSION

It has been a great pleasure for us to work on this exciting and challenging project.
The project proved valuable, as it provided practical knowledge not only of programming
in Python but also, to some extent, of Windows applications, and it gave us knowledge
of the latest technology, AI. As our world becomes more digital, virtual assistants
using advanced artificial intelligence are forming the bridge between the digital and
human worlds, offering consumers and businesses support with a wide range of tasks.

What is an artificial-intelligence-powered Intelligent Personal Assistant?

An Intelligent Personal Assistant uses advanced artificial intelligence (AI), RPA,
natural language processing and machine learning to extract information and complex
data from conversations in order to understand and process them accordingly.
By combining information from the past, the algorithms are able to create data models
that recognize behavioral patterns and adapt them based on any additional data. By
constantly adding new data about the history, preferences and other user information,
the virtual assistant can answer complex questions, make recommendations and
predictions, and even start a conversation.

How do RPA, Cognitive Automation and Intelligent Personal Assistants relate?

Many organizations have integrated Robotic Process Automation (RPA) to take over
repetitive and mundane tasks. RPA is intelligent software that is very strong at
performing repetitive, rule-based tasks with highly structured data. However, this type
of rule-based robot can become overwhelmed in other situations. Thanks to more advanced
technology, new possibilities are arising: if data is more unstructured or standardized
rules can't be applied, Cognitive Automation can offer a solution. Cognitive Automation
refers to the combination of RPA and data-science approaches and is especially strong
when it comes to textual content. A virtual assistant takes this to the next level, as
it has a speech- and text-based user interface. It has the ability to extract
information and complex data from conversations and understand them accordingly. This
offers completely new application scenarios where traditional RPA and Cognitive
Automation cannot help.



Virtual assistants represent the highest level of robotic automation. A virtual
assistant uses data from multiple sources and places it in context, learning from each
interaction. Using advanced language processing, the virtual assistant can process
everything that is said or typed and use it to formulate a correct answer. More
advanced virtual assistants can process multiple tasks and complex questions using AI
and machine learning. They gain insight into one's preferences based on previous
choices and data. This way, the interaction with the virtual assistant becomes a
personalized experience that meets an individual's needs.

What does an Intelligent Personal Assistant do?
A virtual assistant can do a wide range of things. Virtual assistants used by
consumers, like Alexa, Google Home or Siri, are able to answer general questions and
give recommendations based on the user's profile and previous behavior. Turning on the
lights, making shopping lists, and turning off the heating while on holiday are also
options that a digital assistant offers the consumer.
Organizations often use digital assistants in customer service to handle incoming
communication, or internally, for example, to onboard new employees. A lot of IT
activities are also supported by virtual assistants: they can be used to automate
frequent tasks like system updates, knowledge management, and even transaction orders.

The advantages of an Intelligent Personal Assistant

Within an organization, a virtual assistant can improve efficiency and offer support to
both employees and customers. It allows the organization to offer more services, as
digital assistants can take over the more routine tasks and employees can spend more
time on other work. This not only allows them to offer more and better services but
also allows the organization to save money.
This is just the beginning: as AI and machine learning continue to innovate, virtual
assistants will continue to become smarter and offer new opportunities.

References

[1] R. Belvin, R. Burns, and C. Hein, “Development of the HRL route navigation dialogue
system,” in Proceedings of ACL-HLT, 2001.
[2] V. Zue, S. Seneff, J. R. Glass, J. Polifroni, C. Pao, T. J. Hazen, and
L. Hetherington, “JUPITER: A Telephone-Based Conversational Interface for Weather
Information,” IEEE Transactions on Speech and Audio Processing, vol. 8, no. 1,
pp. 85–96, 2000.
[3] M. Kolss, D. Bernreuther, M. Paulik, S. Stücker, S. Vogel, and A. Waibel, “Open
Domain Speech Recognition & Translation: Lectures and Speeches,” in Proceedings of
ICASSP, 2006.
[4] D. R. S. Caon, T. Simonnet, P. Sendorek, J. Boudy, and G. Chollet, “vAssist: The
Virtual Interactive Assistant for Daily Home-Care,” in Proceedings of pHealth, 2011.
[5] D. Crevier, AI: The Tumultuous Search for Artificial Intelligence. New York, NY:
Basic Books, 1993, ISBN 0-465-02997-3.
[6] E. Sadun and S. Sande, Talking to Siri: Mastering the Language of Apple's
Intelligent Assistant, 2014.
