Mobile Computing Through Telephony


MOBILE COMPUTING

THROUGH TELEPHONY
Prakash Patil
B.V.B. College of Engg. & Tech., Hubli
Evolution of Telephony
 The first telephone system was developed by Alexander Graham Bell.
 It allowed two-way voice communication between two individuals at two locations on either side of a wire.
 Calling party – the person who makes a call
 Called party – the person who responds to the call
 In analog telephony, the purpose of interconnecting two subscribers was to establish a physical connection between their respective telephone devices.
Evolution of Telephony
 In the early days, each telephone was connected to a central place – the exchange.
 At the exchange, an operator would manually connect the call to another subscriber.
 Billing information was maintained manually.
 Trunk call – a call to someone connected to another exchange. The call had to be set up by a whole chain of operators, each one calling the next, and so on.
Evolution of Telephony
 1890 – development of the first automatic telephone exchange, called the "Strowger switch" after its originator, Almon B. Strowger.
 1892 – the first automatic exchange was installed, eliminating human intervention.
 1912 – Swedish engineer Gotthilf Betulander patented an automatic switching system based on an electromechanical grid, called the crossbar exchange.
Evolution of Telephony
 1960 – the Electronic Switching System (ESS) was developed at AT&T's Bell Labs.
 1962 – the carrier system was made digital.
 1976 – Bell Labs developed the 4ESS toll switch for the long-distance voice network (the first digital circuit switch).
 1960s–70s – telephone exchanges came to be controlled by processors and software.
Pulse Code Modulation (PCM)
 Used for digital modulation – digitizing the analog voice signal.
 Audio voice band: 0–4 kHz.
 The measured amplitude is converted to a number (the quantization process) and represented by 8 bits.
 A snapshot of the voice signal amplitude is taken every 1/8000th of a second, i.e., at 8 kHz – double the highest voice frequency of 4 kHz.
 This gives 8000 samples/s × 8 bits = 64 kbps per voice channel.
 1962 – Bell Labs introduced digital transmission using PCM.
Manual Exchange
 Manual exchange – operator intelligence was the control system.
 An operator, alerted to an incoming call:
 Listens to and remembers the desired number
 Finds the right way to connect the caller's line to the line being called
 Checks if the desired line is free
 Makes the connection
 Notes down the call details: time of call, duration of call, calling number and called number
Automated Exchange
 Indicates the progress of the call to the caller.
 A series of distinct tones is generated by a machine called the ring generator.
 Dial Tone (DT) – signal applied to the line after the calling party has lifted the handset.
 Busy Tone (BT) – indicates that the route to the called subscriber is congested or that the called subscriber is off-hook.
 Ring Tone (RT) – tone generated after the circuit is established between the two parties.
PSTN – Public Switched Telephone Network

 The normal (fixed-line) telephone system
 Local exchange – used for the connection of subscribers; also called the end office or local access tandem
 Transit exchange – switches traffic within and between different geographical areas
 Local loop – a physical cable laid from the local exchange to the telephone device at each subscriber's premises; also called the last-mile link
Multiple Access Procedures
 In the PSTN, a separate wire is used to connect each subscriber's telephone to the switch.
 Multiple users can have speech communication at the same time without causing any interference to each other.
 Over a radio channel, unless simultaneous access by users is controlled, collisions can occur.
 In connection-oriented communication, collisions are undesirable.
Multiple Access Procedures
 Every mobile subscriber must be assigned a
dedicated channel on demand.
 Achieved using different multiplexing techniques.
 FDMA- Frequency Division Multiple Access
 TDMA- Time Division Multiple Access
 CDMA- Code Division Multiple Access
 SDMA- Space Division Multiple Access
Multiple Access Procedures
FDMA – Frequency Division Multiple Access

 One of the most common multiplexing procedures.
 The available frequency band is divided into channels of equal bandwidth.
 Each communication is carried on a different frequency.
 Used in all first-generation analog mobile networks, such as AMPS (Advanced Mobile Phone System) in the USA and TACS (Total Access Communication System) in the UK.
TDMA – Time Division Multiple Access

 A more expensive technique compared to FDMA.
 Needs precise synchronization between transmitter and receiver.
 Used in digital mobile communication.
 The whole frequency bandwidth is first divided into sub-bands using the FDMA technique.
 TDMA is then used within each of these sub-bands to offer multiple access.
 GSM uses such a combination of FDMA and TDMA.
CDMA – Code Division Multiple Access

 A broadband system.
 Uses a spread-spectrum technique in which each subscriber uses the whole system bandwidth.
 All subscribers in a cell use the same frequency band simultaneously.
 To separate the signals, each subscriber is assigned an orthogonal spreading code, whose individual bits are called "chips"; for example, the codes (+1, +1) and (+1, −1) are orthogonal because their dot product is zero.
SDMA – Space Division Multiple Access

 Makes use of space effectively.
 Uses different parts of the space for multiplexing.
 Used in radio transmission; particularly useful in satellite communications, where the directional properties of antennas are used to optimize the use of the radio spectrum.
 In SDMA, antennas are highly directional, allowing the same frequencies to be reused at the same time in multiple surface zones on earth.
 Requires careful choice of zones for each transmitter and precise antenna alignment.
Mobile Computing through Telephone

 Accessing applications and services through a voice interface.
 Referred to as Computer Telephony Interface (CTI).
 Example: a telephone banking application.
 Input – the telephone keypad
 Output – synthesized voice
Mobile Computing through Telephone

 Toll-free service – only one number is published.
 The number is not attached to any exchange or specific city.
 Advantages
 The user remembers only one number
 The same number can be called from anywhere
 The numbers are generally toll-free
 The caller need not worry about call charges
Mobile Computing through Telephone

 Interactive Voice Response (IVR)


 Voice Response Unit (VRU)
 Computer Telephony (CT)
 Computer Telephony Interface / Integration (CTI)
 IVR software can be hosted on Windows NT, Linux, or any other computer equipped with a voice card.
 One of the most popular voice card vendors is Intel/Dialogic.
IVR Architecture
 IVR works as the gateway between a voice-based telephone system and a computer system.
 Multiple telephone lines are connected to the voice card through a telecom interface.
IVR Architecture
 A call is received by the voice card within the IVR.
 The voice card answers the call.
 It establishes the connection between the caller and the IVR application.
IVR Architecture
 The switch can be either a PSTN exchange or a local PBX in the office.
 The telephone keypad has 12 keys (0, 1, 2, …, 9, *, #).
 It is possible to enter alphabetic data through the telephone keypad.
IVR Architecture
 Mapping of the alphabet onto the telephone keypad
IVR Architecture
 It is possible to enter alphabetic data through the telephone keypad by pressing a key multiple times in succession.
 Ex. DELHI is entered as 3-3 (D), 3-3-3 (E), 5-5-5-5 (L), 4-4-4 (H), 4-4-4-4 (I)
 Key inputs are received by the voice card as DTMF (Dual Tone Multi-Frequency) tones.
IVR Architecture
 Each key press generates a combination of two frequencies (one row tone and one column tone).
 Ex: the user presses 2 three times (2-2-2).
 The voice card will receive 697 Hz + 1336 Hz, 697 Hz + 1336 Hz, 697 Hz + 1336 Hz.
 By looking at the time interval between the digits, the program can decide whether the user entered 222 or the letter B.
IVR Architecture
 When the application needs to send output to the user, the standard data is converted into voice, either by assembling voice files or through Text-to-Speech (TTS) conversion software.
 The IVR system assembles a series of prerecorded voice prompts to generate the equivalent sound response.
 Alternatively, a TTS interface can be used to convert the text into speech.
 Different TTS engines are available for different languages.
Overview of the Voice Software
 Encompasses the processing and manipulation of an audio signal in a Computer Telephony (CT) system.
 Supports filtering, analyzing, recording, digitizing, compressing, storing, and replaying audio/voice.
 Most voice cards come with the industry-standard Peripheral Component Interconnect (PCI) interface.
 The PCI interface makes it possible to integrate these voice products into Windows or Linux systems quite easily.
Inside IVR
 A popular voice card for a small office is the D/41JCT-LS from Dialogic.
 It is a 4-port analog converged communications board for voice, fax, and software-based speech recognition.
 It possesses a dual-processor architecture, comprising a Digital Signal Processor (DSP) and a general-purpose microprocessor.
 It provides four telephone-line interface circuits for direct connection to analog loop-start lines through an RJ-11 interface.
Voice Driver and API
 Dialogic Voice Driver APIs
 Many vendors around the world use Dialogic cards from Intel in IVR systems.
 The driver in an IVR system is used to communicate with and control the voice hardware.
 The voice driver can make calls, answer calls, identify the caller ID, play and record sound over the telephone line, and detect DTMF signals dialed by the caller.
 It can tear down the call and detect when the caller has hung up.
 It offers APIs to record the transaction details.
 Transaction information is required for audit trails and charging.
IVR Programming
 Voice libraries are provided by Dialogic to interface with the voice driver.
 Single-threaded and multi-threaded models are supported.
 Libdxxmt.lib – the main voice library
 Libsrlmt.lib – the standard run-time library
IVR Programming
 Use of Libraries
 Utilize all the voice board features of call management
 Write applications using the single-threaded asynchronous or the multi-threaded synchronous paradigm.
 Configure devices
 Handle the events that occur on the devices.
 Return device information
 Gather call transaction details
Single-Threaded Asynchronous Programming Model

 Enables a single program to control multiple voice channels within a single thread.
 Allows the development of complex applications where multiple tasks must be coordinated simultaneously.
 Supports polled and callback event management.
Multi-Threaded Synchronous Programming Model

 The application controls each channel from a separate thread or process.
 The operating system can put individual device threads to sleep.
 When a Dialogic function completes, the OS wakes up the function's thread so that processing continues.
 Distinct applications can be assigned to different channels.
Voice APIs
 Dialogic provides different APIs.
 APIs are available for
 device management,
 configuration functions,
 input/output functions,
 play and record functions,
 tone detection functions,
 tone generation functions,
 call control functions, etc.
Developing IVR application
 The user interface in an IVR application is called the CALL FLOW.
 The call flow defines how the call will be managed.
 Note down the precise prompts that are played as output.
 Prompts are generally prerecorded by people with professional voices.
Call Flow for Theater Ticket Booking Application
VoiceXML
 In mobile computing through telephony, the IVR is connected to the server through a client/server architecture.
 Today, the Internet (HTTP) is used in addition to the client/server interface between the IVR and the server.
 HTTP is used for voice portals.
 This increases the flexibility of the mobile computing architecture.
VoiceXML
 Voice portal – a site where the user accesses Internet content through a voice or telephone interface.
 All these advanced features led to the introduction of VoiceXML.
 Recent IVRs are equipped with DSP (Digital Signal Processing) and are capable of recognizing voice.
 Output is synthesized voice through TTS.
VoiceXML – Voice eXtensible Markup Language

 An XML-based markup language for creating distributed voice applications.
 Designed for creating audio dialogs.
 We can create web-based voice applications that users can access through a telephone.
 Features of VoiceXML
 Synthesized Speech
 Digitized Audio
 Recognition of Spoken English
 DTMF Key Input
Architectural Model
 A document server (web server).
 Applications run on the VoiceXML interpreter, within the VoiceXML interpreter context.
 The server delivers VoiceXML documents, which are processed by the VoiceXML interpreter.
Architectural Model
 The VoiceXML interpreter context is responsible for:
 Detecting an incoming call
 Acquiring the initial VoiceXML document
 Answering the call
How Voice XML Fits into Web Environment

 A visual (GUI) web browser issues HTTP requests and renders the returned information for the user as text, multimedia, audio, etc.
 The voice browser extends this paradigm.
 A voice server is added to the web environment.
How Voice XML Fits into Web Environment

 The voice server manages several voice browser sessions.
 Each voice browser session includes one instance of the voice browser, the speech recognition engine, and the Text-to-Speech engine.
How Voice XML Fits into Web Environment

 The voice browser presents information to the caller as audio, using VoiceXML.
 When the caller says something, the voice browser sends an HTTP request to the web server, and the returned information is presented as audio. This arrangement is called a voice portal.
The Voice Browser
 Using a voice browser, we can interact with a web server using our voice and a telephone.
 The voice browser renders and interprets VoiceXML documents.
 We use voice and a telephone (even the phone keypad) to access web information and services.
Dialogs
 A VoiceXML application defines a series of dialogs between the user and the computer.
 Two types of dialogs can be implemented in VoiceXML:
 Forms – collect values for a set of fields
 Menus – present the user with a set of choices and transition to another dialog based on the choice selected (see the menu sketch below)
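 A minimal sketch of a <menu> dialog, assuming a ticket-booking theme; the choice words and destination URIs are illustrative, not from the slides:

      <menu>
        <prompt>Say tickets to book a ticket, or timings to hear show timings.</prompt>
        <choice next="http://example.com/tickets.vxml">tickets</choice>
        <choice next="http://example.com/timings.vxml">timings</choice>
        <!-- Played when the caller's response matches neither choice -->
        <nomatch>Please say tickets or timings.</nomatch>
      </menu>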
Essential Elements of Voice XML Documents

 The first line contains the <?xml?> declaration.
 The second line contains the <vxml> element, the root element of the document, as sketched below.
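 A minimal sketch of a complete VoiceXML 2.0 document; the greeting text is illustrative:

      <?xml version="1.0" encoding="UTF-8"?>
      <vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
        <form>
          <block>
            <!-- The prompt is spoken to the caller, then the document ends -->
            <prompt>Welcome to the theater ticket booking service.</prompt>
          </block>
        </form>
      </vxml>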
Prompts
 In a VoiceXML application, information is presented to the user through audio prompts.
 Prompts can be prerecorded audio or synthesized speech.
 The <prompt> element is used in VoiceXML to generate TTS output.
 Any text within the body of a <prompt> element is spoken.
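 A small sketch of a <prompt> that plays prerecorded audio with synthesized speech as a fallback; the file name welcome.wav is an assumption:

      <block>
        <prompt>
          <!-- If welcome.wav cannot be played, the text inside <audio> is spoken via TTS -->
          <audio src="welcome.wav">Welcome to the telephone banking service.</audio>
        </prompt>
      </block>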
Grammar
 Each dialog has one or more speech and/or DTMF grammars associated with it.
 In VoiceXML, the <grammar> element is used to define what the caller can say to the application at any given time.
 Three different types of grammars are supported:
 Inline
 External
 Built-in (see the sketch below)
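 A sketch of a built-in grammar; in VoiceXML 2.0 a built-in grammar is selected with the type attribute of a <field> rather than a <grammar> element. The field name and prompt are illustrative:

      <field name="pin" type="digits">
        <!-- The built-in digits grammar accepts spoken digits or DTMF key presses -->
        <prompt>Please say or key in your personal identification number.</prompt>
      </field>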
Grammar-inline Grammars
 Inline grammars: defined directly in the VoiceXML code.
 The words and phrases that the caller is allowed to say are defined within the body of the <grammar> element.
 Each word is separated by "|", which means OR.
 Example:
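 A sketch of an inline grammar using the "|" (OR) notation described above; the word list is illustrative, and some platforms may also require a type attribute (for example application/x-jsgf):

      <field name="city">
        <prompt>Which city do you want to fly to?</prompt>
        <!-- The caller may say any one of these words -->
        <grammar> delhi | mumbai | hubli | bangalore </grammar>
      </field>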
Grammar-External Grammars
 External grammars are those specified outside of the VoiceXML code.
 The grammar is in another file and is referenced from within the VoiceXML code.
 The <grammar> element with a src attribute is used to specify an external grammar.
 Example:
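 A sketch of an external grammar referenced through the src attribute of <grammar>; the file name and MIME type are assumptions:

      <field name="city">
        <prompt>Which city do you want to fly to?</prompt>
        <!-- The list of allowed city names lives in a separate SRGS grammar file -->
        <grammar src="cities.grxml" type="application/srgs+xml"/>
      </field>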
Form
 A form is one of the ways of developing a dialog with the caller in VoiceXML.
 A form is, basically, a collection of one or more fields that the caller fills in by saying something.
 Forms are central to VoiceXML.
 A VoiceXML form is a process to present information and gather input from the caller.
 In VoiceXML we cannot see the fields; instead of typing into a field, we say something to fill it in.
Form Example
 In VoiceXML, forms are defined using the <form> element, and fields within the form are defined using the <field> element.
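 A sketch of a simple form with one field, loosely following the ticket-booking call flow; the prompts, grammar words, and submit URL are illustrative:

      <form id="booking">
        <field name="show">
          <prompt>Which show would you like to book, matinee or evening?</prompt>
          <grammar> matinee | evening </grammar>
          <filled>
            <!-- Runs once the caller has filled the field -->
            <prompt>You selected the <value expr="show"/> show.</prompt>
            <submit next="http://example.com/book" namelist="show"/>
          </filled>
        </field>
      </form>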
Events
 VoiceXML provides a form filling mechanism for
handling ‘normal’ user input.
 VoiceXML defines a mechanism for handling
events not covered by the form mechanism.
 Events are thrown by the platform under a variety of circumstances, such as when the user does not respond, does not respond intelligibly, requests help, etc.
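 A sketch of event handlers attached to a field, covering the situations listed above; the prompt wording and grammar reference are illustrative:

      <field name="account">
        <prompt>Please say your account number.</prompt>
        <grammar src="account.grxml" type="application/srgs+xml"/>
        <!-- Thrown when the caller stays silent -->
        <noinput>Sorry, I did not hear anything. <reprompt/></noinput>
        <!-- Thrown when the input does not match the grammar -->
        <nomatch>Sorry, I did not understand that. <reprompt/></nomatch>
        <!-- Thrown when the caller asks for help -->
        <help>Please say the digits of your account number, one at a time.</help>
      </field>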

Links
 A link supports mixed initiative.
 It specifies a grammar that is active whenever the user is in the scope of the link.
 If the user's input matches the link's grammar, control transfers to the link's destination URI.
 A <link> can alternatively be used to throw an event instead of going to a destination URI.
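 A sketch of a <link> whose grammar stays active throughout its scope; the destination URI and words are illustrative:

      <link next="http://example.com/operator.vxml">
        <!-- Saying "operator" or "agent" at any point transfers control to the destination URI -->
        <grammar> operator | agent </grammar>
      </link>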
VoiceXML Elements
Telephony Application Programming Interface (TAPI)

 TAPI – a higher-level framework for developing voice services.
 SAPI – Speech Application Programming Interface.
 SAPI and TAPI are two standards that can be used when developing voice telephony applications.
 Developed jointly by Intel and Microsoft.
Telephony Application Programming Interface (TAPI)

 Advantages – programmers can use different telephone systems (PSTN, ISDN, and PBX – Private Branch Exchange) without having to understand all their details.
 Use of the API saves programmers the time and pain of trying to program the hardware directly.
 Through TAPI and SAPI, a program can talk over telephones, video phones, or phone-connected resources.
Telephony Application Programming Interface (TAPI)

 Simple user interface to set up calls – calling someone by clicking their picture
 Simple GUI to set up a conference call
 See who you are talking to
 Attach a voice greeting to an email
 Send and receive faxes
Questions and Assignments
