Multimedia Communication
Telephone Networks
1.1 Basics of Switching Systems and Telephone Networks ..................................2
1.1.1 Elements of Switching system.................................................................2
1.2 Signaling Tones:............................................................................................5
1.3 Switching System Classification:...................................................................6
1.4 Switching Network Configuration: .................................................................8
1.5 Stored Program Control.................................................................................9
1.5.1 Centralized SPC.......................................................................................9
(a) Standby mode...........................................................................................9
(b) Synchronous duplex mode ......................................................................10
(c) Load sharing mode .................................................................................10
1.5.2 Distributed SPC .....................................................................................11
1.6 Two (2) Stage Network .................................................................................11
1.7 Subscriber Loop System ..............................................................................12
1.8 Switching Hierarchy And Routing .................................................................12
1.9 Signaling Techniques ..................................................................................13
1.10 Signaling Techniques Comparison ..............................................................13
1.10.1 Network Traffic Load And Its Parameters.............................................13
1.11 GOS (Grade of Service)...............................................................................14
1.12 EPABX: (Electronic Private Automatic Branch Exchange) ..........................15
2 Multimedia Communication (3361106)
A telecommunication system can be divided into four main parts. They are
End Systems or Instruments. An end system or instrument is a transmitter or receiver
responsible for sending information, or for decoding and converting a received message into an
intelligible form. End systems in the telephone network have evolved from analog telephones to
digital handsets and cellular phones, and an endless array of other devices is now attached to
telephone lines, including computer terminals used for data transmission. Fig. 1.3 shows some
of the end instruments.
Transmission System. Signals generated by the end systems must be transported to the
destination by some means. The transmission links convey the information and control signals
between the terminals and switching centers. A transmission link can be characterized by its
bandwidth, link attenuation and propagation delay. To maintain signal quality, the signal must
be regenerated after a certain distance. In general, a communication path between two distinct
points can be set up by connecting a number of transmission lines in tandem. The transmission
links include two-wire lines, coaxial cables, microwave radio, optical fibers and satellites.
Functionally, the communication channels between switching systems are referred to as trunks.
Fig. 1.4 shows the various possible transmission media.
Switching System. The switching centers receive the control signals, messages or
conversations and forward them to the required destination, after necessary modification (such
as amplification) if required. A switching system is a collection of switching elements arranged
and controlled in such a way as to set up a communication path between any two distant points.
A switching center of a telephone network, comprising a switching network and its control and
support equipment, is called a central office. In computer communication, the switching
technique used is known as packet switching or message switching (store-and-forward
switching). In telephone networks the switching method used is called circuit switching. Some
practical switching systems are step-by-step, crossbar relay systems, digital switching systems,
electronic switching systems, etc.
A signal tone or signaling tone is a steady periodic sound used to indicate a condition, for
example on a telephone line or as an audible warning.
In telephone systems, these tones are in-band audio-frequency indicators to subscribers, as
opposed to the in-band and out-of-band signaling tones exchanged between switching systems.
Typical major signal tones are the dial tone, ringing tone, busy tone, and number unavailable
tone. A loud stutter warning tone is used to alert a subscriber that the other party on an engaged
circuit has disconnected, as a reminder that the handset is not properly hung up.
Dial Tone: This tone indicates that the exchange is ready to accept dialed digits from the
subscriber.
Ringing Tone: The exchange control equipment sends the ringing signal to the telephone set of
the called party. This tone has a distinctive pattern compared with the other signal tones.
Busy Tone: A busy tone is sent to the calling subscriber whenever the switching path or junction
line is not available to put the call through, or the called subscriber's line is busy (engaged).
Number Unavailable Tone: This tone is sent to the calling subscriber for a variety of reasons,
such as the called party's line being out of order or disconnected, or an error in dialing leading
to the selection of a spare line.
Routing Tone Or Call-In-Progress Tone: The routing tone or call-in-progress tone is a 400 Hz
or 800 Hz intermittent pattern. In electromechanical systems, it is usually 800 Hz with 50 per
cent duty ratio and 0.5s on/off period. In analog electronic exchanges it is a 400 Hz pattern with
0.5s on period and 2.5s off period.
The manual switch was operator oriented and hence offered limited services with low priority.
Electromechanical switches are of two types: Strowger and crossbar. In the Strowger switch,
instead of dedicating an expensive first-stage selector switch to each customer as in the first
exchanges, the customer was given access to a shared first-stage switch, often by a line-finder
which searched "backward" for the calling line, so that only a few relays were required for each
customer line. The Crossbar Switch. A crossbar switch is a switch connecting multiple inputs
to multiple outputs in a matrix manner.
[Fig: classification of switching systems: manual and automatic; automatic switches divide into
Strowger, space division and time division types, with space switch, time switch and
combination switch under time division]
[Fig: basic switching network with N inlets and M outlets]
As shown in the figure, a basic switching network, fig. (a), has input and output lines known in
telecommunication as inlets and outlets respectively. The hardware used for such a switching
operation is known as a switching network, where N denotes the number of inlets and M the
number of outlets. A network in which the number of inlets equals the number of outlets
(N = M) is known as a symmetric network, as shown in fig. (b).
If the inlets/outlets are connected back to the subscriber lines, the switching network is called a
"folded network", as shown in fig. (c). In a folded network with N subscribers, a maximum of
N/2 simultaneous switching paths can be available. A folded network that provides N/2 paths
is said to be non-blocking: a connection can always be made as long as the subscriber lines are
free. If no switching path is available and the connection is denied, the network is said to be a
blocking network.
When a switching network has more inlets than outlets, there is a probability that a user may be
blocked because of the smaller number of outlets; this probability is known as the blocking
probability. Exchanges are designed to handle the maximum traffic expected in a particular
time period. The maximum traffic handled by the exchange during an hour is known as the
"busy hour traffic". In telecommunication, call traffic intensity is measured in Erlangs (E).
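Traffic intensity in Erlangs is simply the product of the call arrival rate and the average holding time. A minimal sketch in Python, with illustrative call counts and holding times that are assumptions, not figures from the text:

```python
# Traffic intensity: A = (calls in the busy hour * average holding time) / 3600 s.
# The call count and holding time below are illustrative, not from the text.

def traffic_intensity(calls_per_hour: int, avg_holding_time_s: float) -> float:
    """Offered traffic in Erlangs (E)."""
    return calls_per_hour * avg_holding_time_s / 3600.0

# Example: 120 busy-hour calls, each lasting 3 minutes on average.
busy_hour_traffic = traffic_intensity(120, 180)  # 120 * 180 / 3600 = 6.0 E
```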
1. Standby mode
2. Synchronous duplex mode
3. Load sharing mode
[Fig: centralized SPC dual-processor configuration in the exchange environment, with
duplicated EM & DP, CP and O&MP units]
Distributed SPC is both more available and more reliable than centralized SPC.
Vertical decomposition:
The whole exchange is divided into several blocks and a processor is assigned to each block.
This processor performs all the tasks related to that specific block, so the total control system
consists of several control units coupled together. For redundancy, the processor may be
duplicated in each block.
Horizontal decomposition:
In this type of decomposition, each processor performs only one or a few of the exchange
functions.
For any single-stage network there exists an equivalent multistage network, as shown in the
figure above. Thus an NxN single-stage network with capacity K can be realized by a two-stage
network of NxK and KxN stages.
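The saving from the two-stage arrangement can be seen by counting crosspoints: an NxN single stage needs N*N of them, while the NxK plus KxN arrangement needs 2*N*K. A small sketch, with illustrative values of N and K chosen here only as assumptions:

```python
# Crosspoint counts: an N x N single-stage switch needs N*N crosspoints, while
# the equivalent two-stage network of N x K and K x N stages needs 2*N*K.
# The values of N and K below are illustrative assumptions.

def single_stage_crosspoints(n: int) -> int:
    return n * n

def two_stage_crosspoints(n: int, k: int) -> int:
    return n * k + k * n  # first stage N x K, second stage K x N

n, k = 128, 16
print(single_stage_crosspoints(n))   # 16384
print(two_stage_crosspoints(n, k))   # 4096
```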
Routing Methods:
• Right-through routing
• Own-exchange routing
• Computer- controlled routing
Signaling techniques are classified as follows:
• In-channel signaling: D.C. signaling, low-frequency signaling, voice-frequency signaling
(in-band and out-band), and PCM signaling
• Common-channel signaling: associated and non-associated
• The signaling techniques link the variety of switching systems, transmission systems and
subscriber equipment in a telecommunication network to enable the network to function
as a whole.
• There are three forms of signaling involved in a telecommunication network:
• Subscriber loop signaling
• Intraexchange or register signaling
• Interexchange signaling
In-channel signaling is also known as per-trunk signaling; it uses the same channel that carries
the user's voice or data to pass the control signals related to that call or connection, so no
additional facilities are required.
Common Channel signaling (CCS) does not use the speech or data path for signaling. It uses a
separate common channel for passing control signals for a group of trunks or information paths.
Generally there is a peak of calls around 10.00 hours, just before people leave their homes on
outings, and another peak occurs in the evening.
GOS is measured as call congestion (the probability of loss), whereas blocking probability is
measured as time congestion (the fraction of time during which all servers or lines are busy).
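The blocking probability for a given offered traffic and number of trunks is commonly computed with the Erlang B formula. The sketch below uses the standard iterative recurrence; the traffic and trunk figures are illustrative assumptions:

```python
# Erlang B blocking probability via the standard recurrence
# B(0) = 1,  B(k) = A * B(k-1) / (k + A * B(k-1)).
# The offered traffic and trunk count below are illustrative assumptions.

def erlang_b(offered_traffic_e: float, trunks: int) -> float:
    """Probability that a call finds all `trunks` busy (the grade of service)."""
    b = 1.0
    for k in range(1, trunks + 1):
        b = offered_traffic_e * b / (k + offered_traffic_e * b)
    return b

# Example: 6 E of busy-hour traffic offered to 10 trunks.
gos = erlang_b(6.0, 10)  # roughly 0.043, i.e. about 4.3% of calls blocked
```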
EPABX is an abbreviation that stands for Electronic Private Automatic Branch Exchange. It
comes under the category of business phone systems which serve a business environment.
Multi-line connections can be made through a single telephone connection. It is office
equipment of immense use for telephonic connectivity, providing extensions of a single phone
line, and it can also be rented from office equipment suppliers.
The value of the eccentricity e can be determined from the semi-major axis a and semi-minor
axis b as:
e = √(a^2 - b^2) / a
From figure 2.2 and considering the law stated above, if the satellite travels distances S1 and S2
meters in 1 second, then areas A1 and A2 will be equal. The same area is swept in any given
interval of time regardless of where in its orbit the satellite is. Since Kepler's first law states that
the satellite follows an elliptical orbit around the primary, the satellite is at different distances
from the planet at different parts of the orbit. Hence the satellite must move faster when it is
closer to the Earth, so that it sweeps out equal areas in equal times.
(c) Kepler’s Third Law
“The square of the periodic time of orbit is proportional to the cube of the mean distance between
the two bodies. “
The square of the orbital period of a planet is directly proportional to the cube of the semi-major
axis of its orbit. This law shows the relationship between the distances of satellite from earth and
their orbital period.
The mean distance is equal to the semi-major axis a. For artificial satellites orbiting the earth,
Kepler's third law can be written in the form
a^3 = μ / n^2
where n is the mean motion of the satellite in radians per second and μ is the earth's geocentric
gravitational constant. Its value is
μ = 3.986005 × 10^14 m^3/s^2
With n in radians per second, the orbital period in seconds is given by
P = 2π / n
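Kepler's third law can be used directly to find a satellite's period from its semi-major axis. A sketch in Python, assuming the commonly quoted GEO semi-major axis of about 42164 km (an assumption, not a figure from the text):

```python
import math

# Kepler's third law, a^3 = mu / n^2, rearranged to give the period P = 2*pi/n.
# The GEO semi-major axis of 42164 km is a commonly quoted value, used here
# as an assumption.

MU = 3.986005e14  # earth's geocentric gravitational constant, m^3/s^2

def orbital_period_s(semi_major_axis_m: float) -> float:
    n = math.sqrt(MU / semi_major_axis_m ** 3)  # mean motion, rad/s
    return 2.0 * math.pi / n

period = orbital_period_s(42164e3)  # about 86164 s, one sidereal day
```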
Apogee: The point in the orbit where the satellite is farthest from the Earth. The apogee height
is denoted ha.
Perigee: The point in the orbit where the satellite is closest to the Earth. The perigee height is
denoted hp.
Line of Apsides: The line joining the perigee and apogee through the centre of the Earth. It is
the major axis of the orbit; one-half of this line's length is the semi-major axis, which equals
the satellite's mean distance from the Earth.
Ascending Node: The point where the orbit crosses the equatorial plane going from south to north.
Descending Node: The point where the orbit crosses the equatorial plane going from north to south.
Subsatellite path: This is the path traced out on the earth’s surface directly below the satellite.
The attitude and orbit control subsystem (AOCS) provides attitude information and maintains the
required spacecraft attitude during all phases of the mission, starting at spacecraft separation from
the launch vehicle and throughout its operational lifetime. The subsystem consists of redundant
microprocessor-based control electronics, sun and earth sensors, gyros, momentum wheels
(MWs), a reaction wheel (RW), magnetic torquers, thrusters, and solar array and trim tab
positioners.
Normal on-orbit attitude control operations are based on a momentum bias concept that provides
precise pointing of the Imager and Sounder, communications service equipment, and scientific
instruments. Control is accomplished by applying torque to internal MWs and the RW or by
modulating the current applied to roll and/or yaw magnetic torqueing coils. Attitude control during
orbit maneuvers is provided by twelve 22-N bipropellant thrusters. Control during transfer orbit
uses thrusters only, without momentum bias. The fully redundant AOCS represents some 100 kg
of electronics: a dozen black equipment boxes containing the computers, reaction wheels, sun
and inertial sensors, power supplies and associated cabling.
TT&C Functions
Carrier tracking
o Two-way coherent communication
– Transmitter phase-locks to the received frequency
– Transmitted frequency is a specific ratio of the uplink frequency
o Easy to find and measure the frequency received on the ground
o Doppler shift provides range rate
Command reception and detection
Telemetry modulation and transmission
Ranging
o Uplink pseudo-random code is detected and retransmitted on the downlink
o Turnaround time provides range
o Ground antenna azimuth and elevation
Determines satellite angular location
Subsystem operations
o Receive commands from Command and Data Handling subsystem
o Provide health and status data to CD&H
o Perform antenna pointing
o Perform mission sequence operations per stored software sequence
o Autonomously select omni-antenna when spacecraft attitude is lost
Command System
Reconfigures satellite or subsystems in response to radio signals from the ground
Command timing
– Immediate
– Delayed
– Priority driven (ASAP)
Command Functions
Power on/off subsystems
Change subsystem operating modes
Control spacecraft guidance and attitude control
Deploy booms, antennas, solar cell arrays, protective covers
Upload computer programs
The high-gain antenna must be pointed accurately and is therefore steered using the gimbal
mechanism.
(b) Low-gain Antennas
Two smaller antennas provide lower-rate communication during emergencies and special events.
For example, they did this during launch and Mars Orbit Insertion. The data rate of these antennas
is lower because they focus the radio beam much more broadly than the high-gain antenna,
meaning less of the signal reaches Earth. The Deep Space Network on Earth, however, can "see"
the signal even when the spacecraft is not pointed at Earth. That is why these antennas are useful
for emergencies. Think of how a flashlight works: With a tightly focused beam of light you can
see farther directly ahead but not at all to the side. With a wide beam you can see all around you
but not very far. The low-gain antennas have the capability to both transmit and receive.
The signal received from the satellite is first amplified in a low noise amplifier and is then
down-converted to an intermediate frequency. It is then demodulated and decoded, and the
original baseband signal is obtained.
The isolation of the low noise receiver from the high power transmitter is of much concern in
the design of an earth station. There may also be satellite/earth terminal mutual interference
effects. Other sources of interference include terrestrial microwave relay links, sun transit
effects and intermodulation products generated in the transponder or earth terminal. Before
1983, the spacing between two GEO satellites was established at 4° of the equatorial arc, and
the smallest earth station antenna for simultaneous transmit-receive operation was 5 m in
diameter. Now the spacing allowed between two adjacent satellites is 2° along the equatorial
arc. The closer spacing allows twice as many satellites to occupy the same orbital arc.
3.1 Introduction................................................................................................32
3.1.1 ISDN Standards .....................................................................................32
3.2 ISDN Protocol Architecture .........................................................................33
3.3 (ISDN) Transmission channel .......................................................................33
3.3.1 Two Types of Digital Subscriber Loops...................................................33
3.3.2 ISDN Channels and their Applications ...................................................34
3.4 ISDN services ..............................................................................................34
3.4.1 Videotext ..............................................................................................35
3.4.2 Email ....................................................................................................36
3.4.3 Facsimile...............................................................................................36
3.5 Signalling: User Level, Network Level ..........................................................36
3.5.1 User Level .............................................................................................36
3.5.2 Network Level .......................................................................................36
3.6 Broadband ISDN ........................................................................................36
3.6.1 Interactive services ...............................................................................36
3.6.2 Distributive services..............................................................................37
ISDN Bridges: Because of its simplicity, bridging is one of the most popular ways of linking
LANs. The big problem with ISDN bridging is controlling the bridge's use of the ISDN network.
Bridges are simple to set up and use because they will forward data, such as broadcasts, by default.
Over ISDN, this means that calls will be made to send largely unnecessary data. Over a period of
time, this can prove very expensive.
ISDN Routers: Routing is a far more effective way to utilize ISDN for LAN internetworking,
and it is the approach being taken by virtually all internetworking vendors. Data is only sent
over the ISDN network when it is really needed. There are no unnecessary broadcast messages
to transmit, so the bandwidth is used more efficiently than with bridges and the configuration
can actually be simpler. Filters may be used to block out all unnecessary traffic.
The ISDN D channel uses different signaling protocols at Layer 2 and Layer 3 of the OSI model.
At Layer 2, LAP-D (Link Access Procedure - D channel), specified in Q.921, is used; at Layer 3,
DSS1 (Digital Subscriber Signaling System No. 1), specified in Q.931, is used. It is easy to
remember which is used at which layer: the middle digit of the recommendation number
corresponds to the layer at which it operates.
Supplementary services provide additional functionality to the bearer services and teleservices.
1. Bearer Services: Bearer services provide the means to transfer information (voice, data and
video) between users without the network manipulating the content of that information.
The network does not need to process the information and therefore does not change the
content. Bearer services belong to the first three layers of the OSI model and are well
defined in the ISDN standard. They can be provided using circuit-switched, packet-
switched, frame-switched, or cell-switched networks.
2. Teleservices: In teleservices, the network may change or process the contents of the data.
These services correspond to layers 4-7 of the OSI model. Teleservices rely on the
facilities of the bearer services and are designed to accommodate complex user needs,
without the user having to be aware of the details of the process. Teleservices include
telephony, teletex, telefax, videotex, telex and teleconferencing. Although the ISDN
defines these services by name, they have not yet become standards.
3. Supplementary Service: Supplementary services are those services that provide additional
functionality to the bearer services and teleservices. Examples of these services are reverse
charging, call waiting, and message handling, all familiar from today's telephone company
services.
3.4.1 Videotext
Videotext is a generic term for systems that provide easy-to-use, low-cost, computer-based
services via communication facilities.
Three forms of videotext exist:
1. View data: fully interactive videotext, e.g. video calling.
2. Teletext: a broadcast or pseudo-interactive videotext service. Teletext users may select
the information to be seen and the pace at which it is displayed.
3. Open channel teletext: a totally non-interactive, one-way videotext service.
3.4.2 Email
One of the most popular network services is electronic mail. The TCP/IP protocol that supports
electronic mail on the internet is called the Simple Mail Transfer Protocol (SMTP). Its facilities
include:
- Sending a single message to one or more recipients.
- Sending messages containing text, voice, video or graphics.
- Sending messages to users on networks outside the internet.
3.4.3 Facsimile:
Document exchange through facsimile system is emerging as a major application of
communication system.
A modern fax machine is faster and does not use a rotating drum, but it uses the same basic
mechanics to get the job done.
- Conversational
Conversational services are those, such as telephone calls, that support real-time exchanges.
- Messaging
Messaging services are store and forward exchanges. These services are bidirectional, meaning
that all parties in an exchange can use them at the same time.
- Retrieval
Retrieval services are those used to retrieve information from a central source, called an
information center. These services are like libraries: they must allow public access and allow
users to retrieve information on demand.
TEXT COMMUNICATION
Text communication encompasses a variety of forms and is one of the most common forms of
multimedia communication in a computer user's day-to-day activities. Text communication
includes such areas of Internet use as reading a website, reading and writing email messages and
instant messaging. Text communication is also the oldest form of multimedia communication, as
the first computers displayed text only.
IMAGE COMMUNICATION
Though images might not seem to be a form of communication in the same way that text is a form
of communication, it is a legitimate form of multimedia communication that many users enjoy
daily. Examples include browsing an online photo album, opening and viewing images attached
to an email and looking at photos that accompany stories on news websites.
AUDIO COMMUNICATION
A common form of Web-based multimedia communication is audio communication. This form
involves receiving a message through an audio format, such as listening to an online radio station
or playing a music file. If you use the Internet to stream a radio station broadcast, for example, you
are engaging in a form of audio communication. Audio communication often combines with other
forms of multimedia communication. A slideshow, for example, can feature text, images and audio
together.
VIDEO COMMUNICATION
As its name indicates, video communication is a form of multimedia communication through
video. It is common on many websites, including YouTube and the websites of television stations.
Since high-speed Internet has become common, video communication has increased as users are
able to access this form of multimedia communication. Types of video communication include
.AVI, MPEG, WMV and QuickTime files.
User Requirements
• Fast preparation and presentation of the different information types of interest, taking into
account the capabilities of available terminals and services
• Dynamic control of multimedia applications with respect to connection interactions and quality
on demand combined with user-friendly human/machine interfaces
• Intelligent support of users taking into consideration their individual capabilities
• Standardization
Network Requirements
• High speed and changing bit rates
• Several virtual connections using the same access
• Synchronization of different information types
• Suitable standardized services and supplementary service supporting multimedia applications
There is one major uncompressed audio format, PCM, which is usually stored as WAV on
Windows or as AIFF on Mac OS.
WAV is a flexible file format designed to store more or less any combination of sampling
rates or bitrates.
This makes it an adequate file format for storing and archiving an original recording.
A lossless compressed format would require more processing for the same time recorded,
but would be more efficient in terms of space used.
WAV encodes all sounds, whether complex sound or absolute silence, with the same number
of bits per unit of time. For example, a file containing a minute of playing by a symphonic
orchestra would be the same size as a minute of absolute silence if both were stored in WAV.
If the files were encoded in a lossless compressed audio format, the first file would be
marginally smaller and the second would take up almost no space at all. However, encoding the
files to a lossless format takes significantly more time than encoding them to WAV.
Lossless audio formats (such as FLAC, the most widespread, as well as WavPack and Monkey's
Audio) provide a compression ratio of about 2:1.
A lossy compression method is one where compressing data and then decompressing it
retrieves data that may well be different from the original, but is close enough to be useful
in some way.
Lossy compression is most commonly used to compress multimedia data (audio, video,
still images), especially in applications such as streaming media and internet telephony.
By contrast, lossless compression is required for text and data files, such as bank records,
text articles, etc.
MP3 (MPEG-1 Audio Layer 3) offers a very high rate of compression for audio files (about a
12:1 ratio) while preserving the original level of sound quality to the ear. Because of its high
quality at small size, MP3 has exploded in popularity, and many sites offer MP3 files for
download (many in violation of copyright). Digital audio is typically created by taking 44,100
16-bit samples per second (Hz) of the analog audio signal; this means that one second of
CD-quality sound requires 1.4 million bits (about 176K bytes) of data.
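The CD-quality figures quoted above can be checked directly. The arithmetic below assumes two (stereo) channels, which is what the 1.4 Mbit figure implies:

```python
# Checking the CD-quality arithmetic quoted above: 44,100 16-bit samples per
# second; the 1.4 Mbit figure implies two (stereo) channels, assumed here.

SAMPLE_RATE = 44_100      # samples per second
BITS_PER_SAMPLE = 16
CHANNELS = 2              # assumption: stereo, as on an audio CD

bits_per_second = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS
bytes_per_second = bits_per_second // 8

print(bits_per_second)    # 1411200, about 1.4 million bits
print(bytes_per_second)   # 176400, about 176K bytes
```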
Using a knowledge of how people actually perceive sound, the developers of MP3 devised a
compression algorithm that reduces data about sound that most listeners cannot perceive. MP3 is
currently the most powerful algorithm in a series of audio encoding standards developed under the
sponsorship of the Motion Picture Experts Group (MPEG) and formalized by the International
Organization for Standardization (ISO).
A lossless compression algorithm reduces file size while making no compromises in accuracy.
In contrast, lossy algorithms accept some degradation in the image in order to achieve smaller
file size.
A lossless algorithm might, for example, look for a recurring pattern in the file, and replace each
occurrence with a short abbreviation, thereby cutting the file size. In contrast, a lossy algorithm
might store color information at a lower resolution than the image itself, since the eye is not so
sensitive to changes in color of a small distance.
TIFF is, in principle, a very flexible format that can be lossless or lossy. The details of the image
storage algorithm are included as part of the file. In practice, TIFF is used almost exclusively as a
lossless image storage format that uses no compression at all. Most graphics programs that use
TIFF do not apply compression. Consequently, file sizes are quite big. (Sometimes a lossless
compression algorithm called LZW is used, but it is not universally supported.)
PNG is also a lossless storage format. However, in contrast with common TIFF usage, it looks for
patterns in the image that it can use to compress file size. The compression is exactly reversible,
so the image is recovered exactly.
GIF creates a table of up to 256 colors from a pool of 16 million. If the image has fewer than 256
colors, GIF can render the image exactly. When the image contains many colors, software that
creates the GIF uses any of several algorithms to approximate the colors in the image with the
limited palette of 256 colors available. Better algorithms search the image to find an optimum set
of 256 colors. Sometimes GIF uses the nearest color to represent each pixel, and sometimes it uses
"error diffusion" to adjust the color of nearby pixels to correct for the error in each pixel.
GIF achieves compression in two ways. First, it reduces the number of colors of color-rich images,
thereby reducing the number of bits needed per pixel, as just described. Second, it replaces
commonly occurring patterns (especially large areas of uniform color) with a short abbreviation:
instead of storing "white, white, white, white, white," it stores "5 white."
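The "5 white" substitution described above is run-length encoding. GIF actually uses the more sophisticated LZW algorithm, but the principle can be sketched as:

```python
from itertools import groupby

# Run-length encoding: replace each run of identical pixels with a
# (count, value) pair, as in "5 white" instead of five separate "white"s.
# GIF's real compressor is LZW; this only illustrates the idea.

def run_length_encode(pixels):
    return [(len(list(run)), value) for value, run in groupby(pixels)]

encoded = run_length_encode(["white"] * 5 + ["black"] * 2)
print(encoded)  # [(5, 'white'), (2, 'black')]
```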
Thus, GIF is "lossless" only for images with 256 colors or less. For a rich, true color image, GIF
may "lose" 99.998% of the colors.
JPG is optimized for photographs and similar continuous tone images that contain many, many
colors. It can achieve astounding compression ratios even while maintaining very high image
quality. GIF compression is unkind to such images. JPG works by analyzing images and discarding
kinds of information that the eye is least likely to notice. It stores information as 24 bit color.
Important: the degree of compression of JPG is adjustable. At moderate compression levels of
photographic images, it is very difficult for the eye to discern any difference from the original,
even at extreme magnification. Compression factors of more than 20 are often quite acceptable.
Better graphics programs, such as Paint Shop Pro and Photoshop, allow you to view the image
quality and file size as a function of compression level, so that you can conveniently choose the
balance between quality and file size.
RAW is an image output option available on some digital cameras. Though lossless, it is a factor
of three or four smaller than TIFF files of the same image. The disadvantage is that there is a
different RAW format for each manufacturer, and so you may have to use the manufacturer's
software to view the images. (Some graphics applications can read some manufacturer's RAW
formats.)
PSD, PSP, etc., are proprietary formats used by graphics programs. Photoshop's files have the
PSD extension, while Paint Shop Pro files use PSP. These are the preferred working formats as
you edit images in the software, because only the proprietary formats retain all the editing power
of the programs. These packages use layers, for example, to build complex images, and layer
information may be lost in the nonproprietary formats such as TIFF and JPG. However, be sure to
save your end result as a standard TIFF or JPG, or you may not be able to view it in a few years
when your software has changed.
Currently, GIF and JPG are the formats used for nearly all web images. PNG is supported by most
of the latest generation browsers. TIFF is not widely supported by web browsers, and should be
avoided for web use. PNG does everything GIF does, and better, so expect to see PNG replace
GIF in the future. PNG will not replace JPG, since JPG is capable of much greater compression of
photographic images, even when set for quite minimal loss of quality.
What is a codec?
A codec is used to compress and decompress a digital file. You are probably thinking: why in the
world would you need to compress a file and lose quality? Think of it this way: a normal
two-hour long movie on a Blu-ray disk would take up 20GB to 40GB of space. Such a large video
file will take too long if downloaded over the Internet. Hence, video files are compressed so that
they can be handled easily. A codec is used to do this task.
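To make the arithmetic concrete (the link speed and compressed size below are assumed values chosen for illustration):

```python
def download_minutes(size_gb, link_mbps):
    """Transfer time in minutes for size_gb gigabytes over a link_mbps link."""
    return size_gb * 8 * 1000 / link_mbps / 60   # GB -> megabits -> seconds -> minutes

blu_ray_gb = 30      # mid-range of the 20-40 GB Blu-ray figure above
encoded_gb = 4       # hypothetical size after codec compression
link_mbps = 50       # assumed broadband downlink

print(f"uncompressed: {download_minutes(blu_ray_gb, link_mbps):.0f} min")
print(f"compressed:   {download_minutes(encoded_gb, link_mbps):.1f} min")
# uncompressed: 80 min
# compressed:   10.7 min
```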
What is a container?
A container is a collection of files that stores information about the digital file; it consists of the
video and audio codecs along with other information, such as subtitles and chapters. You have
control over the type of codec you wish to choose for audio and video separately. The popular
types of containers are AVI, MP4 and MOV. A new open source container that’s gaining
popularity recently is MKV, the reason being that it can support nearly any video codec known.
MPEG-4 (.MP4)
MP4 is a video format mainly used by camcorders and cameras that is gaining popularity. The
quality of a video coded using .MP4 is very high and the file size relatively small. The .MP4 standard
is becoming more popular than .FLV for online video sharing, as it is compatible with both desktop
and mobile browsers and is also supported by the new HTML5 standard.
A decade ago, networked multimedia systems were capable of supporting mostly devices like
Personal computers and/or a small LAN set-up. However, with the advent of modern day wireless
technology, devices such as mobile-technology-enabled laptops and handheld devices such as palmtops
and PDAs also fall under active interactive devices. This means that the volume of traffic
now ranges from simple short media clips, images, and text messages to long-duration media data,
a multi-fold increase. Further, when compared to service architectures conceived from the
late 80s to mid-90s, modern-day services need to account for radically different issues in addition to
the existing issues in the design of a DMMS architecture. To appreciate this point, one can quickly
imagine the issues related to a mobile technology playing crucial roles such as ensuring continuous
network connectivity, graceful degradation of service quality under unavoidable circumstances,
replication of media data and maintaining consistency for editable data, if any, to quote a few.
In addition to such media-rendering service facilities, the purview of modern day DMMS extends
to entertainment in the form of games and casinos on networks.
Interactivity: Requires duplex communication between the user and the system and allows
each user to control the information
Technology integration: Integrates information, communication and computing systems
to form a unified digital processing environment.
Multimedia Integration: Accommodates discrete data as well as continuous data in an
integrated environment.
Real-time performance: Requires the storage system, processing system and transmission
systems to have real-time performance. Hence huge storage volume, high network
transmission rate and high CPU processing rate are required.
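A quick back-of-the-envelope calculation shows why continuous media demands such high transmission rates; the frame size and rate below are illustrative:

```python
width, height = 640, 480   # an assumed SD video frame
bits_per_pixel = 24        # true color
fps = 30                   # frames per second

bitrate_bps = width * height * bits_per_pixel * fps
print(f"uncompressed SD video: {bitrate_bps / 1e6:.0f} Mbps")
# uncompressed SD video: 221 Mbps
```

Even modest uncompressed video exceeds 200 Mbps, which is why compression, fast storage and high network transmission rates are all required together.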
4.9.1 ITV
Two-way cable TV system that enables the viewer to issue commands and give feedback
information through an electronic device called a set-top box. The viewer can select which program
or movie to watch, and at what time, and can place orders in response to commercials. New set-top boxes
also allow access to email and e-commerce applications via the Internet.
Interactive TV (ITV or iTV) is an approach to television advertising and programming that creates
the opportunity for viewers to communicate with advertisers and programming executives by
responding to a call to action.
Interactive television refers to technology where traditional TV services are combined with data
services. The major aim of interactive TV is to provide an engaging experience to the viewer.
Interactive TV increases engagement levels by allowing user participation and feedback. It can
also become part of a connected living room and be controlled using devices other than the remote
control, like mobile phones and tablets.
The return path is the channel that is used by viewers to send information back to the broadcaster.
This path can be established using a cable, telephone lines or any data communications technology.
The most commonly used return path is a broadband IP connection.
4.9.2 VOD
VoD and VoR are among the services most commonly used by network users. VoD is certainly an attractive
technology for rendering digital video libraries, distance learning, electronic commerce, etc., as (i)
users can request the service any time and from anywhere and (ii) the service allows users to have
complete control of the presentation. Despite continuous research efforts toward rendering quality VoD
service, the requirements on network resources, such as server bandwidth, cache space, number of
copies, etc., are still considered overwhelming owing to the exponential growth in client population.
When such fully interactive service demands are met, we say that the service is of type True-VoD.
While applications such as short movie-clip rendering, learning and video-conferencing now
deliver close to high-quality service, long-duration movie viewing with presentation
control still tends to be an annoying experience for users.
When users reserve a movie presentation in advance, VoD manifests in the form of VoR. Under
the VoR service, the system can utilize resources optimally because user viewing times are
known in advance. VoD and VoR are completely orthogonal in their service style. Perhaps VoR
is better suited for pay-per-view offered by SPs to digital TV subscribers. Another type of VoD service
is called Near VoD (NVoD); it distributes videos over multicast or broadcast
channels to serve requests that demand the same videos and arrive close together in time. These
technologies have been successful in providing video services in local area networks, say in hotels.
MPEG-1, a standard for storage and retrieval of moving pictures and audio on
storage media (approved November 1992); products such as Video CD and
MP3 are based on it
MPEG-2, a standard for digital television (approved November 1994); Digital
Television set top boxes and DVD are based on it
MPEG-4 version 1 and 2, a standard for multimedia applications (approved
October 1998 and December 1999, respectively ), for the fixed and mobile web
MPEG-7 a content representation standard for multimedia information search,
filtering, management and processing
1. DSM: develops standards for interfaces between Digital Storage Media (DSM), servers
and clients for the purpose of managing DSM resources and controlling the delivery of MPEG bit
streams and associated data.
2. Delivery: develops standards for interfaces between MPEG-4 applications and peers or
broadcast media, for the purpose of managing transport resources.
3. Systems: develops standards for the coding of the combination of individually coded audio,
moving images and related information so that the combination can be used by any application.
4. Video: develops standards for coded representation of moving pictures of natural origin.
5. Audio: develops standards for coded representation of audio of natural origin.
6. SNHC: Synthetic- Natural Hybrid Coding: develops standards for coded representation of
audio and moving pictures of natural and synthetic origin. SNHC concentrates on the coding of
synthetic data.
7. Test: develops methods for and executes subjective evaluation tests of the quality of
coded audio and moving pictures, both individually and combined, to test the quality of moving
pictures and audio produced by MPEG standards
8. Implementation: evaluates coding techniques so as to provide guidelines to other groups
upon realistic boundaries of implementation parameters.
9. Liaison: handles relations with bodies external to MPEG.
10. HoD: (Heads of Delegations): acts in advisory capacity on matters of general nature.
appropriate audio and video decoders that produce the intended sequences of PCM samples
representing audio and video information.
MPEG-1, formally known as ISO/IEC 11172, is a standard in 5 parts. The first three parts are
Systems, Video and Audio, in that order. Two more parts complete the suite of MPEG-1 standards:
Conformance Testing, which specifies the methodology for verifying claims of conformance to
the standard by manufacturers of equipment and producers of bit streams, and Software
Simulation, a full C-language implementation of the MPEG-1 standard (encoder and decoder).
The implementations of the MPEG-1 standard have been manifold: from software implementations
running in real time on a consumer-grade PC of today, to single boards for PCs, to the so-called
Video CD etc. The last product has become a market success in some countries: in China alone
millions of Video CD decoders have already been sold. MPEG-1 content is used for such services
as DAB (Digital Audio Broadcasting) and is the standard format on the Internet for quality video.
The MPEG-4 representation of video represents a departure from the traditional digital signal
processing approach. This approach can be labelled as coding oriented towards compression based
on Fourier transforms or similar mathematical operations. The new approach is a step towards
using more "intelligent" image analysis and understanding in the whole process.
The following figure gives a general reference model for an MPEG-4 receiver. This is an extension
of the MHEG-enabled MPEG-2 model with the capability for a user to present the scene depending
on the view/hear point.
A thorough analysis of requirements has led the group to target a standard with the following
characteristics:
1. Is natural, synthetic or both;
2. Is real time and non-real time;
3. Supports different functionalities responding to user's needs;
4. Flows to and from different sources simultaneously;
5. Does not require the user to bother with the specifics of the communication channel, but
uses a technology that is aware of it.
6. Gives users the possibility to interact with the different information elements;
7. Lets the user present the result of his interaction with content in the way that suits his needs.
In fact, upon first testing, JPEG 2000 appeared to be a step backwards, producing significantly more visual
distortion at the same file sizes as classic JPEG. Upon further testing, it appears to be roughly
comparable, with JPEG 2000 producing a slightly different kind of distortion, characterized by
significant blurring.
JPEG 2000 is an image coding system that uses state-of-the-art compression techniques based on
wavelet technology. Its architecture lends itself to a wide range of uses from portable digital
cameras through to advanced pre-press, medical imaging and other key sectors.
JPEG 2000 refers to all parts of the standard. Below is the list of current parts that make up the
complete JPEG 2000 suite of standards.
At the core of the JPEG 2000 structure is a new wavelet based compression methodology that
provides for a number of benefits over the Discrete Cosine Transformation (DCT) compression
method, which was used in the JPEG format. The DCT-based scheme compresses an image in 8x8 blocks and
places them consecutively in the file. In this compression process, the blocks are compressed
individually, without reference to the adjoining blocks. This results in “blockiness” associated
with compressed JPEG files. With high levels of compression, only the most important
information is used to convey the essentials of the image. However, much of the subtlety that
makes for a pleasing, continuous image is lost.
In contrast, wavelet compression converts the image into a series of wavelets that can be stored
more efficiently than pixel blocks. Although wavelets also have rough edges, they are able to
render pictures better by eliminating the “blockiness” that is a common feature of DCT
compression. Not only does this make for smoother color toning and clearer edges where there
are sharp changes of color, it also gives smaller file sizes than a JPEG image with the same level
of compression.
This wavelet compression is accomplished through the use of the JPEG 2000 encoder, which is
pictured in Figure 4. This is similar to every other transform based coding scheme. The transform
is first applied on the source image data. The transform coefficients are then quantized and entropy
coded, before forming the output. The decoder is just the reverse of the encoder. Unlike other
coding schemes, JPEG 2000 can be both lossy and lossless. This depends on the wavelet transform
and the quantization applied.
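As a toy illustration of the transform-then-quantize shape of the encoder described above (real JPEG 2000 uses 2-D CDF 5/3 or 9/7 wavelets plus sophisticated entropy coding, not the one-level 1-D Haar transform sketched here):

```python
def haar_1d(signal):
    """One level of the 1-D Haar wavelet: pairwise averages (approximation)
    and pairwise differences (detail)."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diff = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, diff

def quantize(coeffs, step):
    """Uniform scalar quantization; a larger step discards more detail (lossy)."""
    return [round(c / step) for c in coeffs]

row = [10, 12, 100, 102, 50, 52, 20, 22]   # one row of illustrative pixel values
avg, diff = haar_1d(row)
print(avg)                 # [11.0, 101.0, 51.0, 21.0]
print(quantize(diff, 2))   # small details quantize toward zero
```

The approximation keeps the broad shape of the row while the small detail coefficients quantize to zero, which is where the space savings come from.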
Standard Approval:
The “Alternative Approval Process” (AAP) is a fast-track approval procedure that was developed
to allow standards to be brought to market in the timeframe that industry now demands. This
dramatic overhaul of standards-making by streamlining approval procedures was implemented in
2001 and is estimated to have cut the time involved in this critical aspect of the standardization
process by 80 to 90 per cent. This means that an average standard which took around four years to
approve and publish until the mid-nineties, and two years until 1997, can now be approved in an
average of two months, or as little as five weeks. The vast majority of standards are approved in
this way. Only those that have regulatory implications are not; they use what is called the traditional
approval process (TAP). Besides streamlining the underlying procedures involved in the approval
process, an important contributory factor to the use of AAP is electronic document handling.
5.1 Introduction................................................................................................57
5.2 Multimedia Networking: Goals and Challenges.............................................57
The real-time challenge ...............................................................................57
5.3 Multimedia over Internet ............................................................................57
5.4 PROTOCOLS FOR MULTIMEDIA ..................................................................58
5.4.1 RSVP --- Resource Reservation Protocol ................................................58
5.4.2 How does RSVP work? ...........................................................................58
5.4.3 RTP --- Real-time Transport Protocol .....................................................59
5.4.4 RTSP---Real-Time Streaming Protocol ...................................................59
5.5 AOIP AND VOIP ...........................................................................................60
5.5.1 Audio over IP.........................................................................................60
5.5.2 VOICE OVER IP .....................................................................................60
5.6 DSL LINES AND ITS TYPES..........................................................................61
5.6.1 Asymmetric DSL....................................................................................62
5.6.2 Symmetric DSL .....................................................................................62
5.6.3 ADSL/VDSL ...........................................................................................62
5.7 Multimedia across Wireless .........................................................................64
5.7.1 Speech transmission in GSM .................................................................64
5.7.2 Video across GSM ..................................................................................65
5.7.3 Mobile ATM ...........................................................................................65
5.7.4 Mobile IP ...............................................................................................66
5.8 Wireless multimedia delivery ......................................................................67
5.1 Introduction
The future Integrated Services Internet will provide means to transmit real-time multimedia data
across networks. RSVP, RTP, RTCP and RTSP are the foundation of real-time services. This
paper is a detailed survey of the four related protocols.
Computer networks were designed to connect computers in different locations so that they can
share data and communicate. In the old days, most of the data carried on networks was textual
data. Today, with the rise of multimedia and network technologies, multimedia has become an
indispensable feature on the Internet. Animation, voice and video clips become more and more
popular on the Internet. Multimedia networking products like Internet telephony, Internet TV,
video conferencing have appeared on the market. In the future, people would enjoy other
multimedia products in distance learning, distributed simulation, distributed work groups and
other areas.
For networkers, multimedia networking is to build the hardware and software infrastructure and
application tools to support multimedia transport on networks so that users can communicate in
multimedia. Multimedia networking will greatly boost the use of the computer as a
communication tool. We believe someday multimedia networks will replace telephone,
television and other inventions that had dramatically changed our life.
However, multimedia networking is not a trivial task. We can expect at least three difficulties.
First, compared with traditional textual applications, multimedia applications usually require
much higher bandwidth. A typical 25-second 320x240 QuickTime movie could take
2.3MB, which is equivalent to about 1000 screens of textual data. This was unimaginable in the old
days when only textual data was transmitted on the net.
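The 2.3 MB figure above corresponds to the following sustained bitrate:

```python
movie_mb, seconds = 2.3, 25          # the QuickTime example above
bitrate_kbps = movie_mb * 8 * 1000 / seconds   # MB -> kilobits, then per second

print(f"required bitrate: {bitrate_kbps:.0f} kbps")
# required bitrate: 736 kbps
```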
There are other ways to transmit multimedia data, like dedicated links, cables and ATM.
However, the idea of running multimedia over Internet is extremely attractive.
Dedicated links and cables are not practical because they require special installation and new
software. Without an existing technology like LAN or WAN, the software development would be
extremely expensive. ATM was said to be the ultimate solution for multimedia because it
supports very high bandwidth, is connection-oriented and can tailor different levels of quality of
service to different types of applications. But at this moment, very few users have ATM networks
reaching their organization, and even fewer have ATM connections to their desktops.
On the other hand, the Internet is growing exponentially. The well established LAN and WAN
technologies based on IP protocol suite connect bigger and bigger networks all over the world to
the Internet. In fact, Internet has become the platform of most networking activities. This is the
primary reason to develop multimedia protocols over Internet. Another benefit of running
multimedia over IP is that users can have integrated data and multimedia service over one single
network, without investing on another network hardware and building the interface between two
networks.
RSVP is the network control protocol that allows a data receiver to request a special end-to-end
quality of service for its data flows. Real-time applications use RSVP to reserve the necessary
resources at routers along the transmission paths so that the requested bandwidth can be available
when the transmission actually takes place. RSVP is a main component of the future Integrated
Services Internet, which can provide both best-effort and real-time service.
RSVP is used to set up reservations for network resources. When an application in a host (the
data stream receiver) requests a specific quality of service (QoS) for its data stream, it uses
RSVP to deliver its request to routers along the data stream paths. RSVP is responsible for the
negotiation of connection parameters with these routers. If the reservation is set up, RSVP is also
responsible for maintaining router and host states to provide the requested service.
Each node capable of resource reservation has several local procedures for reservation setup and
enforcement (see Figure 1). Policy control determines whether the user has administrative
permission to make the reservation. In the future, authentication, access control and accounting
for reservation will also be implemented by policy control. Admission control keeps track of the
system resources and determines whether the node has sufficient resources to supply the
requested QoS.
The RSVP daemon checks with both procedures. If either check fails, the RSVP program returns
an error notification to the application that originated the request. If both checks succeed, the
RSVP daemon sets parameters in the packet classifier and packet scheduler to obtain the
requested QoS. The packet classifier determines the QoS class for each packet and the packet
scheduler orders packet transmission to achieve the promised QoS for each stream.
The RSVP daemon also communicates with the routing process to determine the path along which
to send its reservation requests and to handle changing memberships and routes.
This reservation procedure is repeated at routers along the reverse data stream path until the
reservation merges with another reservation for the same source stream.
Reservations are implemented through two types of RSVP messages: PATH and RESV. The
PATH messages are sent periodically from the sender to the multicast address. A PATH message
contains a flow spec describing the sender template (data format, source address, source port) and
traffic characteristics. This information is used by receivers to find the reverse path to the sender
and to determine what resources should be reserved. Receivers must join the multicast group in
order to receive PATH messages.
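As a purely illustrative model (not the binary wire format defined in RFC 2205), the sender-template and traffic fields named above might be represented like this; the class and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SenderTemplate:
    # the three fields named in the text; real RSVP objects are binary-encoded
    data_format: str
    source_address: str
    source_port: int

@dataclass
class PathMessage:
    sender: SenderTemplate
    traffic_spec: dict     # traffic characteristics, e.g. peak/average rate

path = PathMessage(
    sender=SenderTemplate("video/MPEG", "192.0.2.10", 5004),
    traffic_spec={"peak_rate_kbps": 1500, "avg_rate_kbps": 800},
)
print(path.sender.source_port)   # 5004
```

Receivers use exactly this kind of information to construct the matching RESV message that travels back along the reverse path.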
Realtime transport protocol (RTP) is an IP-based protocol providing support for the transport of
real-time data such as video and audio streams. The services provided by RTP include time
reconstruction, loss detection, security and content identification. RTP is primarily designed for
multicast of real-time data, but it can also be used in unicast. It can be used for one-way transport
such as video-on-demand as well as for interactive services such as Internet telephony.
Instead of storing large multimedia files and playing them back, multimedia data is usually sent across
the network in streams. Streaming breaks data into packets with sizes suitable for transmission
between the servers and clients. The real-time data flows through the transmission,
decompression and playback pipeline just like a water stream. A client can play the first
packet and decompress the second while receiving the third. Thus the user can start enjoying the
multimedia without waiting for the end of the transmission.
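The pipeline can be sketched with Python generators, whose lazy evaluation mirrors the idea of playing one packet while later ones are still arriving; the stage functions here are hypothetical stand-ins for a real receiver and decoder:

```python
def receive(packets):
    for p in packets:           # stage 1: packets arrive from the network
        yield p

def decompress(stream):
    for p in stream:            # stage 2: decode each packet as it arrives
        yield p.lower()         # stand-in for a real decoder

def play(stream):
    played = []
    for frame in stream:        # stage 3: playback starts before transfer ends
        played.append(frame)
    return played

packets = ["FRAME1", "FRAME2", "FRAME3"]
print(play(decompress(receive(packets))))
# ['frame1', 'frame2', 'frame3']
```

Because generators pull one item at a time, each packet flows through all three stages before the next is requested, just like the water-stream analogy.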
Audio over IP (AoIP) is the distribution of digital audio across an IP network such as the
Internet. It is being used increasingly to provide high-quality audio feeds over long distances.
The application is also known as audio contribution over IP (ACIP) in reference to the
programming contributions made by field reporters and remote events. Audio quality and latency
are key issues for contribution links.
In the past, these links have made use of ISDN services but these are becoming increasingly
difficult or expensive to obtain in some parts of Europe and are being phased out in others. Many
proprietary systems came into existence for transporting high-quality audio over IP based on
TCP, UDP or RTP. An interoperable standard for audio over IP using RTP now exists.
Voice over Internet Protocol (Voice over IP, VoIP and IP telephony) is a methodology and group
of technologies for the delivery of voice communications and multimedia sessions over Internet
Protocol (IP) networks, such as the Internet. The terms Internet telephony, broadband telephony,
and broadband phone service specifically refer to the provisioning of communications services
(voice, fax, SMS, voice-messaging) over the public Internet, rather than via the public switched
telephone network (PSTN).
FIG. Understanding the terms is a first step toward learning the potential of this technology
*VoIP refers to a way to carry phone calls over an IP data network, whether on the Internet or
your own internal network. A primary attraction of VoIP is its ability to help reduce expenses
because telephone calls travel over the data network rather than the phone company's network.
*IP telephony encompasses the full suite of VoIP enabled services including the interconnection
of phones for communications; related services such as billing and dialing plans; and basic
features such as conferencing, transfer, forward, and hold. These services might previously have
been provided by a PBX.
Asymmetric types of DSL connections provide more network bandwidth for downloading (from
the Internet service provider down to the subscriber's computer) than for uploading (in the other
direction). By reducing the amount of bandwidth available upstream, service providers are able
to offer relatively more bandwidth downstream.
Asymmetric DSL technology is popular in residential DSL services as home Internet users
predominately use downstream bandwidth. Typical asymmetric DSL services support 5 Mbps for
downloads and 1 Mbps for uploads.
Symmetric types of DSL connections provide equal bandwidth for both uploads and downloads.
Symmetric DSL technology is popular for business-class DSL services as companies often have
greater needs for transferring data. Typical symmetric DSL connections support 1.5 Mbps for
downloads and uploads.
5.6.3 ADSL/VDSL
DSL delivers broadband to more people today than any other technology. Roughly two-thirds of
all broadband subscribers are DSL subscribers, and there are more new DSL subscribers each
month than new subscribers for all other broadband access technologies combined.
DSL, which stands for Digital Subscriber Line, is a technology that delivers broadband speeds
over distances of miles or kilometers via copper wiring. DSL was originally delivered over the
same wires that are used to provide traditional voice telephony services. These wires run from a
telephone company’s Central Office (CO), the location where voice switching and other
traditional telephony functions are performed, to a subscriber’s home or business. Increasingly,
DSL is delivered from a device situated closer to the subscriber’s home or business that is
connected to a CO via an optical fiber link, and then to the subscriber’s premises via copper
wires. In all cases, however, DSL delivers broadband over the copper connections that exist
already in almost every residence and business in the developing and developed worlds.
This architecture is depicted in the figure below. At the CO, or at a remote location typically
connected to the CO via fiber optics, there is a DSL Access Multiplexer (DSLAM) that sends
and receives broadband data to many subscribers via DSL technology. At each subscriber’s
location, there is a modem (modulator-demodulator) that communicates with the DSLAM to
send and receive that subscriber’s broadband data to and from the Internet and other networks. A
DSLAM communicates with many individual subscriber modems. Each subscriber’s modem is
dedicated to that subscriber’s broadband connection.
Voice services utilize only a small fraction of the total information carrying capacity of copper
connections. In an analogous manner to Ethernet technology, which can transmit a Gigabit (more
than one billion bits) per second of data over copper connections or the equivalent of tens of
thousands of simultaneous phone conversations, DSL exploits the information carrying capacity
of copper lines to deliver broadband services over long distances.
To engineers, “DSL” means a set of formal standards for communicating broadband signals over
copper lines. It also means equipment that complies with those standards. The principal DSL
standards are published by the International Telecommunication Union (ITU), a standards body
based in Geneva, Switzerland.
DSL standards have evolved significantly since the first DSL standards were established in the
early 1990s. They have evolved to support higher data rates, to take advantage of
advances in equipment technologies, and to ensure that DSL can coexist on copper lines with
other communications standards such as Integrated Services Digital Network (ISDN), an early
digital voice and data service that is still in use in many countries. The table below lists some of
the principal DSL standards in use today.
Speech transmission in GSM basically involves modulation and demodulation processes. The
GSM digital speech compression process works by grouping the digital audio signals into
20msec speech frames. These speech frames are analyzed and characterized (e.g. volume, pitch)
by the speech coder. The speech coder removes redundancy in the digital signal (such as silence
periods) and characterizes digital patterns that can be made by the human voice using code book
tables. The code book table codes are transmitted instead of the original digitized audio signal.
This results in the transmission of a 13 kbps compressed digital audio instead of the 64 kbps
digitized audio signal.
The encoding and decoding process includes six levels: speech coding, channel coding,
interleaving, burst formatting, ciphering and modulation. This figure shows the basic speech data
compression process used for the GSM speech coder. This diagram shows that the analog voice
signal is sampled 8,000 times each second and digitized into a 64 kbps digital signal. The
digitized signal is grouped into 20msec speech frames. The speech frames are analyzed and
compressed into a new 13 kbps digital signal.
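The numbers above can be checked directly: per 20 ms frame, 1280 bits of PCM enter the coder and only 260 coded bits leave it.

```python
sample_rate = 8000          # samples per second
bits_per_sample = 8         # PCM word size
frame_ms = 20               # GSM speech frame duration
coded_kbps = 13             # GSM full-rate coder output

pcm_kbps = sample_rate * bits_per_sample / 1000              # digitized audio rate
bits_in = sample_rate * bits_per_sample * frame_ms // 1000   # PCM bits per frame
bits_out = coded_kbps * frame_ms                             # coded bits per frame

print(pcm_kbps, bits_in, bits_out)
# 64.0 1280 260
```

The coder thus removes roughly 80% of the bits in every frame by exploiting redundancy in the speech signal.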
The set of design functions for control/signaling is called mobile ATM. In
WATM networks, a mobile end-user establishes a VC to communicate with another user, either a
mobile or an ATM end-user. When the mobile end-user moves from one AP to another AP,
proper handover is required. To minimize the interruption to cell transport, an efficient switching
of the active VC from the old data path to new data path is needed. Also, the switching should be
fast enough to make the new VCs available to the mobile users. During the handover, an old path
is released and a new path is then reestablished. In this case, no cell is lost, and cell sequence is
preserved. Cell buffering consists of uplink buffering and downlink buffering. If the VC is broken
while the mobile user is sending cells to APs, uplink buffering is required. The mobile user
will buffer all the outgoing cells. When the connection is up again, it sends out all the buffered cells so
that no cells are lost unless the buffer overflows. Downlink buffering is performed by APs to
preserve the downlink cells during sudden link interruptions, congestion or retransmissions. It may
also occur when the handover is executed.
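A minimal sketch of uplink buffering during handover, assuming a simple in-order queue; the class and method names here are hypothetical, not part of any WATM specification:

```python
from collections import deque

class MobileUplink:
    """Illustrative model of uplink cell buffering during a WATM handover."""
    def __init__(self, max_cells):
        self.buffer = deque(maxlen=max_cells)  # on overflow, oldest cells are dropped
        self.connected = True
        self.sent = []

    def send(self, cell):
        if self.connected:
            self.sent.append(cell)
        else:
            self.buffer.append(cell)   # VC broken: hold cells until the new path is up

    def handover_complete(self):
        self.connected = True
        while self.buffer:             # flush in order, preserving cell sequence
            self.sent.append(self.buffer.popleft())

link = MobileUplink(max_cells=100)
link.connected = False                 # handover in progress
for c in ["c1", "c2", "c3"]:
    link.send(c)
link.handover_complete()
print(link.sent)                       # ['c1', 'c2', 'c3']
```

No cells are lost and cell order is preserved, exactly the two guarantees the text demands, unless the buffer overflows.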
When a connection is established between a mobile ATM endpoint and another ATM endpoint,
the mobile ATM endpoint needs to be located. There are two basic location management
schemes: the mobile scheme and the location register scheme. In the mobile scheme, when a
user moves, the reachability update information propagates only to the nodes in a limited region.
When a call is originated by a switch in this region, the switch can use the location information
to establish the connection directly.
5.7.4 Mobile IP
The evolution of mobile networking will differ from that of telephony in some important
respects. The end points of a telephone connection are typically human, whereas computer
applications are likely to involve interconnections between machines without human
intervention. Obvious examples of this are mobile computing devices on airplanes, ships and
automobiles. Mobile networking may well also come to depend on position-finding devices,
such as a satellite global positioning system, to work in tandem with wireless access to the
Internet. There are still some technical obstacles that must be overcome before mobile
networking can become widespread. The most fundamental concerns IP, the protocol that
connects the networks of today's Internet and routes packets to their destinations according to IP
addresses. These addresses are associated with a fixed network location, much as a non-mobile
phone number is associated with a physical jack in a wall. When the packet's destination is a
mobile node, each new point of attachment made by the node is associated with a new network
number and hence a new IP address. Mobile IP is a standard protocol that builds on IP by
making mobility transparent to applications and higher-level protocols such as TCP.
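The core idea, a binding between the permanent home address and the current care-of address, with the home agent tunneling packets to the latter, can be sketched as follows. This is our own minimal illustration of the concept, not the RFC message formats:

```python
# Minimal sketch of Mobile IP forwarding: the home agent keeps a binding
# from each mobile node's permanent (home) address to its current
# care-of address and tunnels packets accordingly (IP-in-IP in real
# Mobile IP). Dicts stand in for packets; all names are illustrative.

bindings = {}   # home address -> care-of address

def register(home_addr, care_of_addr):
    """Mobile node (via a foreign agent) registers its new location."""
    bindings[home_addr] = care_of_addr

def forward(packet):
    """Home agent: if the destination is away from home, encapsulate
    the packet toward the care-of address; otherwise deliver normally."""
    dst = packet["dst"]
    if dst in bindings:
        return {"outer_dst": bindings[dst], "inner": packet}
    return packet   # node is at home

register("10.0.0.5", "192.0.2.77")   # node moved to a foreign network
out = forward({"src": "10.1.1.1", "dst": "10.0.0.5", "data": "hello"})
print(out["outer_dst"])   # 192.0.2.77
```

The correspondent still addresses the node's home address; only the home agent needs to know the current point of attachment, which is what makes mobility transparent to TCP and the applications above it.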
Mobile IP discovery does not modify the original fields of existing router advertisements, but
simply extends them with mobility functions. When the router advertisements also contain the
needed care-of address, they are known as agent advertisements; these are the means by which a
mobility agent becomes known to the mobile node. Home agents and foreign agents typically
broadcast agent advertisements at regular intervals, for example, once a second or once every
few seconds. An agent advertisement performs the following functions:
• Allows for the detection of mobility agents
• Lists one or more available care-of addresses
• Informs the mobile node about special features provided by foreign agents, for example,
alternative encapsulation techniques
• Lets mobile nodes determine the network number and status of their link to the Internet
• Lets the mobile node know whether the agent is a home agent, a foreign agent or both,
and therefore whether it is on its home network or a foreign network
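On the wire, the mobility information rides in an extension appended to the ICMP router advertisement. The sketch below parses that extension using the field layout from RFC 5944 (type 16: sequence number, registration lifetime, a flags byte whose H and F bits mark home/foreign agent, then zero or more care-of addresses); the parser itself is just an illustration:

```python
import struct

def parse_mobility_agent_extension(data: bytes):
    """Parse a Mobility Agent Advertisement Extension (layout per
    RFC 5944): type=16, length=6+4N, sequence, lifetime, flags,
    reserved, then N 4-byte care-of addresses."""
    ext_type, length, seq, lifetime, flags = struct.unpack_from("!BBHHB", data, 0)
    assert ext_type == 16, "not a mobility agent advertisement extension"
    n_addrs = (length - 6) // 4
    care_of = []
    for i in range(n_addrs):
        addr = data[8 + 4 * i : 12 + 4 * i]
        care_of.append(".".join(str(b) for b in addr))
    return {
        "sequence": seq,
        "lifetime": lifetime,
        "home_agent": bool(flags & 0x20),     # H bit
        "foreign_agent": bool(flags & 0x10),  # F bit
        "care_of_addresses": care_of,
    }

# Example: one care-of address 192.0.2.9, H and F bits both set
# (the agent serves as home agent and foreign agent).
raw = struct.pack("!BBHHBB4B", 16, 10, 1, 1800, 0x30, 0, 192, 0, 2, 9)
info = parse_mobility_agent_extension(raw)
print(info["care_of_addresses"])   # ['192.0.2.9']
```

A mobile node receiving this learns the available care-of address and, from the flag bits, whether it is on its home network or a foreign one.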
Achieving such consensus will accelerate the market for multimedia content and services. For
example, having a common technology framework for wireless multimedia delivery will reduce
the number of multimedia platforms that content providers will have to support, hastening their
time to market with new content and services and stepping up the pace at which their content can
reach a broad, far-flung user audience.
New mobile services could include the delivery of news, weather, stock and sports updates to
mobile users. With new streaming wireless services, traveling parents could receive clips of a
child's soccer game or performance in the school play. Geographic location services could be
combined with dating services, whereby handheld users could receive a multimedia profile of a
dating service candidate who lives in the geographic ballpark of the user's location. Children in
Japan are already using cell phones to send animated multimedia greetings to one another, and
interactive games that could be streamed among participating users across wireless networks are
in development in companies across the globe.
RTFD Version 1.0 explicitly addresses the streaming multimedia (SMM) application, which
includes both on-demand and live streaming using voice and video as the primary media types.
The components of an SMM system include the following:
Instructions:
1. Attempt all questions.
2. Make Suitable assumptions wherever necessary.
3. Figures to the right indicate full marks.
4. Use of programmable & communication aids is strictly prohibited.
5. Use of only a simple calculator is permitted in Mathematics.
6. English version is authentic.
Q.1 Answer any seven out of ten. 14
1. Write the speech frequency range of humans.
2. Write the audio frequency range which humans can hear.
3. Write the full forms of these: MPEG, DTH
4. Write the full forms of these: DVB, VoIP
5. Write different types of switching systems.
6. Write at least four digital audio and image file formats.
7. Draw a block diagram of the elements of multimedia systems used in person-to-person communication.
8. Draw a block diagram of the elements of multimedia systems used in person-to-machine communication.
9. Find the bandwidth requirement in kilobits per second for telephone-quality audio, i.e. 8 kHz sampling rate, 8-bit quantization and 1 channel (mono).
10. Write four advanced features of JPEG 2000.
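Question 9 above is a direct bit-rate computation; a quick check (our own sketch):

```python
# Telephone-quality audio: 8 kHz sampling, 8-bit quantization, mono.
sampling_rate_hz = 8_000
bits_per_sample = 8
channels = 1

bit_rate_bps = sampling_rate_hz * bits_per_sample * channels
print(bit_rate_bps / 1000, "kbps")   # 64.0 kbps
```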
(b) Define these: Erlang, Tandem Exchange, GOS 03
(c) Draw a block diagram of an Earth station for a satellite system and explain in brief. 04
OR
(c) Explain the operation of the standby mode of centralized stored program control. 04
(d) Compare single-stage and two-stage networks. 04
OR
(d) Explain in brief the user requirements of multimedia communication. 04
Q.3 (a) Define these: busy hour and busy hour call attempts. 03
OR
(a) Explain in brief the network requirements of multimedia communication. 03
(b) State Kepler's three laws of satellite motion in orbits. 03
OR
(b) Draw a typical telephone traffic pattern of a working day of an urban exchange. 03
(c) Explain the transmission channels of ISDN in brief. 04
OR
(c) Explain the subscriber loop system using cable hierarchy. 04
(d) Explain broadband ISDN in brief. 04
OR
(d) Draw a block diagram of a stored program control exchange. 04
Q.5 (a) Explain the main features of a distributed multimedia system. 07
(b) Draw and explain digital Internet delivery using DTH. 07
(c) Draw a block diagram of a digital audio signal processing system. 03
(d) Find the bandwidth requirement in megabits per second for CD-quality audio, i.e. 44.1 kHz sampling rate, 16-bit quantization and 2-channel stereo audio reception. 03
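Question (d) above follows the same sampling-rate arithmetic; a quick check (our own sketch):

```python
# CD-quality audio: 44.1 kHz sampling, 16-bit quantization, stereo.
sampling_rate_hz = 44_100
bits_per_sample = 16
channels = 2

bit_rate_bps = sampling_rate_hz * bits_per_sample * channels
print(bit_rate_bps / 1_000_000, "Mbps")   # 1.4112 Mbps
```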
************
Seat No.: ________ Enrolment No.______________
Q.1 Answer any seven out of ten. 14
1. Draw two signaling tone waveforms used for telephony.
2. Define the terms: a) CCR b) BHCA
3. Draw a typical telephone traffic pattern of a working day of an urban exchange.
4. Compare two parameters of LEO and GEO.
5. State Kepler's first law of orbital motion of a satellite with a diagram.
6. List the transmission channels in ISDN with their data rates.
7. List four ISDN services.
8. List two audio and image file formats.
9. Write four main features of a distributed multimedia system.
10. Write the full forms of MPEG and JPEG.
Q.2 (a) Draw the block diagram of a satellite spacecraft system. 03
OR
(a) Explain the indoor unit of a direct broadcast satellite. 03
(b) Draw a block diagram of an MPEG-1 audio/video decoder. 03
OR
(b) Describe digital audio signal processing with a block diagram. 03
(c) Describe the block diagram of an earth station. 04
OR
(c) Describe the geosynchronous orbit in satellite communication. 04
(d) A subscriber makes three phone calls during an hour. The durations of these calls are 3 minutes, 4 minutes and 2 minutes. Find the subscriber traffic in Erlangs, Call Minutes (CM) and Centum Call Seconds (CCS). 04
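This question is a standard traffic calculation: traffic in Erlangs is total call time divided by the observation period, CM is the total call minutes, and CCS is the total call seconds divided by 100. A quick check (our own sketch):

```python
# Q.2 (d): three calls of 3, 4 and 2 minutes within one hour.
durations_min = [3, 4, 2]
period_min = 60

call_minutes = sum(durations_min)            # 9 CM
traffic_erlang = call_minutes / period_min   # 9/60 = 0.15 E
ccs = call_minutes * 60 / 100                # 540 s / 100 = 5.4 CCS

print(traffic_erlang, call_minutes, ccs)     # 0.15 9 5.4
```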
OR
(d) Compare in-channel signaling with common channel signaling. 04
Q.3 (a) Draw the ISDN protocol architecture showing bearer and tele-services. 03
OR
(a) Describe the E-mail service in ISDN. 03
(b) Describe the block diagram of a distributed multimedia system (DMS). 03
OR
(b) Explain the general protocol stack of H-series audiovisual communication. 03
(c) Explain the VOD distributed multimedia application. 04
OR
(c) Describe the various digital media used for multimedia communication. 04
(d) Explain digital facsimile service in ISDN. 04
(a) Describe video transmission across IP networks. 03
(b) Explain the mobile video encoder across GSM. 04
OR
(b) Explain a wireless multimedia network system. 04
(c) Explain the operations of the synchronous duplex mode and load sharing mode of stored program control. 07
Q.5 (a) Explain the subscriber loop system using cable hierarchy. 04
(b) Describe data transmission using MPEG-2 and DVB. 04
(c) Draw a satellite-based digital TV broadcast system. 03
(d) Draw the hierarchical structure of a switching system. 03
************
Seat No.: ________ Enrolment No.______________
Q.1 Answer any seven out of ten. 14
1. Which are the different signals provided by a switching system?
2. Differentiate GOS and blocking probability.
3. State an application of MEO.
4. Which antennas are used for spacecraft?
5. What is interworking?
6. What is Digital Video Broadcasting?
7. Which are the different multimedia standards?
8. Define distributed SPC.
9. Which are the different ISDN standards?
10. Define Tandem Exchange.
Q.2 (a) Draw different signaling tone waveforms used for telephony. 03
OR
(a) Explain the synchronous duplex mode of centralized SPC. 03
OR
(b) Draw a schematic of a switching system showing the logical connections between the different elements of the switching system. 03
(c) Explain the multimedia communication model. 04
OR
(c) Which are the different techniques to reduce storage requirements? 04
(d) Which are the main features of DMS? 04
OR
(d) Explain the applications of distributed multimedia. 04
(b) Explain MPEG-4 coding of audio-visual objects. 04
OR
(b) Explain speech and video transmission across GSM. 04
(c) Explain video transmission across an IP network. 07
************