
1

Video Delivery Techniques


2
Server Channels
Videos are delivered to clients as a continuous stream.
Server bandwidth determines the number of video streams that can be supported simultaneously.
Server bandwidth can be organized and
managed as a collection of logical channels.
These channels can be scheduled to deliver
various videos.
3
Using Dedicated Channel
[Figure: a video server delivering a dedicated stream to each client — too expensive!]
4
Video on Demand Quiz
1. Video-on-demand technology has many applications:
Electronic commerce
Digital libraries
Distance learning
News on demand
Entertainment
All of these applications
2. Broadcast can be used to substantially reduce the demand on server bandwidth. True or False?
3. Broadcast cannot deliver videos on demand. True or False?
5
Push Technologies
Broadcast technologies can deliver videos on demand.
The requirement on server bandwidth is independent of the number of users the system is designed to support.
If your answer to Question 3 was True, you are wrong:
broadcast is less expensive & more scalable !!
6
Simple Periodic Broadcast
Staggered Broadcast Protocol
A new stream is started every interval for each video.
The worst service latency is the broadcast period W.
[Figure: video i staggered on channels 1-4 and video j on channels 5-7; successive streams of the same video are offset by W.]
W = L/N, where L is the video length and N is the number of channels per video (here W = L/4).
7
Simple Periodic Broadcast

A new stream is started every
interval for each video.

The worst service latency is the
broadcast interval.
Advantage: The bandwidth requirement
is proportional to the number of videos
(not the number of users.)
Can we do better ?
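The linear trade-off above follows directly from W = L/N; a minimal sketch (the video length and channel counts are illustrative values, not from any specific system):

```python
def staggered_worst_latency(video_minutes: float, channels: int) -> float:
    """Worst-case service latency of staggered broadcast: W = L / N."""
    return video_minutes / channels

# A 120-minute video on 4 dedicated channels: a new stream starts
# every 30 minutes, so a client waits at most 30 minutes.
print(staggered_worst_latency(120, 4))   # 30.0
# Halving the latency requires doubling the channels (linear scaling):
print(staggered_worst_latency(120, 8))   # 15.0
```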
8
Limitation of
Simple Periodic Broadcast
Access latency can be improved only
linearly with increases to the server
bandwidth.
Substantial improvement can be
achieved if we allow the client to
preload data

9
Pyramid Broadcasting Segmentation
[Viswanathan95]
Each data segment D_i is made α times the size of D_{i-1}, for all i.
α = B/(M K), where B is the system bandwidth, M is the number of videos, and K is the number of server channels. α_opt = 2.72 (Euler's number e).
[Figure: channel i cycles through the i-th segments of the M videos; segment sizes increase geometrically, so the broadcast interval of channel i grows with i, and the same fragment of video 1 recurs once per interval.]
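A minimal sketch of the geometric segmentation (the 120-minute length and the ratio of 2 are illustrative values, not Pyramid's optimal α):

```python
def pyramid_segments(total_size: float, alpha: float, k: int) -> list:
    """Split a video into k segments where segment i is alpha times segment i-1."""
    # First segment size d1 satisfies d1 * (alpha**k - 1) / (alpha - 1) = total_size.
    d1 = total_size * (alpha - 1) / (alpha**k - 1)
    return [d1 * alpha**i for i in range(k)]

# A 120-minute video, alpha = 2, K = 4 channels -> segments of 8, 16, 32, 64 min.
sizes = pyramid_segments(120, 2, 4)
print([round(s) for s in sizes])  # [8, 16, 32, 64]
# Service latency is bounded by the broadcast period of the first (smallest) segment.
```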

10
Pyramid Broadcasting
Download & Playback Strategy
Server bandwidth is evenly divided among
the channels, each much faster than the
playback rate.
Client software has two loaders:
Begin downloading the first data segment at the
first occurrence, and start consuming it
concurrently.

Download the next data segment at the earliest
possible time after beginning to consume the
current data segment.
11
Disadvantages of Pyramid
Broadcasting
The channel bandwidth is substantially
larger than the playback rate
Huge storage space is required to
buffer the preloaded data
It requires substantial client
bandwidth
Client bandwidth is typically the most expensive component of a VOD system
12
Permutation-Based Pyramid
Broadcasting (PPB) [Aggarwal96]
PPB further partitions each logical
channel in PB scheme into P
subchannels.
A replica of each video fragment is
broadcast on P different subchannels
with a uniform phase delay.
[Figure: on subchannel C_i, fragments of videos V_1 and V_2 alternate; a client begins downloading a fragment, pauses to allow the playback to catch up, then resumes downloading.]
13
Advantages and
Disadvantages of PPB
Requirement on client bandwidth is
substantially less than in PB
Storage requirement is also reduced
significantly (about 50% of the video
size)
The synchronization is difficult to
implement since the client needs to tune
to an appropriate point within a
broadcast
14
Skyscraper Broadcasting [Hua97]
Each video is fragmented into K segments, each repeatedly broadcast on a dedicated channel at the playback rate.
The sizes of the K segments follow the pattern:
[1, 2, 2, 5, 5, 12, 12, 25, 25, ..., W, W, ..., W]
Sizes of the larger segments are constrained to W (the width of the skyscraper).
[Figure: segment size vs. segment number — the skyscraper profile, with segments alternating between an odd group and an even group. Service latency ~ length of the first segment.]
15
Generating Function
The broadcast series is generated using the following recursive function:
f(n) = 1              if n = 1,
       2              if n = 2 or 3,
       2 f(n-1) + 1   if n mod 4 = 0,
       f(n-1)         if n mod 4 = 1,
       2 f(n-1) + 2   if n mod 4 = 2,
       f(n-1)         if n mod 4 = 3.
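The recurrence can be sketched directly; capping at the skyscraper width W reproduces the series quoted earlier (the default cap of 52 is illustrative):

```python
def skyscraper_series(k: int, w: int = 52) -> list:
    """Skyscraper broadcast series f(1..k), capped at width w."""
    f = [1]
    for n in range(2, k + 1):
        if n in (2, 3):
            v = 2
        elif n % 4 == 0:
            v = 2 * f[-1] + 1
        elif n % 4 == 2:
            v = 2 * f[-1] + 2
        else:  # n mod 4 == 1 or 3: repeat the previous size
            v = f[-1]
        f.append(min(v, w))
    return f

print(skyscraper_series(11))  # [1, 2, 2, 5, 5, 12, 12, 25, 25, 52, 52]
```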
16
Skyscraper Broadcasting
Playback Procedure
The Odd Loader and the Even Loader download the odd groups and the even groups, respectively.
The W-segments are downloaded sequentially using only one loader.
As the loaders fill the buffer, the Video Player consumes the data in the buffer.
17
Advantages of Skyscraper
Broadcasting
Since the first segment is very short,
service latency is excellent.
Since the W-segments are
downloaded sequentially, buffer
requirement is minimal.
[Figure: segment-size growth — Pyramid (exponential) vs. Skyscraper (capped at W).]
18
SB Example
Blue clients share the 2nd and 3rd fragments, and the 6th, 7th, and 8th fragments, with Red clients.
[Figure: eight segments; two client groups arriving at different times share broadcasts of the later segments.]
19
[Figure: Skyscraper playback schedule — Loader 1 and Loader 2 pull segments 2 through 16 from the 16 broadcast channels into the client buffer, which the Video Player consumes.]
Another Approach
20
CCA Broadcasting
Server broadcasts each segment at the playback rate.
Clients use c loaders.
Each loader downloads its streams sequentially, e.g., the i-th loader is responsible for segments i, i+c, i+2c, i+3c, ...
Only one loader is used to download all the equal-size W-segments sequentially.
[Figure: channels 1 to K carry the segments, partitioned into Groups 1, 2, ..., i; W is the size of the largest segments. C = 3 (clients have three loaders).]
The broadcast series is generated by:
f(n) = 1          if n = 1,
       2 f(n-1)   if n mod c = 1,
       f(n-1)     if n mod c != 1.
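The CCA series can be generated as follows (a sketch: the recurrence — segment size doubles whenever n mod c = 1 — is reconstructed from the garbled formula on this slide, and the parameters below are illustrative):

```python
def cca_series(k: int, c: int, w=None) -> list:
    """CCA broadcast series: groups of c equal-size segments, doubling per group."""
    f = [1]
    for n in range(2, k + 1):
        v = 2 * f[-1] if n % c == 1 else f[-1]
        f.append(v if w is None else min(v, w))
    return f

print(cca_series(6, 2))  # [1, 1, 2, 2, 4, 4]  -- groups of 2 with c = 2 loaders
print(cca_series(7, 3))  # [1, 1, 1, 2, 2, 2, 4]  -- groups of 3 with c = 3 loaders
```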
21
Advantages of CCA
It has the advantages of Skyscraper
Broadcasting.
It can leverage client bandwidth to
improve performance.

22
Cautious Harmonic Broadcasting
(Segmentation Design)
A video is partitioned into n equally-sized segments.
The first channel repeatedly broadcasts the first segment S1 at the playback rate.
The second channel alternately broadcasts S2 and S3 repeatedly at the playback rate.
Each of the remaining segments S_i (i >= 4) is repeatedly broadcast on its dedicated channel at 1/(i-1) the playback rate.

23
Cautious Harmonic
Broadcasting
(Playback Strategy)
The client can start the playback as soon as
it can download the first segment.
Once the client starts receiving the first
segment, the client will also start receiving
every other segment.
24
Cautious Harmonic
Broadcasting
Advantage: Better than SB in terms of service
latency.
Disadvantage: Requires about three times more
receiving bandwidth compared to SB.
Implementation Problem:
The client must receive data from many channels
simultaneously (e.g., 240 channels are required for
a 2-hour video if the desired latency is 30
seconds).
No practical storage subsystem can move its read heads fast enough to multiplex among so many concurrent streams.
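The channel count above follows from the segmentation: a 2-hour video with a 30-second first segment needs 7200/30 = 240 segments, and the total bandwidth is a harmonic sum. A minimal sketch:

```python
def cautious_harmonic(video_seconds: float, latency_seconds: float):
    """Channel count and total bandwidth (in units of the playback rate)
    for Cautious Harmonic Broadcasting."""
    n = int(video_seconds // latency_seconds)      # equally-sized segments
    # Channel 1 carries S1 at full rate; channel 2 alternates S2 and S3 at
    # full rate; each segment S_i (i >= 4) gets a channel at rate 1/(i-1).
    bandwidth = 1 + 1 + sum(1 / (i - 1) for i in range(4, n + 1))
    channels = n - 1   # S2 and S3 share one channel
    return channels, bandwidth

channels, bw = cautious_harmonic(7200, 30)
print(channels)        # 239 channels (~240, as quoted)
print(round(bw, 2))    # total bandwidth ~6.56x the playback rate
```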
25
Pagoda Broadcasting
Download and Playback Strategy
Each channel broadcasts data at the
playback rate
The client receives data from all channels
simultaneously.
It starts the playback as soon as it can
download the first segment.


26
Pagoda Broadcasting
Advantage & Disadvantage
Advantage: Required server bandwidth is low
compared to Skyscraper Broadcasting
Disadvantage: Required client bandwidth is many
times higher than Skyscraper Broadcasting
Achieving a maximum delay of 138 seconds for
a 2-hour video requires each client to have a
bandwidth five times the playback rate, e.g.,
approximately 20 Mbps for MPEG-2
System cost is significantly higher


27
New Pagoda Broadcasting
[Paris99]
New Pagoda Broadcasting improves on
the original Pagoda Broadcasting.
Required client bandwidth remains
very high
Example: Achieving a maximum delay of 110
seconds for a 2-hour video requires each client to
have a bandwidth five times the playback rate.
Approximately 20 Mbps for MPEG-2
System cost is very expensive


28
Limitations of Periodic
Broadcast
Periodic broadcast is only good for
very popular videos
It is not suitable for a changing
workload
It can only offer near-on-demand
services
29
Batching
FCFS: serve the waiting queues in First-Come-First-Served order.
MQL (Maximum Queue Length First): serve the video with the longest waiting queue.
MFQ (Maximum Factored Queue Length First): serve the video with the largest queue length factored by its access frequency.
[Figure: a new request joins the waiting queue of its video; when resources become available, the scheduling policy picks which queue to serve.]
Still only near VoD! Can multicast provide true VoD?
30
Current Hybrid Approaches
FCFS-n: First Come First Served for unpopular videos; n channels are reserved for popular videos.
MQL-n: Maximum Queue Length policy for unpopular videos; n channels are reserved for popular videos.
Performance is limited.

31
New Hybrid Approach
Periodic Broadcast + Scheduled Multicast:
the Skyscraper Broadcasting scheme (SB) for periodic broadcast, plus Largest Aggregated Waiting Time First (LAW) for scheduled multicast.
32
LAW
(Largest Aggregated Waiting Time First)
MFQ tends toward MQL, losing fairness: when the access frequencies are nearly equal (f1 ~ f2 ~ f3 ~ f4 ~ ...), the factored queue lengths reduce to the plain queue lengths q1, q2, q3, q4, ...
Whenever a stream becomes available, schedule the video with the maximum value of S_i:
S_i = c * m - (a_i1 + a_i2 + ... + a_im),
where c is the current time,
m is the total number of requests for video i, and
a_ij is the arrival time of the j-th request for video i.
(S_i is the sum of each request's waiting time in the queue.)

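The LAW selection rule can be sketched directly; the arrival times below reproduce the worked example on the next slide:

```python
def law_score(current_time: float, arrivals: list) -> float:
    """Aggregated waiting time S_i = c*m - sum of arrival times."""
    return current_time * len(arrivals) - sum(arrivals)

def schedule(current_time, queues):
    """Pick the video whose queue has the largest aggregated waiting time."""
    return max(queues, key=lambda v: law_score(current_time, queues[v]))

queues = {1: [107, 111, 115, 121, 126], 2: [112, 119, 122, 127]}
print(law_score(128, queues[1]))  # 60
print(law_score(128, queues[2]))  # 32
print(schedule(128, queues))      # 1  (LAW selects video 1)
```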
33
LAW (Example)
Requests for video 1 (last multicast at 106, current time 128): arrivals at 107, 111, 115, 121, 126.
Requests for video 2 (last multicast at 100, current time 128): arrivals at 112, 119, 122, 127.
By MFQ: q1 * dt1 = 5 * (128 - 106) = 110 and q2 * dt2 = 4 * (128 - 100) = 112, so video 2 is selected.
But the average waiting times are 12 and 8 time units, respectively.
By LAW: S1 = 128 * 5 - (107 + 111 + 115 + 121 + 126) = 60 (selected); S2 = 128 * 4 - (112 + 119 + 122 + 127) = 32.
34
AHA (Adaptive Hybrid Approach)
Popularity is re-evaluated periodically.
If a video is popular and currently broadcast by SB, Case 1 applies; otherwise, Case 2.
Case 1 (currently broadcast by SB): Is the video still popular? If yes, continue the broadcast. If no, terminate the SB broadcast after all the dependent playbacks end, return the channels to the channel pool, and mark the waiting queue as an LAW queue.
Case 2 (currently served by LAW): Is the video now popular? If no, keep the waiting queue as an LAW queue. If yes, and K channels are available, initiate the SB broadcast (the video is assumed to require K logical channels).
35
Performance Model
100 videos (120 min. each).
Client behavior follows
- the Zipf distribution (z = 0.1 ~ 0.9) for the choice of videos,
- the Poisson distribution for arrival times,
- popularity changing gradually every 5 min. in the dynamic environment,
- for waiting-time tolerance, mean = 5 min., s = 1 min.
Performance Metrics
- Defection Rate,
- Average access latency,
- Fairness, and
- Throughput.
36
LAW vs. MFQ
[Plots: Unfairness for MFQ vs. LAW while varying (a) request arrival rate (5-30 requests/min.), (b) server capacity (300-900 channels), and (c) the skew factor z (0.1-0.5).]
37
AHA vs. MFQ-SB-n
[Plots: Average Latency (min.), Throughput, Defection Rate (%), and Unfairness for MFQ-SB-n vs. AHA while varying server capacity (600-1,800 channels).]
38
Challenges: conflicting goals
Low Latency: requests must be served immediately.
Highly Efficient: each multicast must still be able to serve a large number of clients.
39
Some Solutions
Application level:
Piggybacking
Patching
Chaining
Network level:
Caching Multicast Protocol
(Range Multicast)
40
Piggybacking
[Golubchik96]
Slow down an earlier stream (e.g., -5%) and speed up the new one (e.g., +5%) to merge them into one stream.
Limited efficiency due to long catch-up delay.
Implementation is complicated.
41
Patching
[Figure: a regular multicast of video A in progress.]
42
Proposed Technique: Patching
[Figure: client B joins t time units late; a patching stream delivers the missed prefix while the Video Player buffer caches the regular multicast from the skew point.]
43
Proposed Technique: Patching
[Figure: at time 2t the patching stream ends; the skew point is absorbed by the client buffer.]
44
Client Design
[Figure: each client's Data Loader receives the regular multicast (Lr) and, if needed, the patching multicast (Lp) from the video server. Client A plays the regular stream directly; clients B and C play the patching stream while buffering the regular stream for later playback.]
45
Server Design
Server must decide when to schedule a regular stream or a patching stream.
[Figure: over time, streams A(r), B(p), C(p), D(p) form one multicast group and E(r), F(p), G(p) the next — r: regular stream; p: patching stream.]
46
Two Simple Approaches
If no regular stream for the same
video exists, a new regular stream
is scheduled

Otherwise, two policies can be used to make the decision: Greedy Patching and Grace Patching.
47
Greedy Patching
A patching stream is always scheduled.
[Figure: regular stream A runs for the full video length; later clients B and C always patch, so the shared data shrink as the skew grows beyond the buffer size.]
48
Grace Patching
If client buffer is large enough to absorb
the skew, a patching stream is scheduled;
otherwise, a new regular stream is
scheduled.
[Figure: regular stream A spans the video length; client B, within the buffer size of the skew, shares data via a patching stream; client C, beyond it, gets a new regular stream.]
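The Grace Patching decision reduces to a buffer comparison; a minimal sketch (the minute values are illustrative):

```python
def grace_patching(skew_minutes: float, buffer_minutes: float) -> str:
    """Schedule a patching stream only if the client buffer can absorb the skew
    (the gap between now and the start of the latest regular multicast)."""
    return "patch" if skew_minutes <= buffer_minutes else "regular"

print(grace_patching(3, 5))   # patch   - the buffer absorbs a 3-minute skew
print(grace_patching(8, 5))   # regular - skew too large; start a new regular stream
```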
49
Local Distribution Technologies
[Figure: video servers connect through an ATM or SONET backbone network of switches to local distribution networks, which serve the clients.]
ADSL (Asymmetric Digital Subscriber Line): currently 8 Mbps in one direction, and eventually speeds as high as 50 Mbps.
HFC (Hybrid Fiber Coax): current 300-450 MHz coax cables are replaced by 750 MHz coax cable to achieve a total of 2 Gbps.
50
Performance Study
Compared with conventional batching
Maximum Factored Queue (MFQ) is used
Two scenarios are studied
No defection
average latency
Defection allowed
average latency, defection rate, and unfairness
51
Simulation Parameters
Parameter                        Default    Range
Request rate (requests/min)      50         10-90
Client buffer (min of data)      5          0-10
Server bandwidth (streams)       1,200      400-1,800
Video length (minutes)           90         N/A
Number of videos                 100        N/A
Video access skew factor         0.7        N/A
Number of requests               200,000    N/A
52
Effect of Server Bandwidth
(Client buffer: 5 minutes; request rate: 50 arrivals/minute; no defection)
[Plot: Average Latency (seconds) vs. server communication bandwidth (400-1,800 streams) for Conventional Batching, Greedy Patching, and Grace Patching.]
53
Effect of Client Buffer
(Server bandwidth: 1,200 streams; request rate: 50 arrivals/minute; no defection)
[Plot: Average Latency (seconds) vs. client buffer size (0-10 minutes of data) for Conventional Batching, Greedy Patching, and Grace Patching.]
54
Effect of Request Rate
(Server bandwidth: 1,200 streams; client buffer: 5 minutes; no defection)
[Plot: Average Latency (seconds) vs. request rate (10-110 requests/minute) for Conventional Batching, Greedy Patching, and Grace Patching.]
55
Optimal Patching
[Figure: streams A(r), B(p), C(p), D(p) and E(r), F(p), G(p) form successive multicast groups; requests arriving within the patching window after a regular stream are served with patches.]
What is the optimal patching window?
56
Optimal Patching Window
D is the mean total amount of data transmitted by a multicast group.
Minimize the server bandwidth requirement, D/W, under various values of the patching window W.
[Figure: a regular stream spans the video length; patches launched within window W are limited by the client buffer size.]
57
Optimal Patching Window
Compute D, the mean amount of data transmitted for each multicast group.
Determine t, the average time duration of a multicast group.
The server bandwidth requirement is D/t, which is a function of the patching period.
Find the patching period that minimizes the bandwidth requirement.
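A numeric sketch of this minimization under modeling assumptions of my own (not from the slides): Poisson arrivals at rate lam, each group transmitting the full video once plus, for roughly lam*W patching clients, an average patch of W/2, with a group lasting about W + 1/lam:

```python
def bandwidth(w: float, video_len: float, lam: float) -> float:
    """Mean server bandwidth D/t for patching window w (model assumptions above)."""
    d = video_len + lam * w * (w / 2)   # regular stream + expected patch data
    t = w + 1.0 / lam                   # mean duration of a multicast group
    return d / t

# 120-minute video, 1 request per minute: scan for the best window.
L, lam = 120.0, 1.0
best_w = min((w / 10 for w in range(1, 1200)), key=lambda w: bandwidth(w, L, lam))
print(round(best_w, 1))  # 14.5 -- the model's optimum, (sqrt(1 + 2*L*lam) - 1)/lam
```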
58
Candidates for Optimal
Patching Window
59
Concluding Remarks
Unlike conventional multicast, requests
can be served immediately under
patching

Patching makes multicast more efficient
by dynamically expanding the multicast
tree

Patching streams usually deliver only the first few minutes of video data

Patching is very simple and requires no
specialized hardware
60
Patching on Internet
Problem:
Current Internet does not support
multicast

A Solution:
Deploying an overlay of software
routers on the Internet
Multicast is implemented on this
overlay using only IP unicast

61
Content Routing
Each router forwards its Find messages to other routers in a round-robin manner.
[Figure: a client's Find message travels through the overlay (Root, routers A-E) until it locates router D, which already carries the stream; the video stream is then forwarded from the server through the overlay to the client.]
62
Removal of An Overlay Node
Inform the child nodes to reconnect to the grandparent.
[Figure: before and after adjustment — when node F leaves, its children reconnect to their grandparent.]
63
Failure of Parent Node
Data stop coming from the parent.
Reconnect to the server.
[Figure: before and after adjustment — when the parent node fails, its children reconnect to the server.]
64
Slow Incoming Stream
Reconnect upward to the grandparent.
[Figure: before and after adjustment — a node on a slow link reconnects to its grandparent.]
65
Downward Reconnection
When reconnection reaches the server, future reconnection of this link goes downward.
Downward reconnection is done through a sibling node selected in a round-robin manner.
When downward reconnection reaches a leaf node, future reconnection of this link goes upward again.
[Figure: before and after adjustment — a slow link is rerouted downward through a sibling node.]
66
Limitation of Patching
The performance of Patching is limited
by the server bandwidth.
Can we scale the application beyond
the physical limitation of the server ?
67
Using a hierarchy of multicasts
Clients multicast data to other clients downstream.
Demand on server bandwidth is substantially reduced.
[Figure: with dedicated channels, 7 clients need 7 video streams from the server; with multicast, 3 streams serve batches 1-3; with chaining, only one video stream feeds batch 1, whose clients relay it from their network caches to batches 2 and 3 — a virtual batch.]
68
Chaining
Highly scalable and efficient.
But implementation is a challenge.
[Figure: the video server streams to client A, which caches the data on disk and relays it to client B, which in turn relays to client C.]
69
Scheduling Multicasts
Conventional Multicast:
I state: the video has no pending requests.
Q state: the video has at least one pending request.
[State diagram: I -> Q when a request arrives; Q -> I when resources are granted; further arrivals keep the video in Q.]
Chaining:
C state: until the first frame is dropped from the multicast tree, the tree continues to grow and the video stays in the C state.
[State diagram: I -> Q when a request arrives; Q -> C when resources are granted; arrivals in C join the growing tree; dropping the first frame returns the video to I.]
70
Enhancement
E state:
When resources become available, the service begins for all the pending requests except for the youngest one.
As long as new requests continue to arrive, the video remains in the E state.
If the arrival of requests discontinues for an extended period, the video transits into the C state after initiating the service for the last pending request.
[State diagram: I -> Q when a request arrives; Q -> E when resources are granted; E -> C after serving the last pending request; C -> I when the first frame is dropped.]
This strategy returns to the I state much less frequently.
It is less demanding on the server bandwidth.
71
Advantages of Chaining
Requests do not have to wait for the next
multicast.
Better service latency
Clients can receive data from the expanding
multicast hierarchy instead of the server.
Less demanding on server bandwidth
Every client that uses the service
contributes its resources to the distributed
environment.
Scalable
72
Chaining is Expensive ?
Each receiving end must have caching space.
56 Mbytes can cache five minutes of MPEG-1 video.
The additional cost can easily pay for itself in a short time.
73
Limitation of Chaining
It only works for a collaborating
environment
i.e., the receiving nodes are on all the
time
It conserves server bandwidth, but
not network bandwidth.
74
Another Challenge
Can a multicast deliver the entire
video to all the receivers who may
subscribe to the multicast at
different times ?
If we can achieve the above capability, we would not need to multicast as frequently.
75
Range Multicast [Hua02]
Deploying an overlay of software routers on
the Internet
Video data are transmitted to clients
through these software routers
Each router caches a prefix of the video
streams passing through
This buffer may be used to provide the
entire video content to subsequent clients
arriving within a buffer-size period
76
Range Multicast Group
Caching Multicast Protocol (CMP)
Four clients join the same server stream at different times (0, 7, 8, 11) without delay.
Each client sees the entire video.
Buffer size: each router can cache 10 time units of video data.
Assumption: no transmission delay.
[Figure: clients C1-C4 attach through routers R1-R8 below the root; the prefix cached at each router serves the later arrivals.]
77
Multicast Range
All members of a conventional multicast group share the same play point at all times; they must join at the multicast time.
Members of a range multicast group can have a range of different play points; they can join at their own time.
Multicast range at time 11: [0, 11].
[Figure: the same topology as before — clients that joined at times 0, 7, 8, and 11 hold play points spanning the whole range.]
78
Network Cache Management
Initially, a cache chunk is free.
When a free chunk is dispatched for a new stream, the chunk becomes busy.
A busy chunk becomes hot if its content matches a new service request before the chunk is full.
[State diagram: free -> busy when a new stream arrives; busy -> hot when a service request arrives before the chunk is full; a hot chunk reverts when its last service ends; a busy chunk can be replaced by a new stream.]
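A minimal sketch of the chunk life cycle as a state machine (the transition set mirrors the diagram on this slide; the revert-to-busy choice when the last service ends is an assumption):

```python
# States: "free", "busy", "hot". Events drive the transitions described above.
TRANSITIONS = {
    ("free", "new_stream"): "busy",            # chunk dispatched for a new stream
    ("busy", "request_before_full"): "hot",    # content matches a new request
    ("busy", "replaced_by_new_stream"): "busy",
    ("hot", "last_service_ends"): "busy",      # assumption: hot reverts to busy
}

def step(state: str, event: str) -> str:
    return TRANSITIONS.get((state, event), state)  # ignore non-applicable events

s = "free"
for e in ["new_stream", "request_before_full", "last_service_ends"]:
    s = step(s, e)
print(s)  # busy
```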
79
CMP vs. Chaining
[Figure: the same topology (video server, routers R1-R10, clients C1-C4 arriving at times 0, 7, 8, 11) under Chaining and under CMP; CMP serves the later arrivals from router caches along the paths rather than from the earlier clients.]
Assumption: each router has one chunk of storage space capable of caching 10 time units of video.
80
CMP vs. Proxy Servers
Proxy servers are placed at the edge of the network to serve local users; CMP routers are located throughout the network for all users to share.
Proxy servers are managed autonomously; the CMP router caches are seen collectively as a single unit.
81
CMP vs. Proxy Servers
With proxies, popular data are heavily duplicated if we cache long videos; CMP routers cache only a small leading portion of the video passing through.
With proxies, caching long videos is not advisable, and much data must still be obtained from the server; with CMP, the majority of the data are obtained from the network.
82
VCR-Like Interactivity
Continuous interactive functions: fast forward, fast rewind, pause.
Discontinuous interactive functions: jump forward, jump backward.
Useful for many VoD applications.

VCR Interaction Using Client Buffer
[Figure: the play point moves within the buffered window of the video stream (e.g., frames N to N+20) during play, pause, 4X fast forward, and jump backward; the buffered window slides forward as playback proceeds.]
84
Interaction Using Batching [Almeroth96]
Requests arriving during a time slot form a multicast group.
Jump operations can be realized by switching to an appropriate multicast group.
Use an emergency stream if a destination multicast group does not exist.
[Figure: multicast groups start once per batching period; jumps move clients between groups, with an emergency stream covering the gaps.]
Continuous Interactivity
under Batching
Pause:
Stop the display
Return to normal play as in Jump
Fast Forward:
Fast forward the video frames in the buffer
When the buffer is exhausted, return to
normal play as in Jump
Fast Rewind:
Same as in fast forward, but in reverse
direction
SAM (Split and Merge) Protocol
[Liao97]
Uses 2 types of streams: S streams for normal multicast and I streams for interactivity.
When a user initiates an interactive operation:
Use an I channel to interact with the video.
When done, use the I channel as a patching stream to join an existing multicast.
Return the I channel.
Advantage: Unrestricted fast forward and rewind.
Disadvantage: I streams require substantial bandwidth.
87
Resuming Normal Play in SAM
[Figure: candidate S streams relative to the resume point. The original S stream and one candidate are ineligible because segment 6 is in the future; another is ineligible because the skew d exceeds the buffer size, so patching cannot help; the targeted S stream has enough buffer to cache segments 8 and 9.]
Use the I stream to download segments 6 and 7 and render them onto the screen; at the same time, join the target multicast and cache the data, starting from segment 8, in a local buffer.
88
Interaction with Broadcast
Video
The interactive techniques
developed for Batching can also be
used for Staggered Broadcast
However, Staggered Broadcast does
not perform well

89
Client Centric Approach (CCA)
Server broadcasts each segment at the playback rate.
Clients use c loaders.
Each loader downloads its streams sequentially, e.g., the i-th loader is responsible for segments i, i+c, i+2c, i+3c, ...
Only one loader is used to download all the equal-size W-segments sequentially.
[Figure: channels 1 to K carry the segments, partitioned into Groups 1, 2, ..., i; W is the size of the largest segments. C = 3 (clients have three loaders).]
The broadcast series is generated by:
f(n) = 1          if n = 1,
       2 f(n-1)   if n mod c = 1,
       f(n-1)     if n mod c != 1.
90
CCA is Good for Interactivity
Segments in the same group are downloaded at the same time — this facilitates fast forward.
The last segment of a group is of the same size as the first segment of the next group — this ensures smooth continuous playback after interactivity.
[Figure: segments of sizes 1, 2, 2, 5, 5, 12, 12 in groups Gr1-Gr4. After a jump, Skyscraper does not guarantee smooth playback — the actual destination point can land on data the Odd/Even loaders have not downloaded, leaving missing data. CCA always guarantees smooth playback, since Loaders 1-3 cover every segment of the group at the broadcast point.]
91
Broadcast-based Interactive
Technique (BIT) [Hua02]
[Figure: regular channels Cr1 through CrKr carry the video segments in groups of width W; each group also has an interactive channel (Ci1 through CiKi) that broadcasts a compressed version of the data in the group.]
92
BIT
Two buffers: a Normal Buffer and an Interactive Buffer.
When the Interactive Buffer is exhausted, the client must resume normal play.
[Flowchart: render frames from the Normal Buffer during normal play; when an interaction is initiated, continuous operations and jumps whose destination lies in the Normal Buffer are served from it, while other operations render from the Interactive Buffer; on resume, load the appropriate group to keep the resume point near the middle of the Normal Buffer.]
93
BIT
Resume-Play Operation
Three segments (i, i+1, i+2) are being downloaded simultaneously.
The actual destination point is chosen from among the frames at the broadcast point to ensure continuous playback.
[Figure: eight cases relating the desired destination to the broadcast points of segments i, i+1, and i+2, each yielding an actual destination point.]
94
BIT - User Behavior Model
States: Play (duration m_p), Fast Reverse (m_fr), Fast Forward (m_ff), Pause (m_pause), Jump Forward (m_jf), and Jump Backward (m_jb); Play issues action x with probability P_x, and each interaction returns to Play with probability 1.
m_x: duration of action x; P_x: probability of issuing action x; P_i: probability of issuing an interaction; m_i: duration of the interaction.
m_ff = m_fr = m_pause = m_jf = m_jb; P_pause = P_ff = P_fr = P_jf = P_jb = P_i/5.
dr = m_i/m_p is the interaction ratio.
Performance Metrics

Percentage of Unsuccessful Actions
Interaction fails if the buffer fails to
accommodate the operation
E.g., a long-duration fast forward pushes the
play point off the Interactive Buffer
Average Percentage of Completion
Measure the degree of incompleteness
E.g., if a 20-second fast forward is forced
to resume normal play after 15 seconds, the
Percentage of Completion is 15/20, or 75%.
96
BIT - Simulation Results
[Plots: Average Percentage of Completion and Percentage of Unsuccessful Actions vs. duration ratio (0.5-3.5) and vs. regular buffer size (1-7), comparing Active Buffer Management (A.B.M.) with BIT at duration ratios 1 and 1.5.]
97
Support Client Heterogeneity
Using multi-resolution encoding
Bandwidth Adaptor
HeRO Broadcasting
98
Multi-resolution Encoding
Encode the video data as a series of layers.
A user can individually mould its service to fit its capacity.
A user keeps adding layers until congested, then drops the highest layer.
Drawback: compromises the display quality.
99
Bandwidth Adaptors
[Figure: a server-end adaptor feeds clients at the average bandwidth; client-end adaptors further down the tree step the stream down to less bandwidth, and even less, for constrained clients.]
Advantage: all clients enjoy the same quality display.
100
Requirements for an Adaptor
An adaptor dynamically transforms a
given broadcast into another less
demanding one
The segmentation scheme must allow
easy transformation of a broadcast into
another
CCA segmentation technique has this
property
101
Two Segmentation
Examples
102
Adaptation (1)
The adaptor downloads from all broadcast channels simultaneously.
[Figure: the server's sender routines broadcast segments 1 through K_s on their channels; the adaptor's loader routine 1 receives chunks 68, 69, 70, 71, ... of segment 1 and calls InsertChunk(68) to decide whether to buffer the chunk (yes) or ignore it (no).]
103
Adaptation (2)
Each sender routine retrieves data chunks from the buffer and broadcasts them to the downstream.
For each chunk, the sender routine calls DeleteChunk to decide whether the chunk can be deleted from the buffer (e.g., DeleteChunk(370): if no, just send; if yes, send and delete from the buffer).
104
Buffer Management
insertChunk implements an As Late As Possible policy, i.e.:
If another occurrence of this chunk will be available from the server before it is needed, then ignore this one; else buffer it.
deleteChunk implements an As Soon As Possible policy, i.e.:
Determine the next time the chunk will need to be broadcast to the downstream.
If this moment comes before the availability of the chunk at the server, then keep it in storage; else delete it.
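A minimal sketch of the two policies (the time stamps are abstract; how next-occurrence and need times are computed from the broadcast schedules is left out here):

```python
def insert_chunk(next_server_occurrence: float, needed_at: float) -> bool:
    """As-Late-As-Possible: buffer a chunk only if the server will not
    rebroadcast it in time for the downstream schedule."""
    return next_server_occurrence > needed_at

def delete_chunk(next_downstream_need: float, next_server_occurrence: float) -> bool:
    """As-Soon-As-Possible: delete a chunk once the server can re-supply it
    before the downstream needs it again."""
    return next_server_occurrence <= next_downstream_need

print(insert_chunk(50, 40))  # True  - server rebroadcast comes too late; buffer it
print(insert_chunk(30, 40))  # False - another occurrence arrives in time; ignore
print(delete_chunk(60, 55))  # True  - server re-supplies by then; safe to delete
```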

105
The Adaptor Buffer
Computation is not intensive.
It is only performed for the first
chunk of the segment, i.e.,
If this initial chunk is marked for
caching, so will be the rest of the
segment.
Same thing goes for deletion.
106
The start-up delay
The start-up delay is the broadcast period of
the first segment on the server
107
HeRO: Heterogeneous Receiver-Oriented Broadcasting
Allows receivers of various communication capabilities to share the same periodic broadcast.
All receivers enjoy the same video quality.
Bandwidth adaptors are not used.
108
HeRO Data Segmentation
The size of the i-th segment is 2^(i-1) times the size of the first segment.
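A short sketch of this geometric segmentation (function name and parameters are illustrative): since the sizes 1, 2, 4, ..., 2^(n-1) sum to 2^n - 1 units, the first segment is the video length divided by 2^n - 1.

```python
def hero_segment_sizes(video_length, num_segments):
    """HeRO segmentation: segment i is 2**(i-1) times the size of
    segment 1; the geometric series sums to the whole video length."""
    unit = video_length / (2 ** num_segments - 1)  # size of segment 1
    return [unit * 2 ** i for i in range(num_segments)]

hero_segment_sizes(video_length=126, num_segments=6)
# [2.0, 4.0, 8.0, 16.0, 32.0, 64.0]
```

A small first segment is what keeps the start-up delay low: the client can begin playback after one broadcast period of segment 1.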
109
HeRO Download Strategy
The number of channels needed depends on the time slot in which the service request arrives.
Loader i downloads segments i, i+C, i+2C, i+3C, etc. sequentially, where C is the number of loaders available.
[Figure: the broadcast schedule over one global period.]
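The loader-to-segment assignment above is a simple round-robin; a sketch (names are illustrative):

```python
def loader_schedule(num_segments, num_loaders):
    """HeRO loader assignment: loader i sequentially downloads
    segments i, i+C, i+2C, ..., where C is the number of loaders."""
    return {i: list(range(i, num_segments + 1, num_loaders))
            for i in range(1, num_loaders + 1)}

loader_schedule(num_segments=6, num_loaders=2)
# {1: [1, 3, 5], 2: [2, 4, 6]}
```

With fewer loaders, each loader covers more segments in sequence, which is why less capable clients may have to wait for a later alignment of the broadcast periods.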
110
HeRO Regular Channels
[Figure: Request 1 arrives; the first user can download from six channels simultaneously.]
111
HeRO Regular Channels
[Figure: Request 2 arrives; the second user can download from only two channels simultaneously.]
112
Worst Case for Clients with 2 Loaders
The worst-case latency is 11 time units. The worst cases appear because the broadcast periods coincide at the end of the global period.
[Figure: Request 2 arrives where the broadcast periods coincide and waits 11 time units.]
113
Worst Case for Clients with 3 Loaders
The worst-case latency is 5 time units. The worst cases appear because the broadcast periods coincide at the end of the global period.
[Figure: the request arrives at the coincidence point and waits 5 time units.]
114
Observations of Worst Cases
For a client with a given bandwidth, the time slots at which it can start the video are not uniformly distributed over the global period.
The non-uniformity varies over the global period depending on the degree of coincidence among the broadcast periods of the various segments.
115
Observations of Worst Cases (cont.)
The worst non-uniformity occurs at the end of each global period, when the broadcast periods of all segments coincide.
The non-uniformity causes long service delays for clients with less bandwidth.
We need to minimize this coincidence to improve the worst case.
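The coincidence can be made concrete with a small sketch (illustrative, assuming segment i is rebroadcast every 2^(i-1) time units): counting how many segments start a fresh broadcast at each slot shows full coincidence exactly at the global-period boundary.

```python
def coincidences(num_segments):
    """For each slot of one global period, count how many segments start
    a new broadcast there (segment i has period 2**(i-1) time units)."""
    n = num_segments
    global_period = 2 ** (n - 1)  # period of the largest segment
    return [sum(1 for i in range(n) if t % (2 ** i) == 0)
            for t in range(global_period)]

coincidences(4)
# [4, 1, 2, 1, 3, 1, 2, 1] -- all 4 periods coincide at slot 0,
# i.e., at the boundary of every global period
```

Slots right after a full coincidence are exactly where a low-bandwidth client sees its longest wait, which motivates the shifted channel introduced next.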
116
Adding One More Channel
We broadcast the last segment on one more channel, but with a time shift of half its size. This offers more opportunities to download the last segment and, above all, eliminates every coincidence with the previous segments.
[Figure: the regular group of channels plus the shifted channel.]
117
Shifted Channels
To reduce service latency for less capable clients, broadcast the longest segments on a second channel with a phase offset of half their size.
[Figure: channels 1-4 plus paired channels 5a/5b and 6a/6b over time slots 0-32; channel 5b is shifted by D5/2 and channel 6b by D6/2, where Di is the broadcast period of segment i; the time unit t is D1.]
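As an illustration of the phase offset (not the slides' exact schedule; names are hypothetical), the start times of a segment's broadcasts on its regular channel and on the half-period-shifted channel, assuming segment i has period 2^(i-1) in units of D1:

```python
def shifted_starts(seg_index, horizon):
    """Broadcast start times of segment seg_index on its regular channel
    and on a second channel phase-shifted by half the segment's period."""
    period = 2 ** (seg_index - 1)        # Di, in units of D1
    regular = list(range(0, horizon, period))
    shifted = [t + period // 2 for t in regular if t + period // 2 < horizon]
    return regular, shifted

shifted_starts(seg_index=5, horizon=32)
# ([0, 16], [8, 24]) -- the 5b copies start midway between the 5a copies
```

The shifted copies give a late-arriving client a second chance at the long segment without waiting a full period of it.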
118
HeRO Experimental Results
Under a homogeneous environment, HeRO is very competitive in service latency compared with the best protocols to date, and it is the most efficient protocol at saving client buffer space.
HeRO is the first periodic broadcast technique designed to address heterogeneity in receiver bandwidth: less capable clients enjoy the same playback quality.
119
2-Phase Service Model (2PSM)
Browsing Videos in a Low-Bandwidth Environment
120
Search Model
Use similarity matching (e.g., keyword search) to find candidate videos.
Preview some of the candidates to identify the desired video.
Apply VCR-style functions to search for video segments within it.
121
Conventional Approach
[Figure: the server streams segments S0, S1, S2, S3, ... to the client over time.]
1. Download S0.
2. Download S1 while playing S0.
3. Download S2 while playing S1, and so on.
Advantage: reduces wait time.
Disadvantage: unsuitable for video libraries.
122
Search Techniques
Use extra preview files to support the preview function.
Drawbacks: requires more storage space; downloading the preview file adds delay to the service.
Use separate fast-forward and fast-reverse files to provide the VCR-style operations.
Drawbacks: requires more storage space; the server can become a bottleneck.
123
Challenges
How to download the preview frames for FREE ?
No additional delay
No additional storage requirement
How to support VCR operations without VCR files ?
No overhead for the server
No additional storage requirement
124
2PSM Preview Phase
[Figure: the video's GOFs (groups of frames), numbered 0-191, laid out in a grid between markers L and R. Step 1 downloads a sparse set of GOFs, and each subsequent step (Step 2, Step 3, Step 4, ...) downloads GOFs lying between those already fetched, so the set of GOFs available for previewing grows denser after every step.]
The preview quality improves gradually.
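One plausible progressive-preview schedule in this spirit (an illustration of the idea, not necessarily 2PSM's exact download order): start with every stride-th GOF, then halve the stride each step, fetching only GOFs not seen before.

```python
def preview_order(num_gofs, stride):
    """Progressive preview: step 1 takes every `stride`-th GOF; each
    later step halves the stride and downloads only the GOFs not yet
    fetched, so preview density (and quality) improves gradually."""
    seen, steps = set(), []
    while stride >= 1:
        step = [g for g in range(0, num_gofs, stride) if g not in seen]
        seen.update(step)
        steps.append(step)
        stride //= 2
    return steps

steps = preview_order(num_gofs=24, stride=8)
# steps[0] == [0, 8, 16]; steps[1] == [4, 12, 20]; later steps fill the gaps
```

After every step the client can already render a coarse preview from whatever it holds, which is what makes the preview frames effectively free.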
125
2PSM Playback Phase
[Figure: the server stores playback units PU0, PU1, ..., PU6, each with associated pieces Li and Ri. The client downloads some pieces during the initialization phase and the rest during the playback phase, displaying PU0, PU1, PU2, ... in sequence while the remaining pieces arrive.]
126
Remarks
1. It requires no extra files to provide the preview feature.
2. Downloading the preview frames is free.
3. It requires no extra files to support the VCR functionality.
4. Each client manages its own VCR-style interaction; the server is not involved.