SAHARA: A Revolutionary Service Architecture For Future Telecommunications Systems

Randy H. Katz, Anthony Joseph
Computer Science Division
Electrical Engineering and Computer Science Department
University of California, Berkeley
Berkeley, CA 94720-1776
Project Goals
• Delivery of end-to-end services with
desirable properties (e.g., performance,
reliability, “qualities”), provided by multiple
potentially distrusting service providers
• Architectural framework for
– Economics-based resource allocation
– Third-party mediators, such as Clearinghouses
– Dynamic formation of service confederations
– Support for diverse business models
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
The Huge Expense of New
Telecomms Infrastructures
• European auctions for 3G spectrum: 50
billion ECU and counting
• Capital outlays likely to match spectrum
expenses, all before the first ECU of
revenue!
• Compelling motivation for collaborative
deployment of wireless infrastructure
Any Way to Build
a Network?
• Partitioning of frequencies independent of
actual subscriber density
– Successful operators oversubscribe resources, while
less popular providers retain excess capacity
– Different flavor of roaming: among
collocated/competing service providers
• Duplicate antenna sites
– Serious problem given community resistance
• Redundant backhaul networks
– Limited economies of scale
The Case for Horizontal
Architectures
“The new rules for success will be to provide one
part of the puzzle and to cooperate with other
suppliers to create the complete solutions that
customers require. ... [V]ertical integration breaks
down when innovation speeds up. The big telecoms
firms that will win back investor confidence
soonest will be those with the courage to rip apart
their monolithic structure along functional layers,
to swap size for speed and to embrace rather than
fear disruptive technologies.”
The Economist Magazine, 16 December 2000
Horizontal Internet Service
Business Model
[Layered diagram, top to bottom:]
• Applications (Portals, E-Commerce, E-Tainment, Media)
• Application Infrastructure Services -- AIP/ISV
(Distribution, Caching, Searching, Hosting)
• Application-specific Servers -- ASP (Streaming Media,
Transformation); Internet Data Centers
• Application-specific Overlay Networks (Multicast Tunnels,
Mgmt Services)
• Internetworking -- ISP/CLEC Global Packet Network
(Connectivity)
Feasible Alternative: Horizontal
Competition vs. Vertical Integration
• Service Operators “own” the customer,
provide “brand”, issue/collect the bills
• Independent Backhaul Operators
• Independent Antenna Site Operators
• Independent Owners of the Spectrum
• Microscale auctions/leases of network
resources
• Emerging concept of Virtual Operators
Virtual
Operator
• Local premises owner deploys own access infrastructure
– Better coverage/more rapid build out of network
– Deployments in airports, hotels, conference centers, office
buildings, campuses, …
• Overlay service provider (e.g., PBMS) vs.
organizational service provider (e.g., UCB IS&T)
– Single bill/settle with service participants
• Support for confederated/virtual devices
– Mini-BS for cellular/data + WLAN for high rate data
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
The “Sahara” Project
• Service
• Architecture for
• Heterogeneous
• Access,
• Resources, and
• Applications
SAHARA Assumptions
• Dynamic confederations to better share resources &
deploy access/achieve regional coverage more rapidly
• Scarce resources efficiently allocated using dynamic
“market-driven” mechanisms
• Trusted third parties manage resource marketplace
on a fair, unbiased, audited, and verifiable basis
• Vertical stovepipe replaced by horizontally organized
“multi-providers,” open to increased competition and
more efficient allocation of resources
Architectural Elements
• “Open” service/resource allocation model
– Independent service creation, establishment,
placement, in overlapping domains 
– Resources, capabilities, status described/exchanged
amongst confederates, via enhanced capability
negotiation
– Allocation based on economic methods, such as
congestion pricing, dynamic marketplaces/auctions
– Trust management among participants, based on
trusted third party monitors
Architectural Elements
• Forming dynamic confederations
– Discovering potential confederates
– Establishing trust relationships
– Managing transitive trust relationships & levels
of transparency
– Not all confederates need be competitors--
heterogeneous, collocated access networks to
better support applications
Architectural Elements
• Alternative View: Service Brokering
– Dynamically construct overlays on component
services provided by underlying service
providers
• E.g., overlay network segments with desirable
performance attributes
• E.g., construct end-to-end multicast trees from
subtrees in different service provider clouds
– Redirect to alternative service instances
• E.g., choose instance based on distance, network load,
server load, trust relationships, resilience to network
failure, …
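One concrete (hypothetical) way to realize such redirection is a composite cost over the listed criteria. The linear form, the weights, and the field names below are our assumptions; the slide only enumerates the candidate criteria.

```python
# Illustrative instance selection for redirection. The weighted linear
# cost and all weights are assumptions; the architecture only names the
# criteria (distance, network load, server load, trust, resilience).

def cost(inst: dict) -> float:
    return (0.4 * inst["distance_ms"]      # network distance/latency
            + 0.2 * inst["net_load"]       # load on the path
            + 0.2 * inst["server_load"]    # load on the instance
            - 0.2 * inst["trust"])         # higher trust lowers cost

def redirect(instances: list[dict]) -> dict:
    """Pick the service instance with the lowest composite cost."""
    return min(instances, key=cost)

# Hypothetical candidate instances in two provider clouds:
chosen = redirect([
    {"distance_ms": 20, "net_load": 0.5, "server_load": 0.3, "trust": 0.9},
    {"distance_ms": 5,  "net_load": 0.9, "server_load": 0.8, "trust": 0.6},
])
```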
Deliverables
• Architecture and Mechanisms for
– Fine grain market-driven resource allocation
– Application awareness in decision making
• Confederations and Trust Management
– Dynamic marshalling, observation/verification of
participant behaviors, dissolution of confederations
– Mechanisms to “audit” third party resource allocations,
ensuring fairness and freedom from bias in operation
• New Handoff Concepts Based on Redirection
– Not just network handoff for lower cost access
– Also alternative service provider to balance loads
Research Methodology
[Cycle diagram: Analyze & Design -> Prototype -> Evaluate -> repeat]
• Evaluate existing system to discover bottlenecks
• Analyze alternatives to select among approaches
• Prototype selected alternatives to understand
implementation complexities
• Repeat
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
Initial Investigations
• Congestion-Based Pricing
– Economics-based resource allocation
• Clearinghouse Architecture
– Trusted Resource Mediators
– Measurement-based Admission Control with
traffic policing
• Service Composition
– Achieving performance and reliability from multiple,
widely placed service instances
Congestion-Based Pricing
• Hypothesis: Dynamic pricing influences
user behavior
– E.g., shorten/defer call sessions;
accept lower audio/video QoS
• When a critical resource reaches congestion levels,
modify prices to drive utilization back to
“acceptable” levels (see sketch below)
– E.g., available bandwidth, time slots, number of
simultaneous sessions
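A minimal sketch of such a congestion-pricing loop, assuming a multiplicative update rule; the target utilization, multiplier, and price bounds are illustrative constants, not values from the study.

```python
# Minimal congestion-pricing sketch. The multiplicative update rule and
# all constants are our assumptions; the talk does not specify a rule.

TARGET_UTILIZATION = 0.8        # "acceptable" load level (assumed)
ADJUST_FACTOR = 1.25            # price multiplier when congested (assumed)
MIN_PRICE, MAX_PRICE = 1, 100   # tokens per minute (assumed)

def update_price(price: float, utilization: float) -> float:
    """Raise the price when the critical resource (e.g., PSTN gateway
    lines) is congested; lower it when capacity sits idle."""
    if utilization > TARGET_UTILIZATION:
        price *= ADJUST_FACTOR      # push users to talk shorter/later
    elif utilization < TARGET_UTILIZATION / 2:
        price /= ADJUST_FACTOR      # attract deferred demand back
    return min(max(price, MIN_PRICE), MAX_PRICE)
```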
Computer Telephony Services
(CTS) Testbed

[Diagram: Internet <-> Internet-to-PSTN Gateways <-> PSTN]
• E.g., Dialpad.com & Net-to-Phone
• Gateways as bottlenecks (limited PSTN access lines)
• Use congestion pricing (CP) to entice users to
– Talk shorter
– Talk later
– Accept lower quality
Berkeley User Study

• Goal: determine effectiveness of CP


• Figures of merit
– Maximize utilization (service not idling)
– Reduce provisioning
– Reduce congestion (reduced blocking probability)
• User acceptance/reactions to CP
– Talk shorter
– Wait
– Defer calls to another time
– Use alternative access device
– Accept reduced connection quality
Experiments

• Vary Price, Quality, Interval of Price Changes


• Pricing policies compared (see sketch below)
– Congestion pricing: rate depends on current load
– Flat rate pricing: same rate all the time
– Time-of-day pricing: higher rate during peak-hours
– Call-duration pricing: higher rate for long duration calls
– Access-device pricing: higher rate for using a phone
instead of a computer
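The policies can be sketched as per-minute rate functions of the call state; every rate and threshold below is hypothetical, since the slides do not give the actual token schedules.

```python
# Illustrative rate functions (tokens/minute) for the five experimental
# pricing policies. All rates and thresholds are hypothetical.

def flat_rate(minute, hour, load, device):
    return 2                               # same rate all the time

def time_of_day(minute, hour, load, device):
    return 4 if 19 <= hour < 23 else 2     # higher during 7-11pm peak

def call_duration(minute, hour, load, device):
    return 4 if minute > 10 else 2         # long calls cost more per minute

def congestion(minute, hour, load, device):
    return 2 + 2 * max(load - 1, 0)        # rises with simultaneous users

def access_device(minute, hour, load, device):
    return 4 if device == "phone" else 2   # phone costs more than computer
```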
Experimental Setup & Limitations

• Computers vs. phones to make/receive free phone calls


• Different pricing policies: 1000 tokens/week
• Real-time pricing, connection quality & accounting information displayed to users
Flat Rate Versus Time-of-Day
[Two bar charts: calling pattern in minutes vs. time of day (hours
0-23), under flat-rate pricing and under time-of-day pricing with peak
hours 7-11pm]
• Peak shifted! High bursts right before & right after peak hours
Initial Results
• Call-duration pricing
– Hypothesis: fewer long-duration calls & more short-duration
calls
– Result: fewer long-duration calls, but no increase in short-
duration calls
• Congestion pricing
– Congestion: two or more simultaneous users
– Hypothesis: talk less when encountering CP
– Result: each user used the service 8.44 minutes (standard
error 11.3) more overall; observed reduction in call session
length when CP was encountered: 2.31 minutes (standard error 2.68)
– Not statistically significant (t-test)
– Not enough users to cause much congestion
Preliminary Findings
• Feasible to implement/use CP in real system
• Pricing better utilizes existing resources, reduces
congestion
• CP is better than other pricing policies
• Based on surveys, users prefer CP to flat rate
pricing if its average rate is lower
– Service providers can better utilize existing resources
by providing users with incentives to use CP
• Limitations
– Too few users
– Results apply only to telecommunication services
Clearinghouse
[Diagram: GSM wireless phones, an H.323 gateway to the PSTN, VoIP
(e.g., NetMeeting), web surfing/email TCP connections, and video
conferencing/distance learning all carried over an IP-based core]
• Vision: data, multimedia (video, voice, etc.) and
mobile applications over one IP network
• Question: How to regulate resource allocation within
and across multiple domains in a scalable
manner to achieve end-to-end QoS?
Clearinghouse Goals
• Design/build distributed control architecture for
scalable resource provisioning
– Predictive reservations across multiple domains
– Admission control & traffic policing at edge
• Demonstrate architecture’s properties and performance
– Achieve adequate performance w/o edge per-flow state
– Robust against traffic fluctuations and misbehaving flows
• Prototype proposed mechanisms
– Minimize edge router overhead for scalability/ease of deployment
Clearinghouse Architecture
• Clearinghouse distributed architecture--
each CH-node serves as a resource manager
• Functionalities
– Monitors network performance on ingress &
egress links
– Estimates traffic demand distributions
– Adapts trunk/aggregate reservations within &
across domains based on traffic statistics
– Performs admission control based on estimated
traffic matrix
– Coordinates traffic policing at ingress & egress
points for detecting misbehaving flows
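As an illustration of the adaptation step, a CH-node might set a trunk reservation from recent rate measurements with a mean-plus-margin rule; the margin and the sample values below are our assumptions, not the project's mechanism.

```python
# Sketch of a CH-node adapting an aggregate (trunk) reservation from
# measured traffic statistics. The mean-plus-margin rule and the safety
# margin are assumptions made for illustration.

import statistics

def adapt_reservation(rate_samples: list[float], margin: float = 2.0) -> float:
    """Reserve mean demand plus `margin` standard deviations, so the
    aggregate reservation tracks fluctuations without per-flow state."""
    mean = statistics.fmean(rate_samples)
    stdev = statistics.pstdev(rate_samples)
    return mean + margin * stdev

# e.g., recent aggregate rates (Mb/s) on one ingress-egress pipe:
reservation = adapt_reservation([40.2, 43.5, 39.8, 47.1, 44.0])
```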
Multiple-ISP Scenario
[Diagram: hosts attach via ingress routers (IR) and egress routers (ER)
to a mesh of ISPs 1, 2, ... m, n]
• Hybrid of flat and hierarchical structures
– Local hierarchy within large ISPs
• Distributes network state across CH-nodes and reduces
the amount of state information each maintains
– Flat structure for peer-to-peer relationships across
independent ISPs
Illustration
[Diagram: a host attaches through an edge router to ISP 1; CH0 nodes
manage level-0 logical domains (LD0), with a parent CH1 for level LD1]
• A hierarchy of Logical domains (LDs)


– e.g., LD0 can be a POP or a group of neighboring POPs

• A CH-node is associated with each LD


– Maintains resource allocations between ingress-egress pairs
– Estimates traffic demand distributions & updates parent CH-nodes
Illustration
[Diagram: parent CH1 nodes in ISP 1 and ISP n hold peer-to-peer
relationships across ISPs m and n; CH0 nodes manage the LD0s beneath]
• Parent CH-node
– Adapts trunk reservations across LDs for aggregate traffic
within an ISP
– Coordinates peer-to-peer trunk reservations across multiple ISPs
• Structure appears flat at the top level
Key Design Decisions
• Service model: ingress/egress routers as endpoints
– IE-Pipe(s,d) = aggregate traffic entering an ISP domain at
IR-s and exiting at ER-d
• Reservations set-up for aggregated flows on intra-
and inter-domain links
– Adapt dynamically to track traffic fluctuation
– Core routers stateless; edge routers maintain aggregate state
• Traffic monitoring, admission control, traffic policing
for individual flows performed at the edge
– Access routers have smaller routing tables; experience lower
aggregation of traffic relative to backbone routers
– Most congestion (packet loss/delay) happens at edges
Traffic-Matrix Admission Control
[Diagram: a new flow Rnew enters host network A at IR-s, crosses POPs 1
and 2, and exits at ER-d toward host network B; traffic monitors sit at
the edge routers; the CH returns Accept or Reject]
• Mods to edge routers
– Traffic monitors passively measure aggregate rate of
existing flows, M(s,d)
– IR-s forwards control messages (Request/Accept/Reject)
between CH and host/proxy
– Edge routers estimate traffic demand distributions, D(s,:),
and report to the CH
• CH
– Leverages knowledge of topology and traffic matrix to
make admission decisions
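A sketch of the resulting admission test at the CH, assuming per-pipe trunk reservations R and measured aggregates M; the names, units, and simple threshold form are our assumptions.

```python
# Sketch of the traffic-matrix admission test at the CH. M holds the
# passively measured aggregate rates M(s,d); R holds the current trunk
# reservations per ingress-egress pipe. Names and units are assumed.

M = {("IR-s", "ER-d"): 85.0}    # measured aggregate rate (Mb/s)
R = {("IR-s", "ER-d"): 100.0}   # reserved trunk capacity (Mb/s)

def admit(src: str, dst: str, r_new: float) -> bool:
    """Accept the new flow only if the measured aggregate plus the
    requested rate still fits in the pipe's trunk reservation."""
    pipe = (src, dst)
    return M.get(pipe, 0.0) + r_new <= R.get(pipe, 0.0)

# IR-s forwards the Request to the CH, which replies Accept/Reject:
decision = "Accept" if admit("IR-s", "ER-d", 10.0) else "Reject"
```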
Group Policing for Malicious
Flow Detection
[Diagram: on an accepted Request, the CH assigns flow identifiers and
updates token bucket filters (TBFs); the traffic policer at IR-s
aggregates flows into a TBF per ingress group FidIn, the policer at
ER-d into a TBF per egress group FidEg]
• CH assigns Fid if the flow is admitted
– Let FidIn = x, FidEg = y
• Traffic policer at IR or ER only maintains the total allocated
bandwidth of the group (aggregate state), not per-flow
reservation status
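A sketch of such a per-group token bucket filter at an edge router; the bucket depth, units, and refill policy are assumptions, but the key property matches the slide: the policer keeps only the group's aggregate allocation, no per-flow state.

```python
# Sketch of per-group token-bucket policing at an edge router. One TBF
# per group Fid; its rate is the group's total admitted bandwidth, so
# the policer holds aggregate state only, never per-flow reservations.
# Bucket depth and units are assumptions.

import time

class GroupTBF:
    def __init__(self, rate_bps: float, depth_bytes: float):
        self.rate, self.depth = rate_bps, depth_bytes
        self.tokens, self.last = depth_bytes, time.monotonic()

    def conforms(self, pkt_bytes: int) -> bool:
        """Refill tokens at the group's aggregate rate; a packet that
        finds too few tokens marks the group as exceeding its share."""
        now = time.monotonic()
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate / 8)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False

policers = {"x": GroupTBF(rate_bps=10e6, depth_bytes=15000)}  # FidIn = x
```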
Service Composition

• Assumptions
– Providers deploy services throughout network
– Portals constructed via service composition
• Quickly enable new functionality on new devices
• Possibly through SLAs
– Code is initially non-mobile
• Service placement managed: fixed locations, evolves slowly
– New services created via composition
• Across service providers in wide-area: service-level path
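A minimal sketch of composition under these assumptions, treating each component service as a byte-stream transformer (our simplification); the example chain anticipates the email/text-to-speech scenario on the next slide.

```python
# Sketch of service composition: independently deployed component
# services chained into one service-level path. The uniform byte-stream
# interface is a simplification; components are hypothetical.

from typing import Callable

Service = Callable[[bytes], bytes]

def compose(*services: Service) -> Service:
    """Build an end-to-end service by piping each component's output
    into the next, possibly across different providers' clusters."""
    def path(data: bytes) -> bytes:
        for svc in services:
            data = svc(data)
        return data
    return path

# Hypothetical components hosted by two providers:
fetch_email = lambda req: b"you have 3 new messages"
text_to_speech = lambda text: b"<audio>" + text + b"</audio>"

email_by_phone = compose(fetch_email, text_to_speech)
```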
Service Composition
[Diagram: composed paths across providers -- a video-on-demand server
(Provider A) feeding a transcoder (Provider R) to a cellular phone, and
an email repository (Provider B) feeding replicated text-to-speech
instances (Provider Q) to a thin client]
Architecture for Service
Composition and Management
[Layered diagram:]
• Application plane: composed services
• Logical platform: overlay network -- peering relations, network
performance, service location, service-level path creation, failure
detection/handling/recovery
• Hardware platform: service clusters, Internet
Architecture
[Diagram: source-to-destination path over peered service clusters;
peering provides monitoring & cascading; a service cluster is a compute
cluster capable of running services]
• Overlay nodes are clusters
– Compute platform
– Hierarchical monitoring
– Overlay network provides context for service-level path
creation & failure handling
Service-Level Path Creation
• Connection-oriented network
– Explicit session setup plus state at intermediate nodes
– Connection-less protocol for connection setup
• Three levels of information exchange
– Network path liveness
• Low overhead, but very frequent
– Performance Metrics: latency/bandwidth
• Higher overhead, not so frequent
• Bandwidth changes only once in several minutes
• Latency changes appreciably only once an hour
– Information about service location in clusters
• Bulky, but does not change very often
• Also use independent service location mechanism
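The three tiers can be summarized as refresh periods; the concrete values below are assumptions chosen only to match the stated time-scales.

```python
# Assumed refresh periods matching the stated time-scales: liveness is
# cheap and frequent; latency/bandwidth metrics change over minutes to
# hours; service-location tables are bulky but nearly static.
EXCHANGE_PERIOD_SEC = {
    "path_liveness": 1,          # low overhead, very frequent
    "latency_bandwidth": 300,    # bandwidth shifts over several minutes
    "service_location": 3600,    # bulky, rarely changes
}
```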
Service-Level Path Creation
• Link-state algorithm for info exchange
– Reduced measurement overhead: finer time-scales
– Service-level path created at entry node
– Allows all-pair-shortest-path calculation in the graph
– Path caching
• Remember what previous clients used
• Another use of clusters
– Dynamic path optimization
• Since session-transfer is a first-order feature
• First path created need not be optimal
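A sketch of path creation at the entry node: with full link-state information, Dijkstra's algorithm over the overlay graph suffices. The graph, latency weights, and latency-only metric are illustrative; a real metric would mix latency, bandwidth, load, and service location.

```python
# Sketch of service-level path creation at the entry node. With full
# link-state information, each node can run Dijkstra over the overlay
# graph (edge weights = measured latency). Assumes dst is reachable.

import heapq

def shortest_path(graph: dict, src: str, dst: str) -> list[str]:
    """graph[u] = {v: latency_ms}; returns the node sequence src..dst."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical overlay with two middle clusters:
overlay = {"entry": {"m1": 10, "m2": 25}, "m1": {"exit": 12},
           "m2": {"exit": 5}, "exit": {}}
route = shortest_path(overlay, "entry", "exit")  # ['entry', 'm1', 'exit']
```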
Session Recovery: Design Tradeoffs
• End-to-end:
– Pre-establishment possible
– But, failure information
has to propagate
– Performance of alternate
path could have changed
• Local-link:
– No need for failure information to propagate
– But, additional overhead
[Diagram: the logical platform again -- finding entry/exit, service
location, service-level path creation, overlay network performance,
failure detection/handling/recovery]
The Overlay Topology:
Design Factors
• How many nodes?
– Large number of nodes implies reduced latency overhead
– But scaling concerns
• Where to place nodes?
– Close to edges so that hosts have points of entry and
exit close to them
– Close to backbone to take advantage of good connectivity
• Who to peer with?
– Nature of connectivity
– Least sharing of physical links among overlay links
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
Testbeds at Different Scale
• Room-scale
– Bluetooth devices working as ensembles, cooperatively
sharing bandwidth within microcell
– Inherent trust, but finer-grained, intelligent, active
allocation as opposed to etiquette rules
– How lightweight? Too heavyweight for Bluetooth?
• Building-scale
– Multiple wireless LAN “operators” in building
– Experiment with “evil operators”; third party audit
mechanisms to determine offender
– GoN offers alternative telephony, dynamic allocation of
frequencies/time slots to competing/confederating providers
Testbeds at Different Scale
• Campus-scale
– Departmental WLAN service providers with
overlapping coverage out of doors
• Regional-scale
– Possible collaborations with AT&T Wireless
(NTTDoCoMo), PBMS, Sprint?
Presentation Outline
• Motivation
• Project SAHARA
• Initial Investigations
• Testbeds
• Summary and Conclusions
Summary
• Congestion Pricing, Clearinghouse, and Service
Composition are first attempts at service architecture
components
• Next steps
– Generalization to multiple service providers
– Introduction of market-based mechanisms: congestion
pricing, auctions
– Composition across confederated service providers
– Trust management infrastructure
– Understand peer-to-peer confederation formation vs.
hierarchical overlay brokering
Conclusions
• Support for multiple service providers must
be retrofitted onto the original Internet
architecture
• The telephony architecture is a better developed
model of multiple service providers &
peering, but with longer-lived agreements and
fewer providers
• Need support in a more dynamic
environment, with larger numbers of
service providers and/or service instances
