MCN D3.2 Final PDF
Workpackage: WP3
Dissemination level: RE
Status: Draft
Version: 1.0
List of reviewers:
Santiago Ruiz (STT)
Florian Antonescu (SAP)
Paolo M. Comi (ITALTEL)
In general, the technologies demonstrated here were evaluated, implemented and enhanced using an Agile
methodology. WP3 uses a Scrum-based process in which Sprints are organized. Please note that some
services, such as the Load Balancer, are built upon existing technologies, and the work carried out is
mostly about evaluating those technologies. Other services build upon existing technologies and have been
enhanced in the past Sprints, such as the DNS service. Others represent new developments, such as the
work done on the CC. Each section details which parts are new developments and which are enhancements.
In addition, some services, such as the Analytics service, are not part of this M18 prototype deliverable
and will be enhanced over the next Sprints. Wherever possible, the solutions presented in this
deliverable were also deployed on the testbeds to verify their functioning.
2.1.7 Roadmap
The following sprints, as defined in (DNSAAS 2014), will focus on delivering the following features
up to M27:
Implement framework for DNSaaS performance metrics.
Implementation of scaling algorithms for vertical and horizontal scaling.
Definition of templating mechanisms with OpenStack Heat.
Implement support for Name Authority Pointer (NAPTR) records.
Add support for Designate API V2.
Implement Load Balancing for DNS.
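To illustrate the scaling item above, the following is a minimal sketch of a query-rate-driven horizontal-scaling decision. The threshold of queries per second that a single DNS server instance can sustain, and the function name, are assumptions for illustration only, not DNSaaS internals:

```python
# Hypothetical sketch of a horizontal-scaling decision for DNSaaS.
# The queries-per-instance capacity is an illustrative assumption.

def scale_decision(queries_per_second, current_instances,
                   qps_per_instance=5000, min_instances=1):
    """Return a scaling action and the number of DNS instances needed."""
    # Ceiling division: each instance sustains qps_per_instance queries/s.
    needed = max(min_instances,
                 -(-queries_per_second // qps_per_instance))
    if needed > current_instances:
        return ("scale-out", needed)
    if needed < current_instances:
        return ("scale-in", needed)
    return ("hold", current_instances)
```

A real implementation would feed this decision from the DNSaaS performance metrics framework and act on it through OpenStack Heat templates.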
2.2.7 Roadmap
Currently no further activities are planned for the next sprints. Inputs from services on requirements for
the LB will, however, be accounted for where possible.
[Figure: OpenDaylight integration architecture — Neutron API, Neutron, OVSDB / OpenFlow, OpenFlow devices.]
Note that official support for OpenDaylight in OpenStack was only introduced with the first OpenStack
Icehouse release (17 April 2014).
2.3.7 Roadmap
The following sprints, as defined in (TNET 2014), will focus with respect to intra-DC connectivity
on:
Support monitoring functionalities for network service
o Provide network monitoring (as described in (D3.1 2013 p. 1)) from OpenDaylight –
main goal is to contribute to the OpenDaylight project.
Provide QoS features to network services
o Support of QoS on OpenStack Neutron (e.g. dedicated bandwidth between two virtual
machines) - main goal is to foster and contribute to the official support.
o Support of QoS on OpenDaylight Neutron application - main goal is to foster and
contribute to the official support.
Definition of Models for SFC
o Support of chaining on OpenStack Neutron - main goal is to foster and contribute to the
official support.
o Support of chaining on OpenDaylight Neutron application - main goal is to foster and
contribute to the official support.
These sprints are expected to present progress results in the upcoming deliverable in month 27 (M27).
It should be noted that while rate-limit and DSCP parameters correspond to real configuration that can
be directly translated to the configuration of the physical devices, this is not the case for parameters like
delay and jitter. However, their indication on the user side can lead to other types of resource allocation
or configuration decisions at the orchestration level (e.g. VMs to be interconnected with low delay can
be placed close to each other).
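The distinction drawn above can be sketched as a simple split of requested QoS parameters into directly enforceable configuration and orchestration-level hints. The parameter names and the function are illustrative, not part of the actual API:

```python
# Sketch: rate-limit and DSCP translate directly to device configuration,
# while delay and jitter only serve as hints for orchestration decisions
# (e.g. placement). Parameter names are illustrative assumptions.

ENFORCEABLE = {"rate-limit", "dscp"}   # translatable to device config
HINTS = {"delay", "jitter"}            # orchestration-level hints only

def split_qos_parameters(params):
    """Split requested QoS parameters into enforceable config and hints."""
    config = {k: v for k, v in params.items() if k in ENFORCEABLE}
    hints = {k: v for k, v in params.items() if k in HINTS}
    return config, hints
```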
2.4.7 Roadmap
Primarily, the service should support the two use cases already outlined in subsection 2.4.1. For
the coming months up to M27, the sprints will therefore focus on:
Traffic offloading using the Re-direction service and EPC
Implementation of the Follow-Me Cloud.
2.5 Performance
The following sections describe work done on the topic of Performance in the Infrastructure
Foundations. The work described has been carried out by Task 3.2.
[Figure: Performance Testing methodology — (IV) identify inter-service QoS metrics and metrics from diagram edges, then generate tests for the identified parameters (dstat, Ceilometer, vmstat, profiler); (V) execute tests: configure slave VMs for automation and run tests periodically (cron, Jenkins); (VI) process test data: prepare test output data for processing (nmon2rrd, Logstash scripts, spreadsheet), run analytics on the data, graph the data (Kibana, spreadsheet) and identify when SIC and inter-service QoS has been breached.]
Having identified the test case scenarios, tests could be written programmatically to target the virtualised
services using their APIs. Alternative test strategies for executing the test case scenarios could involve
using profilers inherent to the SIC framework, as is the case with OAI. This yields performance
metrics for the services themselves. Additional monitoring tools should be used to log infrastructural
performance, i.e. VM CPU, RAM and network metrics.
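The final step of the methodology, identifying when QoS has been breached, can be sketched as a scan of logged per-sample VM metrics against agreed thresholds. The metric names and threshold values are assumptions for illustration:

```python
# Illustrative sketch of the "identify when QoS has been breached" step:
# scan per-sample infrastructure metrics (as nmon/dstat would log them)
# against thresholds. Metric names and limits are assumptions.

def find_breaches(samples, thresholds):
    """Return (sample_index, metric, value) for every threshold violation."""
    breaches = []
    for i, sample in enumerate(samples):
        for metric, limit in thresholds.items():
            value = sample.get(metric)
            if value is not None and value > limit:
                breaches.append((i, metric, value))
    return breaches
```

In practice the samples would come from the processed nmon/dstat output rather than in-memory dictionaries.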
An example of a Jenkins job that runs performance profiling on OAI, using nmon to monitor system load,
is:
cd ~
nmon -F ./results/nmon_rslt/dl_stats_$(date '+%Y%m%d_%H%M%S').nmon -s 3 -c 30
cd /home/openair/openair4G/openair1/SIMULATION/LTE_PHY
./dlsim -P -a -D -B25 -m16 -n3000 -s30 > \
    /home/openair/results/oai/phy_dl_5_m16_s30_$(date '+%Y%m%d_%H%M%S')
This starts nmon, a VM performance-monitoring tool, followed by dlsim, a performance profiler for
OAI. The combination of the two tools allows both the performance of the VM and the performance of
OAI itself to be monitored and logged.
Development
2.5.7 Roadmap
The steps outlined in the Performance Testing methodology diagram have been transformed to JIRA
‘swimlanes’ within a Task 3.2 project. This offers Service Owners a project-management framework
which gives progress visibility to all partners.
The most relevant upcoming topics are summarized as follows:
Since very few Service Owners have conducted performance testing of their SICs, these tests are
still marked as outstanding issues and are due to be completed in upcoming sprints.
Performance-optimisation work will follow once performance testing of SICs and Composite
Services has been conducted.
More details on the roadmap of Task 3.2 can be found in (TPERF 2014).
2.6 Monitoring-as-a-Service
The following sub-sections describe the components which enable the Monitoring capabilities. The
work presented here represents new developments and was delivered out of Task 3.3.
Development
2.6.7 Roadmap
All tasks covered in Task 3.3 have been transformed to JIRA ‘issues’ in order to follow the Scrum
methodology as closely as possible (TMAAS 2014). A project entitled TMAAS has been instantiated,
which is public to the MCN consortium.
All issues are managed in monthly sprints; backlogs store future work items for upcoming sprints.
The most relevant upcoming topics are summarized as follows:
• Integration of the monitoring adapters of other reporting MCN services, e.g. EPCaaS,
IMSaaS and RANaaS, into MaaS.
• Integration of MaaS with other consuming MCN services requesting monitoring data such
as AaaS, RCBaaS and SLAaaS.
• Completing the MCN life cycle phases for MaaS.
• Validation and tests of the monitoring system.
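The first integration item above can be sketched as a service-side monitoring adapter handing metrics to a MaaS client. Both class names and the client interface shown here are hypothetical, for illustration only:

```python
# Hedged sketch of a reporting service's monitoring adapter feeding MaaS.
# The MaaSClient interface is a hypothetical stand-in, not the real API.

class MaaSClient:
    """Hypothetical client buffering metric records destined for MaaS."""
    def __init__(self):
        self.buffered = []

    def push(self, service, metric, value):
        self.buffered.append({"service": service,
                              "metric": metric,
                              "value": value})

class ServiceMonitoringAdapter:
    """Adapter a service such as EPCaaS would use to report into MaaS."""
    def __init__(self, service_name, client):
        self.service_name = service_name
        self.client = client

    def report(self, metrics):
        for name, value in metrics.items():
            self.client.push(self.service_name, name, value)
```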
2.7 Analytics-as-a-Service
The Analytics service is not part of the M18 prototype. Future work will include the detailed
architecture, a prototype implementation and first algorithms to analyse data. This data (traces) should be
collected from the services which are being cloud-enabled on the MCN architecture, and will be reported in the upcoming deliverables.
2.8.3.2.1 Sample SO
To verify the overall architecture and the integration of the SDK with the development environment shown
above, a sample SO was developed (for more details on SOs see section 3.1). It takes a simple service
described in the AWS CloudFormation template language and deploys it using the SDK. The SO itself
is deployed through the Northbound API. The sample SO exposes a simple OCCI-like interface,
which can be accessed by the SM and is used to trigger the deployment, provisioning and disposal
operations. The UML class diagram in Figure 27 shows the sample SO.
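The life-cycle interface just described can be sketched as follows. Method and state names are illustrative, not the actual SDK API:

```python
# Minimal sketch of the life-cycle operations a sample SO exposes to the
# SM (deploy / provision / dispose). Names and states are illustrative.

class SampleServiceOrchestrator:
    def __init__(self, template):
        self.template = template      # e.g. a CloudFormation document
        self.state = "initialised"

    def deploy(self):
        # Would hand the template to the CloudController via the SDK.
        self.state = "deployed"

    def provision(self):
        # Would configure the components of the deployed service instance.
        self.state = "provisioned"

    def dispose(self):
        # Would release all resources held by the service instance.
        self.state = "disposed"
```

In the real SO these operations would be triggered through the OCCI-like interface by the SM rather than called directly.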
Runtime
Development
In this example we have three hosts (host1, host2, host3) that are connected to a single network, each of
them with a network interface (host1_port, host2_port, host3_port). At the network level, we have
defined a QoS resource qos1, so that the maximum delay acceptable between any port attached to the
network is 4 ms (see qos_p3 resource) and the UDP traffic is limited to 1024 kbps on every port (see
qos_p1 resource and the associated classifier classifier_c2).
On the other hand, since we need a more restrictive value for the maximum delay between host1 and
host2, we have defined an additional QoS resource qos2, which refers to the QoS parameter resource
qos_p2. Here the classifier classifier_c1 specifies that the QoS requirement needs to be enforced only
for the subset of traffic destined to the network interface of host2. Finally, the QoS resource qos2
is attached to host1_port. The resulting topology is shown in Figure 30.
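The example above can be captured as plain data for illustration. The resource names follow the text; the structure itself is illustrative and not the actual API representation:

```python
# The QoS topology from the text as a plain data structure (illustrative;
# resource names follow the example, the layout is an assumption).

topology = {
    "network": {
        "ports": ["host1_port", "host2_port", "host3_port"],
        "qos": "qos1",
    },
    "qos": {
        "qos1": ["qos_p3", "qos_p1"],   # network-wide QoS resource
        "qos2": ["qos_p2"],             # stricter host1 -> host2 delay
    },
    "qos_params": {
        "qos_p3": {"max-delay-ms": 4},
        "qos_p1": {"rate-limit-kbps": 1024, "classifier": "classifier_c2"},
        "qos_p2": {"classifier": "classifier_c1"},
    },
    # qos2 is attached only to host1's port
    "attachments": {"host1_port": "qos2"},
}
```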
STG Editor User Manual: https://git.mobile-cloud-networking.eu/cloudcontroller/stg_editor/UserManual_v0.doc (living document)
2.9.7 Roadmap
At M18 a first working prototype will be demonstrated, capable of generating actual HOT templates out
of SIC-based graphs. At M21 the StgEditor is expected to be completed, with a number of examples.
2.10 Database-as-a-Service
Database-as-a-Service offers storage capabilities to SO and SIC instances. This service is built on
existing technologies and delivered out of Task 3.4.
Runtime
2.10.7 Roadmap
The only open item is to test the integration of OpenStack Trove with OpenStack Heat, supporting the full
orchestration; this will be completed once OpenStack Icehouse is released.
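A hedged sketch of what such a Heat-orchestrated Trove instance could look like in a HOT template. The resource type and property names follow later OpenStack releases and would need to be verified against the Icehouse integration:

```yaml
# Hedged sketch: HOT fragment letting Heat orchestrate a Trove database
# instance. Property names per later OpenStack releases; to be verified
# once the Icehouse-based integration is available.
heat_template_version: 2013-05-23
resources:
  dbaas_instance:
    type: OS::Trove::Instance
    properties:
      name: mcn-dbaas        # illustrative instance name
      flavor: m1.small
      size: 5                # volume size in GB
      databases:
        - name: so_state     # illustrative database name
```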
[Figure: RANaaS instance — per-RAT BBU pool with traffic generator, L3 eNB, MME/S-GW and agents.]
BBU Base Band Unit
DNS Domain Name System
eNB evolved NodeB
L3 eNB Layer 3 eNB
MME Mobility Management Entity
RAT Radio Access Technology
S-GW Serving Gateway
[Figure: Use-case diagram — the RAN + EPC Provider designs RAN + EPC via the Service Orchestrator, with file editor, user generator and eNodeB addressing (<<include>>, <<extend>> relations).]
[Figure: Protocol modules and stacks — S1AP and X2AP over SCTP, NAS, GTP routing (routing_gtpu, routing_raw), MySQL console; user plane: Application over GTP-U (GPRS Tunnelling Protocol) over UDP over IP.]
[Figure: Service Orchestrator — REST API and HTTP server delivering the configuration of the L3 eNB (generic EPC L3-eNB binary).]
Please note that whenever a relation between network-function units is established (joined) or dismantled
(departed), corresponding scripts are executed on each side of the created/deleted relation.
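This relation behaviour can be sketched as follows. The hook-name pattern mirrors the deployment figures in this section; the dispatch function itself is illustrative:

```python
# Sketch of relation events: on join/depart, the corresponding hook
# script runs on each side of the relation. Dispatch logic is
# illustrative; hook names follow the <remote>-relation-<event> pattern.

def run_hooks(unit_a, unit_b, event, executed):
    """Record the hook that each side of a relation event would execute."""
    for local, remote in ((unit_a, unit_b), (unit_b, unit_a)):
        executed.append((local, "%s-relation-%s" % (remote, event)))
```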
2.11.5.1 OpenAirInterface
OpenAirInterface (Eurecom 2013) is an open-source hardware/software development platform
developed by Eurecom as an emulator for the LTE RAN (Nikaein 2012). It combines simulation or
emulation of the physical layer with emulation of the MAC and higher layers, but there are also versions
allowing work with actual PHY-layer equipment. Currently, Eurecom is working on adding an EPC to the
emulator, which is expected to be released in 2014.
It provides several emulators with corresponding profiling tools: dlsim and ulsim implement only the
physical-layer processing, while oaisim implements the complete stack, generating a given number of UEs
and an eNB and providing their IP addresses, which makes it possible to inject traffic using traffic
generators, such as OTG, IPERF or D-ITG, that is processed entirely as on real equipment, executing all
layers of the UE and eNB protocol stacks.
OAI has been installed on several machines: in particular on machines of CloudSigma, a public cloud
provider of virtual machines (VMs) on a shared physical infrastructure, and on a VM at the University
of Bern, where demands towards the physical infrastructure are lower and the server has higher
specs. OAI is used to profile, for specific allocated radio resources and traffic/service usage, the
computation resources needed by the various BBU components (PHY cell, UP, CP) to satisfy the
LTE 3GPP requirements in terms of latency.
Several limitations and errors have been identified which do not yet allow all of OAI's stated
capabilities to be used. In particular, it does not support multi-core processing (although it is
multi-threaded), which strongly limits its performance. Operation with traffic sources also exhibits
several errors that require improvements to OAI. These faults are expected to be resolved in the
mid-term, in order to run the intended evaluations and to enable final conclusions on the feasibility of
eNBs running in the cloud.
Several improvements have been made to the code and submitted as contributions to the open-source
community in order to enable the profiling of processing resources.
The OpenAir Scenario Descriptor (OSD) is a configuration dataset composed of four main parts,
which represent the basic description of an OpenAirInterface (OAI) emulation scenario. It is part of the
OAI emulation methodology for describing scenarios in XML format. This allows repeatable and
controlled experiments to be executed without running simulations from the command line and
setting parameters manually. As the parameter set was rather limited, additional parameters required for
the experimentation work were defined, implemented and contributed back to the OAI community.
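A hedged sketch of what an OSD-style scenario file could look like. The element names below are illustrative stand-ins for the four main parts described in the text, not the exact OAI schema:

```xml
<!-- Hedged sketch of an OSD-style XML scenario. Element names are
     illustrative of the four main parts, not the exact OAI schema. -->
<OSD>
  <ENVIRONMENT_SYSTEM> <!-- carrier frequency, channel model, ... -->
  </ENVIRONMENT_SYSTEM>
  <TOPOLOGY>           <!-- number of eNBs and UEs, mobility model -->
  </TOPOLOGY>
  <APPLICATION>        <!-- traffic generators and their parameters -->
  </APPLICATION>
  <EMULATION>          <!-- duration, logging, random seeds -->
  </EMULATION>
</OSD>
```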
The OpenAirInterface Traffic Generator (OTG) is a tool used for the generation of realistic application
traffic for the performance evaluation of emerging networking architectures. It accounts for
Development
[Figure: Deployment and configuration sequence — install, dns-relation-joined, mme-relation-joined and user-generator-relation-joined hooks, then start; S1AP S1 Setup Request; REST commands and 3GPP standard signalling; install, REST commands, enodeb-relation-joined, start.]
2.11.7 Roadmap
As also detailed in (TRAN 2014), the following sprints will focus for M21 on:
Integrating the MCN framework with a RANaaS Service Manager and Service Orchestrator.
Development
3.1.7 Roadmap
The following sprint, up to M21, will cover some basic changes to the generic sample SO. The
Application class will be extracted and put into the SDK. This will generalize the interface of SO
instances; the interface will be built upon the OCCI specification. No other actions are planned.
Development
3.2.7 Roadmap
The upcoming sprints, as defined in (SM 2014), will focus on the following user stories and deliver
them by M27 and M30:
Integrate support services that support both the technical and the business service manager
Separate the SM into BSM and TSM components
Implement the BSM-to-BSM components to support inter-SM communication
Implement asynchronous request processing to improve the perceived responsiveness of the SM
Extend the administration capabilities of the SM (e.g. remote uploads of service bundles)
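The asynchronous-request user story above can be sketched as follows: the SM accepts a request immediately and processes it in the background, so the caller is not blocked. Class and method names are illustrative, not the SM's actual design:

```python
# Sketch of asynchronous request processing in an SM-like component:
# submit() returns immediately (cf. HTTP 202 Accepted) while a worker
# thread performs the actual work. Names are illustrative.
import queue
import threading

class AsyncRequestProcessor:
    def __init__(self):
        self.jobs = queue.Queue()
        self.results = {}
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def submit(self, job_id, work):
        """Queue a request and return to the caller immediately."""
        self.jobs.put((job_id, work))
        return "accepted"

    def _run(self):
        while True:
            job_id, work = self.jobs.get()
            self.results[job_id] = work()
            self.jobs.task_done()

    def wait(self):
        """Block until all queued requests have been processed."""
        self.jobs.join()
```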
D3.1. (2013) Infrastructure Management Foundations – Specifications & Design for MobileCloud
framework, MobileCloud Networking Project
Dimitrova, D. (2014) Performance analysis of eNodeB for porting to the cloud, MobileCloud
Networking, https://svn.mobile-cloud-
networking.eu/svn/mcn/WP3/T3.5_WirelessCloud/Deliverables/D3.2/MCN-WP3-UBern-
D3.2_performance.docx
ETSI. (2013) ETSI GS NFV 002: Network Functions Virtualisation (NFV): Architectural Framework,
http://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.01.01_60/gs_NFV002v010101p.pdf
Ferreira, L., Branco, M., and Correia, L. M. (2013a) Traffic Generation, MobileCloud Networking,
https://svn.mobile-cloud-
networking.eu/svn/mcn/WP3/T3.5_WirelessCloud/Deliverables/D3.2/MCN-WP3-INOV-091-
01-Traffic_Generation_for_D3.2.docx
Ferreira, L., Branco, M., and Correia, L. M. (2013b) Radio and Cloud Resources Management,
MobileCloud Networking, https://svn.mobile-cloud-
networking.eu/svn/mcn/WP3/T3.5_WirelessCloud/Deliverables/D3.2/MCN-WP3-INOV-090-
02-RCRM_for_D3.2.docx
Nyren, R., Edmonds, A., Papaspyrou, A., and Metsch, T. (2011) Open Cloud Computing
Interface - Core, Open Grid Forum, http://ogf.org/documents/GFD.183.pdf
Van Rossum, G., Warsaw, B., and Coghlan, N. (2001) Style Guide for Python Code, Python Software
Foundation, http://legacy.python.org/dev/peps/pep-0008/
Sousa, B. (2014) Towards a High Performance DNSaaS Deployment, Presented at the 6th
International Conference on Mobile Networks and Management