
Network Call Performance Insight

BVS Department
Summary

• ISUP/BICC introduction
• Insight introduction
• Monitored Protocols - Interfaces
• Dimensions and KPIs
• Materialized Views
• Description of available classes
• Standard Reports
• Workflows
• Report Optimization

2
ISUP/BICC Introduction – ISUP

ISDN user part (ISUP)


The ISDN User Part (ISUP) covers the signaling functions for the
control of calls, for the processing of services and features, and for
the administration of circuits in ISDN. ISUP has interfaces to
the Message Transfer Part and the Signaling Connection Control Part
(SCCP) for the transport of message signal units, and can
use SCCP functions for end-to-end signaling.
The structure of the ISUP message is shown below:

3
ISUP/BICC Introduction – ISUP

• The routing label comprises the destination point code, the
originating point code and the signaling link selection.

• The circuit identification code (CIC) assigns the message to a
specific circuit. A circuit identification code is permanently assigned
to each circuit.

• The message type defines the function and the format of an ISUP
message. There are different message types for the call set-up, the
call release and the administration of circuits.

4
ISUP/BICC Introduction – ISUP

• The fixed mandatory part of the ISUP message contains
parameters which must be present for a certain message type and
which have a fixed length. For the IAM these are, for example,
parameters for:
  • the type of connection (e.g. connection via a satellite link)
  • the requirements for the transmission link (e.g. 64 kbit/s end-to-end)
  • the requirements for the signaling system (e.g. ISUP end-to-end)
  • the type of the calling party (ISDN subscriber = normal subscriber)

• The variable mandatory part of the ISUP message contains
parameters of variable length. An example of such a parameter
for the IAM is the directory number, or at least the part of the
number which is required for routing to the terminating network node.

5
ISUP/BICC Introduction – BICC

• The Bearer-Independent Call Control (BICC) is a signaling protocol
based on N-ISUP that is used for supporting narrowband ISDN
services over a broadband backbone network. BICC is designed to
interwork with existing transport technologies. BICC is specified in
ITU-T rec. Q.1901.

• BICC signaling messages are nearly identical to those in the ISDN
User Part (ISUP); the main difference is that the narrowband
circuit identification code (CIC) has been modified. The BICC
architecture consists of interconnected serving nodes that provide
the call service function and the bearer control function.

• The Third-Generation Partnership Project (3GPP) has included
BICC CS 2 in the Universal Mobile Telecommunications System
(UMTS) Release 4.

6
MasterClaw Architecture
Competitive Next Generation Architecture

• Presentation and Reporting
  – Integrated application suite
  – Personalized Web interface
  – Real-time and historical
  – Seamless drilldown capabilities

• Data Processing (KPI/KQI, xDRs)
  – End-to-end and cross-domain correlation
  – Powerful data warehouse
  – Open interfaces for integration with external OSS systems (WAN/LAN)

• Data Acquisition
  – Any network
  – Gathering and storage of data
  – First-level data correlation and payload analysis

7
Anritsu Confidential Information
Monitored Protocols - Interfaces

8
Monitored Protocols - Interfaces

9
Network Call Performance DWH
• Network Call Performance Insight focuses on the ISUP and BICC part of
the network. It provides detailed call-handling information.

• BICC support was recently added for compatibility with the Release 4
mobile core architecture.

• It is based on Data Records coming from the MasterClaw probes
monitoring the ISUP and BICC interfaces of the operator's network.
Both internal links and links to interconnect carriers can be monitored.

• Target groups in the operator's organization:
  – Network Operation and Maintenance, for troubleshooting, daily reports
    and general checks
  – Network Planning, for load and resource investigations as well as trend
    analysis for traffic load
  – Inter-Carrier Handling, for registration of interconnect traffic
    performance, inter-carrier billing and SLA compliance
  – Marketing, for user behaviour and service performance
  – The interconnect-focused scenarios are applicable in fixed as well as in
    mobile networks.

10
Dimensions and KPIs
• The principal KPIs provided by this application are:
  » Answer to Seizure Ratio (ASR)
  » Network Efficiency Ratio (NER)
  » Call setup time
  » Release value distribution
  » Traffic load on signaling links
  » Traffic load on voice trunks
  » Major Accounts Customer (MAC) performance

• The generic reports and counter aggregation focus on:
  » Country
  » Operator
  » Called number prefix
  » ISUP release cause
  » Switching equipment

11
Fact and Dimension tables

• A Data Warehouse is a large database whose data is organised in a
structure commonly known as a 'star schema'

• A star schema consists of one 'fact' table and several dimension tables

• The fact table, the centre of the star schema, contains a list of the
collected events. A fact table thus potentially contains millions of rows

• A dimension table is used to describe attributes of the given events.
As each event may be characterised in many ways, there are usually
several dimension tables (dimensions)

• All MasterClaw DWHs use these star-based data schemas
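The star layout can be sketched with a tiny in-memory database. The table and column names below are simplified illustrations, not the actual QIDW schema:

```python
import sqlite3

# Minimal star-schema sketch: one fact table and one dimension table.
# Table/column names are invented for illustration only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_prefix (prefix_id INTEGER PRIMARY KEY, prefix_name TEXT);
CREATE TABLE fact_cdr (prefix_id INTEGER, answered INTEGER, attempts INTEGER);
INSERT INTO dim_prefix VALUES (1, 'Operator A'), (2, 'Operator B');
INSERT INTO fact_cdr VALUES (1, 80, 100), (1, 40, 50), (2, 30, 60);
""")
# A typical report query: aggregate the fact rows, grouped by a
# dimension attribute reached through the foreign-key join.
rows = con.execute("""
SELECT d.prefix_name,
       SUM(f.answered) * 100.0 / SUM(f.attempts) AS asr
FROM fact_cdr f JOIN dim_prefix d ON f.prefix_id = d.prefix_id
GROUP BY d.prefix_name ORDER BY d.prefix_name
""").fetchall()
print(rows)  # [('Operator A', 80.0), ('Operator B', 50.0)]
```

Reports against the real schema follow the same pattern: filter and group on dimension columns, aggregate counters from the fact table.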

12
DWH Star Schema - Dimensions

Fact table (centre of the star): QIDW_CDR_FACT_TB

Dimension tables: QIDW_TIME_TB, QIDW_LINKSET_TB, QIDW_SPC_TB,
QIDW_PREFIX_TB, QIDW_COUNTRY_TB, QIDW_RV_GROUP_TB, QIDW_NOA_TB,
QIDW_RELEASE_VALUE_TB, QIDW_MAC_TB, QIDW_TRUNK_GROUP_TB,
QIDW_CALL_SETUP_INT_TB, QIDW_CONV_TIME_INT_TB

13
ISUP CDR generation

• One ISUP CDR is generated for each IAM message
received/sent on the monitored linksets.
• The number of Gateway MSCs (GMSC) per operator is typically two
or more, located on different sites.
• Each STP uses linkset and inter-linkset load sharing at PDU level.

• MasterClaw can handle ISUP CDR generation from interconnect
signalling links with:
  – Link and linkset real load sharing (load sharing at PDU level)
    support
  – High load (up to 1 Erlang)
  – Correlation based on called/calling number, CIC and
    OPC/DPC
• Correlation of half-CDRs from different probes located on different
sites, without any backhauling
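A minimal sketch of how half-CDR matching on a shared correlation key might work. The key fields follow the correlation criteria listed above (CIC, OPC/DPC, called number), but the record layout itself is invented for illustration, not the actual MasterClaw format:

```python
# Match half-CDRs captured by two different probes on a shared key.
# Field names are illustrative only.
def correlate(half_cdrs_a, half_cdrs_b):
    index = {}
    for rec in half_cdrs_a:
        key = (rec["cic"], rec["opc"], rec["dpc"], rec["called"])
        index[key] = rec
    full_cdrs = []
    for rec in half_cdrs_b:
        key = (rec["cic"], rec["opc"], rec["dpc"], rec["called"])
        if key in index:
            # Merge the two halves into one full CDR.
            full_cdrs.append({**index[key], **rec})
    return full_cdrs

a = [{"cic": 7, "opc": 100, "dpc": 200, "called": "4522123", "iam_ts": 1.0}]
b = [{"cic": 7, "opc": 100, "dpc": 200, "called": "4522123", "rlc_ts": 9.5},
     {"cic": 8, "opc": 100, "dpc": 200, "called": "4522999", "rlc_ts": 3.0}]
print(correlate(a, b))  # one merged CDR; the unmatched half is left out
```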

14
Data flow in NCKPI DWH (one CDR file every 5 minutes)

• Probe / MC Server: qxdrs generates CDRs into the CSDR buffer and
  the CDR directory.
• MC DWH Server: the Acquisition service fetches the CDR files into the
  ingestion_buffer; ET transforms them into CDR.dat files in the
  dat_buffer (rejects go to the garbage_buffer); the Loader loads the
  .dat files into Oracle and the Cleaner removes old data.
• MC-Central Server: hosts the CDB.

15
Data flow in NCKPI DWH : Directories 1/2

Probe / MC Server: qxdrs writes the CDRs (via the CSDR buffer) to:
$QUEST7_ROOT/data/xdr/CDR_buf/
$QUEST7_ROOT/data/xdr/CDR/

16
Data flow in NCKPI DWH : Directories 2/2

MC DWH Server: buffers used by Acquisition, ET, Cleaner and Loader:
$QUEST7_ROOT/data/qidwh/etl/ingestion_buffer
$QUEST7_ROOT/data/qidwh/etl/garbage_buffer
$QUEST7_ROOT/data/qidwh/etl/dat_buffer
The CDR.dat files are produced in the dat_buffer and loaded into Oracle.

17
Data flow in NCKPI DWH : Update Dimensions

On the MC DWH Server, the dimension data in
$QUEST7_ROOT/data/qidwh/dim/ is updated from the Dataminer or from
the CDB on the MC-Central Server, using one of the following commands:
  qidwh updateAlarmDim
  qidwh updateDim
  qidwh updateCDBDim
  qidw start Sync

18
Services Description

• Acquisition: responsible for transferring xDR files from the generating
remote machine and placing them into the ingestion_buffer

• ET: responsible for taking the xDRs from the ingestion_buffer,
transforming them and putting the resulting .dat files into the dat_buffer

• Loader: responsible for managing the data of the DWH:
  » Loading data (from the .dat files in the dat_buffer) into the Oracle DBMS
  » Cleaning old data from the database (according to the on-line policy)

• Sync: responsible for synchronizing the CDB with certain dimensions in
the DWH

• Alarm-Server: generates the alarms according to the alarm configuration

• KPI-Provider: evaluates the KPIs in order to generate the dashboards

19
NCKPIDWH – VIP Analysis
Given the multitude of data available, NCKPI turns it into valuable
information for understanding the customer experience:

• Proactive customer experience management
• Reduces time spent on VIP issues
• SLA agreements can be used to demonstrate the quality promise
• Helps the organization become more customer centric

The analysis covers key accounts, individual MSISDNs and VIP individuals.
20
Description of available classes

• Time
• Measures
• Trunk Group classes
• Conversation Time
• Call Setup Time
• Prefix
• Called Number
• Calling Number
• PointCode
• First OPC/DPC
• Last OPC/DPC
• Release Value Group
• Country
• Nature of Address
• Linksets
• Major Account Customer (MAC)

21
Time

• TIME contains the dimensions
(e.g. Year, Quarter, Month, etc.)
introduced in order to group the
data by the time dimension

• The lowest aggregation time
dimension is 5 minutes

22
Time – Predefined Filters

• Hour between – prompts a window in which
you insert the time interval for the report
• Hour equal – you insert the hour
• Last 12 months – extracts the data for the
previous 12 months (e.g. if today's date is
August 10th 2007, the report retrieves the
data from August 1st 2006 to July 31st 2007)
• Last 3 months – extracts the data for the
previous 3 months (e.g. if today's date is
August 10th 2007, the report retrieves the
data from May 1st 2007 to July 31st 2007)
• Last month – extracts the data for the
previous month (e.g. if today's date is
August 10th 2007, the report retrieves the
data from July 1st 2007 to July 31st 2007)
• Last 7 days – extracts the data for the previous 7 days (e.g. if today's date is August 10th
2007, the report retrieves the data from August 3rd 2007 to August 9th 2007)
• Yesterday – extracts the data for the previous day (e.g. if today's date is August 10th
2007, the report retrieves the data from August 9th 2007 00:00 to August 10th 2007 00:00)
• Last hour loaded – extracts the data belonging to the last aggregated hour in the database
• Last 15 minutes loaded – extracts the data for the last 15 minutes loaded
• Last 5 minutes loaded – extracts the data for the last 5 minutes loaded
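The whole-month windows above ("Last month", "Last 3 months", "Last 12 months") can be reproduced with a short date computation. The function below is a hypothetical helper written for illustration, not part of the product:

```python
from datetime import date, timedelta

def last_months(today, n):
    """Whole-month window ending with the month before `today`."""
    first_of_this_month = today.replace(day=1)
    end = first_of_this_month - timedelta(days=1)   # last day of previous month
    start = end.replace(day=1)                      # first day of previous month
    for _ in range(n - 1):                          # walk back n-1 more months
        start = (start - timedelta(days=1)).replace(day=1)
    return start, end

# Matches the slide's examples for a report run on August 10th 2007:
print(last_months(date(2007, 8, 10), 1))   # 2007-07-01 .. 2007-07-31
print(last_months(date(2007, 8, 10), 3))   # 2007-05-01 .. 2007-07-31
print(last_months(date(2007, 8, 10), 12))  # 2006-08-01 .. 2007-07-31
```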

23
Measures 1/2
• Transactions – the number of ISUP transactions. Type of
aggregation: SUM.

• ASR – the ASR (Answer Seizure Ratio) is defined as:
  ASR = (seizures resulting in answer signal) / (total seizures) × 100

• ACR – the ACR (Answer Call Ratio) is calculated as:
  ACR = (calls resulting in answer signal) / (total calls) × 100

• NER – the NER (Network Efficiency Ratio) is defined as:
  NER = (seizures resulting in answer signal
         + User Busy (RV = 17)
         + Ring No Answer (RV = 16, 18, 19)
         + Terminal Rejects/unavailability (RV = 21, 27)) × 100 / seizures

• ABR – the ABR (Answer Bid Ratio) is defined as:
  ABR = (bids resulting in answer signal) / (total bids) × 100

• Call Setup Time – calculated as the time from the last SAM
to ACM. Type of aggregation: SUM, CNT

• Hold Time – the time between the IAM and RLC messages.
Type of aggregation: SUM, CNT

• Conversation Time – the time between the ANM and
REL/RLC messages.
Type of aggregation: SUM, CNT
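As a sketch, the ratio KPIs defined above reduce to simple counter arithmetic. The function names and sample counts below are illustrative only:

```python
def asr(answered_seizures, total_seizures):
    """Answer Seizure Ratio, in percent."""
    return 100.0 * answered_seizures / total_seizures

def ner(answered, user_busy, ring_no_answer, terminal_reject, total_seizures):
    """Network Efficiency Ratio: answered calls plus the failures the
    network is not responsible for (RV 17; 16/18/19; 21/27), as a
    percentage of total seizures."""
    good = answered + user_busy + ring_no_answer + terminal_reject
    return 100.0 * good / total_seizures

print(asr(450, 1000))                 # 45.0
print(ner(450, 200, 150, 50, 1000))   # 85.0
```

Note that NER can exceed ASR considerably, since user-caused failures still count as "network did its job".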

24
Bid, Seizures, Attempts

• Bid: an attempt to obtain a circuit in a circuit group or to a destination
• Seizure: a bid which succeeds in obtaining a circuit
• Attempt: as shown in the diagram, one attempt from PC1 towards PC3
  may consist of several bids and dialogues (dialog 1 from PC1 to PC2,
  dialog 2 from PC2 to PC3), each with its own release value
25
Measures

Message sequence: IAM → SAM → ACM → ANM → REL → RLC

• Response Time: IAM to ACM
• Waiting Time: ACM to ANM/REL
• Holding Time: IAM to RLC
• Conversation Time: ANM to REL/RLC

Conversation Time = Holding Time – Waiting Time – Response Time
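The timing identity can be checked with a small sketch over invented message timestamps (seconds from call start):

```python
# Per-call timing measures derived from ISUP message timestamps.
# The timestamp values are invented for illustration.
ts = {"IAM": 0.0, "ACM": 2.0, "ANM": 6.0, "RLC": 66.0}

response_time = ts["ACM"] - ts["IAM"]      # IAM to ACM
waiting_time = ts["ANM"] - ts["ACM"]       # ACM to ANM
holding_time = ts["RLC"] - ts["IAM"]       # IAM to RLC
conversation_time = ts["RLC"] - ts["ANM"]  # ANM to RLC

# The identity from the slide holds by construction:
assert conversation_time == holding_time - waiting_time - response_time
print(response_time, waiting_time, holding_time, conversation_time)
# 2.0 4.0 66.0 60.0
```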

26
Measures 2/2

• Traffic Load – defined as:
  Traffic Load = Σ Holding Time / Number of CICs
  Type of aggregation: SUM, CNT.
• Answered – the number of sequences containing ANM.
  Type of aggregation: SUM.
• Not Answered – the number of unanswered calls.
  Type of aggregation: SUM.
• ABR Dialogues – the number of dialogues with CT > 0 and RV
  in the ABR release value group.
  Type of aggregation: SUM.
• ASR Dialogues – the number of dialogues with CT > 0 and RV
  in the ASR release value group.
  Type of aggregation: SUM.
• NER Dialogues – the number of dialogues with CT > 0 and
  RV in the NER release value group.
  Type of aggregation: SUM.
• Seizures – the number of allocated resources.
  Type of aggregation: SUM.
• NER 2002 Dialogues (Network Efficiency Ratio) –
  NER(02) = (answered calls + not answered calls with RC
  1, 16, 17, 18, 19, 20, 21, 22, 28, 50, 53, 55, 57, 87, 88, 90,
  31) / total seizures. Type of aggregation: SUM, CNT.

27
Class - Trunk Group

The Trunk Group Analysis class contains dimensions and
measures related to the trunk group.

Dimensions
• Time Selected >= Hour – the observation time, with a
  granularity of one hour or more
• Time Selected – the observation time
• TG Name – the Trunk Group name
• TG Direction – specifies whether the trunk group direction
  is forward or backward
• E1 Name – the logical name of the E1 link
• E1 CICs Number – the number of voice circuits in the E1 link
• TG Originating PC subclass, OTG PC Network&Code – the
  originating point code of the trunk group
• TG Destination PC subclass, DTG PC Network&Code – the
  terminating point code of the trunk group

The measures' formulas can be derived from the general ones,
except for:
• GOS – the GoS (Grade of Service) is the probability
  of a call in a trunk group being blocked.

28
GOS

• The GOS (Grade of Service) is calculated with the
Erlang B formula:

  B = (A^N / N!) / ( Σ k=0..N  A^k / k! )

where
A = Traffic (Erlang)
B = Probability of blocking
N = Number of CICs
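Numerically, Erlang B is usually evaluated with the standard recurrence B(A, 0) = 1, B(A, n) = A·B(A, n-1) / (n + A·B(A, n-1)), which avoids large factorials. This is a generic sketch, not the product's implementation; the offered traffic A can be estimated from the Traffic Load measure (sum of holding times per period):

```python
# Erlang B blocking probability via the standard recurrence,
# which is numerically stable (no explicit factorials).
def erlang_b(traffic_erlangs, n_circuits):
    b = 1.0  # B(A, 0) = 1: with zero circuits every call is blocked
    for n in range(1, n_circuits + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

# 10 Erlang offered to 15 circuits gives a few percent blocking
# (about 3.6%, matching the classical Erlang B tables).
print(round(erlang_b(10.0, 15), 4))
```

Adding circuits at fixed traffic always lowers the blocking probability, which is a quick sanity check on any implementation.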

29
Class – CTI and CSI

• CSI – Call Setup Time Interval
  This class is introduced in order to provide the setup time interval
  distribution.
  The dimension is composed of:
  CT Name subclass – the name of the interval, in the form
  i-[lower value, upper value]

• CTI – Conversation Time Interval
  This class is introduced in order to provide the conversation time
  interval distribution.
  The dimension is composed of:
  CTI Name – the name of the interval, specified in the form
  i-[lower value, upper value]

30
Class - Prefix

The subclasses are Calling & Called Prefix and contain:
• Prefix Name – the name of the prefix (e.g. Sonofon
  GSM)
• Prefix Description – a description of the prefix
• Prefix Number – the prefix number from the called
  number in the CDR (e.g. 4522)
• Operator Name – the name of the operator that has
  the specific prefix (e.g. Sonofon GSM). For billing
  verification, it is mandatory to fill in this field with the
  name of an operator.
• Operator Type – the type of operator. Possible values
  are: L (local), N (national), I (international).
  If billing verification is used, it is mandatory to configure
  this field.
• Operator Group Name – the name of the group the
  operator belongs to, if any (e.g. Telenor)
• Country Code – the country code of the operator
• Country Code Name – the country name (e.g. Italy)
• Region Name – makes it possible to group the prefixes
  belonging to a region

31
Class – Point Code
The Point Code class is composed of three subclasses:
• First Originating Point Code (FOPC)
• First Destination Point Code (FDPC)
• Last Destination Point Code (LDPC)
Dimensions of FOPC, FDPC and LDPC (xy stands for First/Last OPC/DPC):
• xyPC Network & Code – identifies the first/last originating
  point code within the network
  – xyPC Format 3-8-3 – code written in 3-8-3 format
  – xyPC Format 8-6 – code written in 8-6 format
  – xyPC Number – code written in 14-bit format (e.g. 200)
  – xyPC Dec – code written in decimal format (e.g. 512)
• xyPC Name – the point code's logical name
  – xyPC Description – the description of the point code
• xyPC ICP Name – the interconnect partner name
• xyPC Country Code – the phone-number country code of the
  interconnect partner
• xyPC Country Name – the country name
• xyPC Group Name – the logical name of the group

32
Class - Release Value Group (RVG)

Groups release values (RV) in order to perform any type
of calculation.

Dimensions
• RVG Name – the Release Value Group name (e.g. NER)
  – RVG Description – a description of the
    release value group (e.g. "Used for
    NER calculation")

Release Value subclass
Dimensions
• RV Name – the name of the release value
  associated with the code (e.g. "No circuit/channel
  available")
  – RV Short Name – a short name for the
    release value
• RV Code – the code of the release value (e.g.
  16). This is the code that can be seen in the
  traffic. A list of codes can be found in the ISUP
  specification (Q.850/2.2.5).

33
Class - Country

• The Country class (Fig. 4.16) contains
two subclasses:
  – Calling Country
  – Called Country

Dimensions
• Calling/Called Country Name – the name
  of the calling/called country (e.g. Emirates)
• Calling/Called Country Code – the code
  associated with the calling/called country
  (e.g. 971)

34
Class – Nature of Address

• The Nature of Address class (Fig. 4.17)
contains two subclasses:
  – Calling NoA
  – Called NoA

• They describe the calling/called party's
address, e.g. ISDN international number,
ISDN subscriber number, etc.

Dimensions
• Calling/Called NoA Description – a
  description of the NoA
• Calling/Called NoA Code – the identification
  code associated with the NoA

35
Class - Linksets
The Linkset class is composed of four subclasses:
• First Forward Lks (FF)
• Last Forward Lks (LF)
• First Backward Lks (FB)
• Last Backward Lks (LB)

Dimensions
• FF Linkset Name – the logical name of the linkset where
  the oldest IAM was captured
• LF Linkset Name – the logical name of the linkset where
  the youngest IAM was captured
• FB Linkset Name – the logical name of the linkset where
  the messages belonging to the same dialogue as the
  oldest IAM, travelling in the opposite direction, were
  captured
• LB Linkset Name – the logical name of the linkset where
  the messages belonging to the same dialogue as the
  youngest IAM, travelling in the opposite direction, were
  captured

Each dimension has two details:
• Linkset Description – a description of the linkset
• Linkset Identifier – an identification code for
  the linkset

36
Class - Major Account Customer (MAC)

Major Account Customer class is optional.

Dimensions
• MAC Group Name – the name of the group
- MAC Group Description
• MAC Subgroup Name – the name of the
subgroup for each group
• MAC Name – the name of the customer that
belongs to the MAC group
- MAC Contract Id – identifier of the contract
owned by the customer
- MAC Description – a description of the
customer (e.g. the customer's
role)
• MAC IMSI – IMSI of the user (e.g. 22201 for
Mario Rossi)
• MAC MSISDN – ISDN Number of the Mobile
Station

37
Call Data Record Details

The Call Data Record Details class provides all
details related to an ISUP transaction.

Note: whenever objects belonging to Call Data
Record Details are included in a report,
performance will be very low, since Oracle must
access the fact table.
Use this class only to extract data related to a
short time period, and apply detailed filters.

38
Standard Reports

Folder: Partners
• Inter-connect Call Performance – analysis of outbound (by called prefix
  and LDPC) and inbound (by called prefix and FOPC) interconnect partners
• Linkset Analysis – linkset analysis in terms of circuit utilization and
  network performance

Folder: Network
• Release Value Analysis – distribution of selected release value groups
  for selected prefixes
• Traffic Measurement Analysis (option) – traffic measurement analysis in
  terms of network performance and traffic load
• Weekly Peak Traffic (option) – provides the weekly peak traffic analysis
  per Trunk Group

Folder: Others
• List All Dimensions in Network Call KPI – lists all dimensions in
  Network Call KPI: Release Value Group, Point Code, Linkset and Prefix
• Defined Trunk Group (option) – list of defined trunk groups

39
Partners
Interconnect Call Performance
• This document provides an analysis of outbound interconnect
partners (filtered by Called Prefix and LDPC) and inbound
interconnect partners (filtered by Called Prefix and FOPC)

• Input parameters:
  – Start Time
  – End Time
  – Called Prefix Name

• Reports in the document:
  – Network Quality Overview by Destination
  – Outbound Inter-connect Routes QoS KPI
  – Inbound Inter-connect Routes QoS KPI
  – Outbound Inter-connect Routes Traffic Load
  – Inbound Inter-connect Routes Traffic Load

40
Network Quality Overview by Destination
• This report shows an overview of the main KPIs, such as ACR, NER 2002,
NER and Call Setup Time (all KPIs are shown with the relevant #Attempts),
per Called Prefix

• The report is composed of four objects:
  – Worst 5 Called Prefix Names per ACR
  – Worst 5 Called Prefix Names per NER 2002
  – Worst 5 Called Prefix Names per NER
  – Worst 5 Called Prefix Names per Average Call Setup Time

41
Outbound Inter-connect Routes QoS KPI

• This report shows the QoS KPIs
(ACR, NER 2002, NER, Call Setup
Time and Conversation Time, all
shown together with #Attempts)
aggregated by LDPC (Last
Destination Point Code) and
Called Prefix

• The report is composed of two
objects:
  – Worst 10 Called Destinations
    per Answered Calls
  – Called Destination

42
Inbound Inter-connect Routes QoS KPI

• This report shows the QoS KPIs
(ACR, NER 2002, NER, Call Setup Time
and Conversation Time, all shown
together with the number of Attempts)
aggregated by FOPC (First
Originating Point Code) and Called Prefix

• The report is composed of two objects:
  – Worst 10 Called Destinations per
    Answered Calls
  – Called Destination

44
Outbound Inter-connect Routes Traffic Load

• This report shows the Traffic Load
aggregated by LDPC (Last
Destination Point Code) and, for
each LDPC, the distribution of
Traffic Load and #Attempts over
Called Prefix Names

• The report is composed of two
objects:
  – Top 10 LDPCs per Traffic Load
  – Last DPC

45
Inbound Inter-connect Routes Traffic Load

• This report shows the Traffic Load
aggregated by FOPC (First
Originating Point Code) and, for
each FOPC, the distribution of
Traffic Load and #Attempts over
Called Prefix Names

• The report is composed of two
objects:
  – Top 10 FOPCs per Traffic Load
  – First OPC

46
Partners
Linkset Analysis
• This document provides an analysis of linksets in terms of circuit
utilization and network performance

• Input parameters:
  – Start Time
  – End Time
  – Last Forward Linkset Name

• Reports in the document:
  – Performance Overview
  – Network Performance Trend
  – Traffic Load Trend

47
Performance Overview
• This report shows an overview of the main KPIs, such as ACR, NER 2002,
NER and Traffic Load (all KPIs are shown with the relevant #Attempts),
per linkset

• The report is composed of five objects:
  – Worst 5 Linksets for ACR
  – Worst 5 Linksets for NER 2002
  – Worst 5 Linksets for NER
  – Worst 5 Linksets for Traffic Load Hour
  – Linkset Details

48
Network Performance Trend

• This report shows the time
trend of the main KPIs
(ACR, NER 2002, NER,
Call Setup Time) for each
selected linkset

• The report is composed of
three objects:
  – Graph showing the NER 2002
    and NER time trend
  – Graph showing the ASR and
    Average Call Setup Time
    trend (s)
  – Trend Details

49
Traffic Load Trend

• This report shows the time
trend of the Traffic Load
and Conversation Time for
each selected linkset

• The report is composed of
two objects:
  – Graph showing the Traffic Load
    (Hour) and Conversation
    Time (s) time trend for the
    selected linkset
  – Trend Details

50
Network
Release Value Analysis
• This document provides an analysis of the distribution of release
value groups for selected prefixes

• Input parameters:
  – Start Time
  – End Time
  – Last Destination Point Code

• Reports in the document:
  – Release Value Group Distribution
  – E422 Release Value Group

51
Release Value Group Distribution

• This report shows the distribution of
calls over all RVGs (Release Value
Groups) and the related percentage
with respect to the total number of calls
(%RVG)

• The report is composed of two
objects:
  – Graph showing the percentage of
    RVG for each RVG name
  – Table showing, for each Called
    Prefix, the number of Attempts, RVG
    Calls and the percentage of RVG

52
E422 Release Value Group

• This report shows the
distribution of calls over the
defined E_422 D groups 1
to 4 and the related
percentage with respect to the
total number of calls
(percentage of E_422)
• The report is composed of
two objects:
  – Top 10 most loaded Routes
    FOPC/LDPC for E_422 calls
  – ROUTE FOPC-LDPC

53
Network
Traffic Measurement Analysis (optional)
• This document provides an analysis of the Traffic Load (Hour) distribution
over Trunk Groups and the time trend for each selected Trunk Group

• These reports have been developed for users that request this feature
separately

• Input parameters:
  – Start Time
  – End Time
  – Trunk Group Name

• Reports in the document:
  – Trunk Group Overview
  – Trunk Group Trend Details
  – E1 Overview
  – E1 Trend Details

54
Trunk Group Overview

• This report performs an
analysis of the Traffic Load
on the Trunk Group
dimension. The Traffic Load
(Erlang) Hour measure is
aggregated for each
selected Trunk Group
• The report is composed of
two objects:
  – Top 10 Trunk Groups per
    Traffic Load
  – Trunk Group Details

55
Trunk Group Trend Details

• This report shows the time trend
of Traffic Load, Average
Conversation Time (s), Average
Hold Time (s), number of
Attempts and Conversation Time
for each selected Trunk Group
• The report is composed of three
objects:
  – Graph showing the time trend
    of Traffic Load (Hour)
  – Graph showing the Average
    Conversation Time (s) and
    Average Hold Time (s) time trend
  – Trend Details

56
E1 Overview

• This report performs
an analysis of the
Traffic Load on the E1
dimension

• The report is
composed of two
objects:
  – Top 10 E1 Names per
    Traffic Load
  – E1 Details

57
E1 Trend Details
• This report shows the time trend of
Traffic Load, Average Conversation
Time (s), Average Hold Time (s),
number of Attempts and Conversation
Time for each selected E1

• The report is composed of three
objects:
  – Graph showing the trend of Traffic
    Load (Hour), with the relevant number
    of Attempts
  – Graph showing the time trend of
    Average Conversation Time (s) and
    Average Hold Time (s)
  – Table showing Traffic Load
    (Hour), Average Conversation Time
    (s), Average Hold Time and number
    of Attempts for the selected hours

58
Network
Weekly Peak Traffic (optional)
• This document provides the weekly peak traffic analysis per Trunk
Group

• These reports have been developed for users that request this
feature separately

• Input parameters:
  – First Day of a Week
  – Trunk Group Name

• The only report in the document is:
  – Weekly Peak Traffic per Trunk Group

59
Weekly Peak Traffic per Trunk Group

• This report shows the top ten
Trunk Groups per peak hour; the
peak hour is the hour with the
highest Traffic Load value
• A table shows the peak hour
with the relevant Traffic Load value,
number of Attempts and the
direction type for each selected
Trunk Group
• The report is composed of two
objects:
  – Top 10 Trunk Groups per Peak
    Hour
  – Trunk Group Details

60
Others
List All Dimensions in Network Call KPI
• This document collects a set of tables showing the current
configuration for all dimensions

• No input parameter is required

• Reports in the document:
  – Release Value Group Definition
  – Point Code Group
  – Point Code by Country
  – Point Code by Interconnect Partner
  – Linkset
  – Prefix

61
Others
Defined Trunk Group (optional)

• This document shows the current configuration for the
Trunk Group dimension

• These reports have been developed for users that request
this feature separately

• No input parameter is required

• The only report in the document is:
  – Trunk Groups

62
Workflow Definition

• We refer to workflows as analyses driven by the specific needs
of a given department, carried out by means of the reporting
activity

• Each workflow offers different levels of detail,
depending on how deep an analysis you want to perform

• A specific configuration is required for the "Partner Performance
Monitoring" workflow: configure the Point Code dimension for all
signalling switches associated with a specific outside operator,
setting ICP_PARTNER=Y

• Some parameters in the /usr/quest7/nin/qidw.nin file must
be configured – see the User Manual

64
Workflow Levels

• Each workflow offers different levels of detail, depending on how deep
an analysis you want to perform

• Workflow levels are marked inside each report with the following icons:

  Extent of observation: reports marked with this icon
  belong to the top level of the workflow

  Impact Analysis: reports marked with this icon belong to
  the middle level of the workflow

  Cause Analysis: reports marked with this icon belong to
  the bottom level of the workflow

• Each workflow level includes the relevant Insight Documents and related
reports intended for that level of analysis

65
Links Between Reports

• Some reports in the application are linked together. When a report
is linked to another, you will find the following icons as a guide
during your workflow navigation:

  Drill Right: link from that report to another located at the
  same level of the workflow

  Drill Down: link from that report to another located at a
  lower level of the workflow

66
Content Organization

• The following workflows are supported:
  – Partner Performance Monitoring
  – Linkset Monitoring

67
Partner Performance Monitoring
Content Organization

Workflow Level | Insight Folder | Document
Extent of observation | Standard Reports / Partners | Top Level Analysis – Outside Operator Overview; Top Level Analysis – Country Analysis
Impact Analysis | Partner Performance Monitoring | Impact Analysis – Outside Operator Performance
Cause Analysis | Standard Reports / Network | Cause Analysis – Error Distribution; Cause Analysis – CDR Detail

68
Partner Performance Monitoring
• This workflow supports analysis of outside operators' (a.k.a.
partners') performance, so interconnect partners are the
interlocutors for this analysis

69
Partner Performance Monitoring

• For this workflow:
  – Extent of observation. Reports at this level of the workflow
    are meant to:
    • provide an overview of the call service delivered by outside operators
    • provide a comparative analysis of interconnect operators based on their
      network efficiency
    • identify which outside operator is delivering low performance
  – Impact Analysis. Reports at this level of the workflow are
    meant to:
    • analyse the performance of an outside operator in terms of standard KPIs
    • identify peak hours with service degradation supplied by the outside
      operator
  – Cause Analysis. Reports at this level of the workflow are
    meant to:
    • find errors and causes

70
Partner Performance Monitoring
Outside Operator Overview
• This report contains four objects
  – Overall trend curve showing the daily number of calls for all
    outside operators over the last 2 weeks
  – Top 10 outside operators that experienced the worst failure
    rate (based on the NER 2002 KPI) during the last 2 weeks

71
Partner Performance Monitoring
Outside Operator Overview
–Top 5 countries that registered the highest failure-rate increase (NER 2002 based), in percentage (it compares the previous day's failure rate with the value of the same day in the previous week)

–Top 5 outside operators that registered the highest failure-rate increase (NER 2002 based), in percentage (it compares the previous day's failure rate with the value of the same day in the previous week)
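The day-over-week comparison used by these top-5 rankings can be sketched as follows. This is a minimal illustration; the dates and rate values are hypothetical, and the product may compute a relative rather than an absolute increase:

```python
from datetime import date, timedelta

def failure_rate_increase(rates, day):
    """Compare a day's failure rate with the same weekday one week earlier.

    rates: dict mapping date -> failure-rate percentage (NER 2002 based).
    Returns the increase in percentage points, or None if either day is missing.
    """
    prev_week = day - timedelta(days=7)
    if day not in rates or prev_week not in rates:
        return None
    return rates[day] - rates[prev_week]

# Hypothetical per-partner failure rates (percent)
rates = {date(2024, 5, 8): 2.5, date(2024, 5, 15): 4.0}
print(failure_rate_increase(rates, date(2024, 5, 15)))  # 1.5
```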

72
Partner Performance Monitoring
Country Analysis
•This report contains two objects:

–For the selected country, the failure rate (NER 2002 based) per outside operator over the last 2 weeks

–Failure rate percentage (NER 2002 based) and number of attempts per partner

73
Partner Performance Monitoring
Outside Operator Performance
•This report contains five objects:

–Line graph showing, for the last 2 weeks, the worst hour per day (max failure rate, NER 2002 based) with #Attempts > 100

–Line graph showing, for the last 2 weeks, the worst hour per day considering max failure rate (NER based) with #Attempts > 100

74
Partner Performance Monitoring
Outside Operator Performance
–Line graph showing, for the last 2 weeks, the worst hour per day considering max Traffic Load with #Attempts > 100

–Line graph showing, for the last 2 weeks, the worst hour per day considering max Call Setup Time with #Attempts > 100

75
Partner Performance Monitoring
Outside Operator Performance
–This section is composed of four tables.
•The first table shows the worst hour per day in the last 2 weeks (based on
NER 2002)
•The second table shows the worst hour per day in the last 2 weeks (based
on NER).
•The third table shows the worst hour per day in the last 2 weeks (based on
Call Hold Time sum normalized by the number of circuits)
•The fourth table shows the worst hour per day in the last 2 weeks (based on
Call Setup Time).
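The worst-hour selection used by these graphs and tables can be sketched like this. It is a minimal illustration with hypothetical data; the actual reports derive the failure rate from the NER KPIs:

```python
def worst_hour_per_day(hourly_stats, min_attempts=100):
    """For each day, pick the hour with the highest failure rate,
    considering only hours with more than `min_attempts` attempts.

    hourly_stats: iterable of (day, hour, attempts, failures) tuples.
    Returns dict day -> (hour, failure_rate).
    """
    worst = {}
    for day, hour, attempts, failures in hourly_stats:
        if attempts <= min_attempts:
            continue  # reports require #Attempts > 100
        rate = failures / attempts
        if day not in worst or rate > worst[day][1]:
            worst[day] = (hour, rate)
    return worst

stats = [
    ("2024-05-15", 9, 500, 25),   # 5% failure rate
    ("2024-05-15", 18, 800, 80),  # 10% -> worst hour
    ("2024-05-15", 3, 50, 40),    # excluded: too few attempts
]
print(worst_hour_per_day(stats))  # {'2024-05-15': (18, 0.1)}
```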

76
Partner Performance Monitoring
Error Distribution
•This report requires as input the hour and the Last DPC ICP Name

•This report contains three objects:

–The distribution of failure causes for the selected hour and partner

77
Partner Performance Monitoring
Error Distribution
–The bar chart in this section shows the distribution of failure causes per 5-minute interval (per partner)

–The release causes and number of occurrences per 5 minutes

78
Partner Performance Monitoring
CDR Detail
•This report requires as input the 5 minutes interval, the Last DPC ICP
Name and the Release Value Code

•This report contains one table, showing the calling/called release causes and the originating and destination PCs for the selected 5-minute interval

79
Linkset Monitoring
Content Organization

Workflow: Linkset Monitoring

Level                  Insight Folder             Document
Extend of observation  Standard Reports Partners  Top Level Analysis – Linkset Overview
Impact Analysis                                   Impact Analysis – Linkset Performance
Cause Analysis         Standard Reports Network   Cause Analysis – Linkset Error Distribution
                                                  Cause Analysis – CDR Detail by Linkset

80
Linkset Monitoring

• This workflow is based on performance monitoring of selected internal linksets, so it is addressed to the network Planning Department

81
Linkset Monitoring

• For this workflow:


– Extend of observation. Reports contained at this level of the workflow are meant to:
• provide an overview of the call service delivered by the selected linksets
• provide a comparative analysis of linksets based on their network efficiency
• identify which linkset is underperforming in terms of load increase/decrease
– Impact Analysis. Reports contained at this level of the workflow are meant to:
• analyse the performance of a linkset in terms of standard KPIs
• identify peak hours with service degradation on the selected linkset
– Cause Analysis. Reports contained at this level of the workflow are meant to:
• find errors and causes

82
Linkset Monitoring
Linkset Overview
•This report requires as input the Last Forward Linkset Name

•This report contains four objects:

–Overall trend curve showing the daily number of calls for all selected linksets over the last 2 weeks

–Top 10 linksets that registered the highest traffic load (holding time sum normalized by number of circuits), in Erlang, during the last 2 weeks

83
Linkset Monitoring
Linkset Overview
–This object is composed of a graph and a table showing the top 5 linksets with the highest negative traffic load variance. The traffic load variance compares the previous day's traffic load (holding time sum normalized by circuit number) with the value of the same day in the previous week

–This object is composed of a graph and a table showing the top 5 linksets with the highest positive traffic load variance. The traffic load variance compares the previous day's traffic load (holding time sum normalized by circuit number) with the value of the same day in the previous week
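The traffic-load measure as these slides define it (call hold time sum normalized by the number of circuits) can be sketched as follows. The one-hour observation period and the sample values are assumptions, not values from the product:

```python
def traffic_load_erlang(hold_times_sec, n_circuits, period_sec=3600):
    """Traffic load per circuit in Erlang: total hold time divided by the
    observation period, normalized by the number of circuits (as per the
    slides' definition; the product's exact formula may differ)."""
    return sum(hold_times_sec) / (n_circuits * period_sec)

def traffic_load_variance(today, prev_week):
    """Day-over-week traffic load variance used in the top-5 rankings."""
    return today - prev_week

# Hypothetical linkset: 30 circuits, one hour of calls totalling 27,000 s
load = traffic_load_erlang([27000], 30)
print(load)  # 0.25 Erlang per circuit
```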

84
Linkset Monitoring
Linkset Performance
•This report requires as input the Last Forward Linkset Name

•This report contains five objects:

–The first graph shows, for the last 2 weeks, the worst hour per day considering min(NER) and #Attempts > 100

–The second graph shows, for the last 2 weeks, the worst hour per day considering min(ACR) and #Attempts > 100

85
Linkset Monitoring
Linkset Performance
–The third graph shows, for the last 2 weeks, the worst hour per day considering max(Traffic Load)

–The fourth graph shows, for the last 2 weeks, the worst hour per day considering min(NER 2002) and #Attempts > 100

86
Linkset Monitoring
Linkset Performance
–This section is composed of four tables.
•The first table shows the worst hour per day in the last 2 weeks (based on
NER 2002)
•The second table shows the worst hour per day in the last 2 weeks (based
on NER)
•The third table shows the worst hour per day in the last 2 weeks (based on
traffic load - Call Hold Time sum normalized by number of circuits).
•The fourth table shows the worst hour per day in the last 2 weeks (based on
Call Setup Time)

87
Linkset Monitoring
Linkset Error Distribution

•This report contains three objects:

–This section shows the top 10 release values that are the cause of failure for the selected linkset. Only failure release causes based on NER 2002 are shown

–This section shows the top 10 release values trend, indicating the failures per 5-minute interval within the selected hour

88
Linkset Monitoring
Linkset Error Distribution

–This section shows tables with the 5-minute interval details

89
Linkset Monitoring
CDR Detail by Linkset
•This report requires as input the 5-minute interval, the Release Value Code and the Last Forward Linkset Name

•This report shows the calling/called release cause and originating/destination linkset for the selected 5-minute interval

90
Summary

• Overview
• Architecture
• Star Schema
• Universe Description
• Standard Reports
• Workflows
• Report Optimization

91
Report Optimization

• A report is considered optimized if its execution is based on one of the existing pre-aggregated tables

• Historical analysis (12 months) should be performed only by optimized reports

92
Pre-aggregation for Point Code

•The pre-aggregated Calling_FDPC table contains the following classes and measures:

Classes:
•Calling prefix
•First originating PC
•First destination PC
•Call setup interval
•Conversation time interval

Measures:
•Conversation time
•Hold time
•Call setup time
•Traffic load
•No of seizures
•# Transactions
•Answered
•ASR dialogues
•ABR dialogues
•NER dialogues

•The same aggregations exist for:
–Class Called prefix instead of Calling prefix (but not both of them)
and/or
–Last destination PC instead of First destination PC
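The idea behind such a pre-aggregated table can be illustrated with a simple group-by over raw call records. The field names and records below are hypothetical, not the product's actual schema:

```python
from collections import defaultdict

# Hypothetical raw call records:
# (calling_prefix, first_orig_pc, first_dest_pc, hold_time_sec, answered)
cdrs = [
    ("39", "2-100", "2-200", 120, 1),
    ("39", "2-100", "2-200", 0, 0),
    ("44", "2-100", "2-300", 60, 1),
]

# Pre-aggregate by the table's classes so that optimized reports can read
# a few summary rows instead of scanning every CDR
agg = defaultdict(lambda: {"transactions": 0, "answered": 0, "hold_time": 0})
for prefix, opc, dpc, hold, answered in cdrs:
    key = (prefix, opc, dpc)
    agg[key]["transactions"] += 1
    agg[key]["answered"] += answered
    agg[key]["hold_time"] += hold

print(agg[("39", "2-100", "2-200")])
# {'transactions': 2, 'answered': 1, 'hold_time': 120}
```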

93
Pre-aggregation for LFW Link

•The pre-aggregated LFW Link table contains the following classes and measures:

Classes:
•Last forward linkset

Measures:
•Conversation time
•Hold time
•Call setup time
•Traffic load
•No of seizures
•# transactions
•Answered
•ASR dialogues
•ABR dialogues
•NER dialogues

94
Pre-aggregation for MAC

•The pre-aggregated MAC table contains the following classes and measures:

Classes:
•Major Account Customer

Measures:
•Conversation time
•Hold time
•Call setup time
•Traffic load
•No of seizures
•# transactions
•Answered
•ASR dialogues
•ABR dialogues
•NER dialogues

95
Pre-aggregation for Release Values

•The pre-aggregated RV table contains the following classes and measures:

Classes:
•Called prefix
•First originating PC
•First destination PC
•Last destination PC
•Release value group

Measures:
•# transactions

96
Pre-aggregation for Trunk

•The pre-aggregated Trunk table contains the following classes and measures:

Classes:
•Trunk Group

Measures:
•Conversation time
•Hold time
•No of seizures
•# transactions
•Answered
•ASR dialogues
•ABR dialogues
•NER dialogues
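As an illustration of how a measure such as ASR dialogues is typically derived from these aggregates (a hedged sketch following the common ITU-style definition of Answer Seizure Ratio; the product's exact formula may differ):

```python
def asr(answered, seizures):
    """Answer Seizure Ratio in percent: answered calls over total seizures.
    Guarded against an empty interval with zero seizures."""
    return 100.0 * answered / seizures if seizures else 0.0

# Hypothetical aggregate row: 450 answered calls out of 600 seizures
print(asr(450, 600))  # 75.0
```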

97
