
White Paper

The New DPI: Challenges & Opportunities In The LTE Era


Prepared by Graham Finnie, Chief Analyst, Heavy Reading

www.heavyreading.com

On behalf of

January 2011

Executive Summary
Deep Packet Inspection (DPI) has seen rapid growth since it first appeared in telecommunications about a decade ago, and in the last two years nowhere more so than in the mobile sector. Initially, DPI was used primarily in wireline networks, especially cable MSO networks, to help tackle harmful traffic and security threats, and to throttle or block applications seen as bandwidth hogs. These use cases are still at the heart of what DPI does, but its role in the mobile sector is evolving so rapidly that the term "deep packet inspection" looks increasingly outdated. Technical advances, new applications and key developments in the standards arena all mean that the broad set of technical capabilities centered on DPI is certain to play a central role in mobile networks for many years to come.

Questions remain about exactly how it will be used, and what the right evolution path is for service providers as they set their strategies. But it is clear that, as mobile broadband accelerates and network operators prepare to make the transition to LTE, conventional DPI is entering a new era, one that will create opportunities and challenges for network operators and equipment vendors alike. In this White Paper, we consider some key aspects of this transition, looking at how and where DPI will be used in LTE networks, how DPI use cases are evolving, and what this means for some of the stakeholders in the DPI value chain.

In Section One of this paper, we look at the impact of LTE on the use of DPI and related technologies. Mobile broadband is already transforming the mobile business, but LTE will complete the transformation, and it will change the way QoS, traffic management, customer experience and product development are handled. In LTE, policy control is in effect mandatory, meaning that policy enforcement is an essential requirement, and that requirement will be met to a large degree by DPI.

In Section Two, we consider the ways in which DPI can be used in this environment to build up network intelligence. Many operators are already moving well beyond early use cases focused on threat and applications management, and we are on the threshold of a new era in which simple Deep Packet Inspection will broaden into something that might better be called Deeper Product Intelligence. DPI will be deployed for a wider range of use cases aimed at assuring and improving the performance of individual customer services and service packages, and improving customer quality of experience, trends that will undoubtedly accelerate as LTE is deployed. Section Two examines some of these emerging use cases in more detail.

In Section Three, we consider the challenges and opportunities in building and deploying DPI. As traffic loads, subscribers and use cases multiply in the broadband and LTE era, immense strain will be put on DPI, requiring a DPI infrastructure that is more pervasive, powerful, scalable and adaptable. This section considers these issues and their implications, and looks in particular at the make-vs.-buy DPI decision for different kinds of stakeholders.


I. LTE And Its Impact

Long Term Evolution (LTE) is one of the biggest changes in the technology environment that the mobile sector has ever seen. At the heart of these changes is the Evolved Packet Core (EPC), described in 3GPP's TS 23.401 and TS 23.402 specifications. Although these describe EPC functions as enhancements, in reality the EPC completely overhauls the classic GPRS architecture, replacing it with a much flatter all-IP network that will eventually become a single converged core handling all applications, including, importantly, existing telephony services. From the point of view of this paper, there are three core elements to consider here:

The Policy & Charging Rules Function (PCRF) is a control-plane element used to provide dynamic control over bandwidth, charging and network usage.

The Serving Gateway (S-GW) terminates the interface from the LTE radio access network and is the local mobility anchor point for inter-eNodeB handovers and inter-3GPP mobility. Other functions include lawful intercept and some charging and policy enforcement functions.

The Packet Data Network Gateway (PDN-GW or P-GW) terminates the interface from the S-GW and connects to external packet networks. The P-GW provides the mobility anchor across non-3GPP access, interacts with the IMS service layer, and is a key node for policy enforcement.

A key feature of the EPC is that it allows bearers to be set up with one of nine standardized quality of service classes, handling a variety of requirements for packet delay and latency (e.g. for gaming or conversational voice), though individual operators can customize these classes further. The effect of this model is that the service, charging and billing environment is likely to be more complex than it is in existing 3GPP mobile data networks. Moreover, the policy architecture in LTE is effectively mandatory as soon as the main voice service is part of the service mix, which means that for the first time the policy elements in 3GPP will be used by all operators as they move to the next-generation mobile network architecture.

The policy function is not, however, new. Policy first appeared as a concept in 3GPP Release 5, around 2004-2005, but only became a core concept in Release 7, completed in 2007. With the initial LTE specification, contained in Release 8, policy control became in effect mandatory. Further enhancements and changes were added in Release 9, finalized in 2010 (though these did not make any fundamental changes to the policy architecture), and ongoing work in Release 10 (sometimes called LTE-Advanced) and Release 11 will result in further changes. These may have some policy implications, but they are not considered here, since Release 10-11 features will likely not be deployed in networks before 2012.

The overall 3GPP Policy Control & Charging (PCC) logical architecture is shown in simplified form in Figure 1. It makes clear that the core policy relationship is between the PCRF and the Policy and Charging Enforcement Function (PCEF), accomplished across the Gx interface.
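As a rough illustration of the standardized QoS classes mentioned above, the sketch below models a subset of the nine QoS Class Identifiers (QCIs) defined for EPS bearers. The values shown are drawn from the Release 8 QCI characteristics table (3GPP TS 23.203) as the author understands it and should be checked against the specification; the data structure and function names are purely illustrative, not part of any standard or product.

# Illustrative sketch only: a few of the nine standardized QoS Class Identifiers
# (QCIs) defined for EPS bearers. Values follow the Release 8 QCI table in
# TS 23.203 to the best of our knowledge; operators may add custom classes.
from dataclasses import dataclass

@dataclass(frozen=True)
class QosClass:
    qci: int
    guaranteed_bit_rate: bool   # GBR vs. non-GBR resource type
    priority: int               # lower number = higher scheduling priority
    packet_delay_budget_ms: int
    example_service: str

QCI_TABLE = {
    1: QosClass(1, True,  2, 100, "conversational voice"),
    3: QosClass(3, True,  3,  50, "real-time gaming"),
    5: QosClass(5, False, 1, 100, "IMS signalling"),
    9: QosClass(9, False, 9, 300, "default bearer / best-effort Internet"),
}

def select_bearer(qci: int) -> QosClass:
    """Return the QoS class a bearer with this QCI would be treated under."""
    return QCI_TABLE[qci]

if __name__ == "__main__":
    print(select_bearer(1))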


Figure 1: A Simplified View Of The 3GPP PCC Architecture

Source: Heavy Reading

The PCEF is responsible for enforcing policy decisions passed to it by the PCRF, and also passes user and access information to the PCRF to help it make policy decisions. PCC rules for a particular service session include packet filters, contained in a so-called Service Data Flow (SDF) template, which help the PCEF to identify which packets are part of that session; the effect of this is that it must examine packets through Layers 4-7, i.e. perform DPI. This is achieved (again in principle) using the so-called 5-tuple IP parameters of source and destination IP addresses, source and destination port numbers, and protocol, as well as other IP header parameters. This is where DPI begins to play its key role in the EPC. Note that so long as PCC rules have been pre-defined in the PCEF, the standards allow the use of parameters that go beyond the 5-tuple. This may be a requirement, for example, for applications that originate on the Internet, since the 5-tuple formula will not always be adequate for accurate identification.

Because the EPC specification is designed to allow policy to be deployed across network boundaries, policy enforcement may also take place in another network; this is handled in the Bearer Binding and Event Reporting Function (BBERF), located in a visited network (not shown in Figure 1).

Policy decisions may also be initiated by the Application Function (AF), which could include (or be linked to) a DPI detection. For example, the AF might detect that a particular application is being initiated (e.g. a video call) and notify the PCRF in order to get a decision on what to do with it. If the PCRF authorized the call, it would then tell the PCEF what to do when it detects the flow associated with this application (e.g. to apply a particular QoS class to it).

The standard does not specify in every respect how and where the policy entities are to be deployed. The PCRF is usually conceived of as a distinct entity, e.g. a policy server, which may be centralized or distributed but is currently separate from other LTE functions, though some systems vendors are now said to be considering integrating it into other entities, for example the Packet Gateway. However, the PCEF is clearly conceived of in the standard as part of the P-GW, and this is where basic filtering to provide different QoS to different services (charging appropriately where necessary via the Gy or Gz interfaces to the Online Charging System (OCS) or Offline Charging System (OFCS)) will take place.
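To make the SDF filtering model described above more concrete, here is a minimal Python sketch of how a PCEF might match a packet's 5-tuple against the filters in an SDF template. It is not based on any vendor implementation or 3GPP-defined data format; the field and function names are invented for illustration.

# Hypothetical sketch of Service Data Flow (SDF) matching in a PCEF: each PCC
# rule carries packet filters expressed over the IP 5-tuple (source/destination
# address, source/destination port, protocol). Field names are illustrative,
# not taken from the 3GPP specifications.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass(frozen=True)
class SdfFilter:
    src_net: str                 # e.g. "10.0.0.0/8"; "0.0.0.0/0" = any
    dst_net: str
    src_port: Optional[int]      # None = any
    dst_port: Optional[int]
    protocol: Optional[int]      # 6 = TCP, 17 = UDP, None = any

    def matches(self, src_ip, dst_ip, src_port, dst_port, protocol) -> bool:
        return (ip_address(src_ip) in ip_network(self.src_net)
                and ip_address(dst_ip) in ip_network(self.dst_net)
                and self.src_port in (None, src_port)
                and self.dst_port in (None, dst_port)
                and self.protocol in (None, protocol))

# An SDF template is an ordered list of such filters; the first match binds the
# packet to that PCC rule's charging and QoS treatment.
sdf_template = [
    SdfFilter("0.0.0.0/0", "203.0.113.0/24", None, 443, 6),   # HTTPS to a content partner
    SdfFilter("0.0.0.0/0", "0.0.0.0/0", None, None, None),    # catch-all / default treatment
]

def classify(packet_5tuple):
    for index, sdf_filter in enumerate(sdf_template):
        if sdf_filter.matches(*packet_5tuple):
            return index
    return None

print(classify(("192.0.2.10", "203.0.113.5", 51000, 443, 6)))  # -> 0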


This means that, from the point of view of the standard, DPI will primarily be located in the P-GW, perhaps in the form of a blade, or directly integrated into the equipment, or co-located with the P-GW in a separate appliance. It's important to note that the P-GW is a gateway to any type of access network, not just 3GPP mobile networks; this is a key feature of the LTE specification.

One other reason operators tend to prefer DPI integrated with the S/P-GW node (where the PCEF is expected to reside) is to reduce the risk of complexity that could arise from conflict between PCRF/PCEF-defined policy and DPI traffic management. This is already the case in 3G, where progressive operators are making RAN scheduling and congestion part of their policy decision process, and it will be even more so in LTE, since an all-IP network must manage features such as voice call admission control over the radio interface.

As already noted, policy enforcement may also take place in the BBERF, which is typically part of a Serving Gateway. The Serving Gateway may also, incidentally, use DPI or something similar for lawful intercept. It should be noted that some vendors may integrate several of these entities into one unit (e.g. S-GW and P-GW). DPI may also be associated with or invoked by the Application Function, housed for instance in an Application Server. In a 3GPP IMS network, this AF would be the Proxy Call Session Control Function (P-CSCF), which controls voice and related sessions. Finally, although it is not specifically discussed in the standards, DPI could also be deployed in the RAN, specifically in the eNodeB, and there are reference designs and silicon under development that enable this. DPI in the base station would allow enforcement close to the customer where needed, improving backhaul efficiency and reducing the load on the EPC. These various relationships are shown in Figure 2.

DPI may of course be used in an LTE network for purposes that extend well beyond the core standardized role of policy enforcement. For instance, LTE operators are in our view likely to make extensive use of DPI for data mining, profiling and analytics, since in the much more complex service environment of an all-IP mobile broadband network, it will be vital to understand and act on rapidly changing information about subscriber, application and network behavior, and to improve network intelligence. This is discussed in more detail in Section 2.

From a pure standards point of view, however, DPI is foreseen primarily as a means for policy enforcement. Even though 3GPP standards specify a sophisticated QoS and bearer management model for LTE, it is complex to implement fully, and most networks, at least initially, will use only two bearer classes: a default bearer for data and a dedicated bearer for voice. Different services, such as premium video, may be allocated to specific bearer classes later, but it is nevertheless expected that Internet traffic (i.e. most applications and traffic as we understand them today) will be assigned to the default bearer. In that case, DPI technology will likely be needed to differentiate and manage Internet traffic within the default bearer, as is the case today. With the higher performance of LTE, so-called over-the-top applications will also become more sophisticated, which in turn may increase the demands on DPI capabilities.
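As a simple illustration of the default-bearer point above, the hypothetical sketch below maps DPI classification results to differentiated handling for traffic that all shares a single default bearer. The application labels, rates and priorities are invented for illustration and do not represent any operator's policy.

# Illustrative only: once DPI has labelled a flow on the default bearer with an
# application class, the operator can apply differentiated handling even though
# all of this traffic shares one best-effort bearer.
DEFAULT_BEARER_RULES = {
    "p2p-filesharing": {"max_rate_kbps": 256,  "priority": "low"},
    "http-video":      {"max_rate_kbps": 2000, "priority": "normal"},
    "web-browsing":    {"max_rate_kbps": None, "priority": "normal"},
    "gaming":          {"max_rate_kbps": None, "priority": "high"},
}

def handle_flow(dpi_application_label: str) -> dict:
    """Map a DPI classification to a traffic-handling rule within the default bearer."""
    return DEFAULT_BEARER_RULES.get(
        dpi_application_label,
        {"max_rate_kbps": None, "priority": "normal"},  # unknown apps: best effort
    )

print(handle_flow("p2p-filesharing"))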


Figure 2: Locating DPI in an LTE Network

Source: Heavy Reading

The bottom line here is that there is a widespread expectation, not least among the operators themselves, that a successful transition to LTE will require more control in the network and more customized service packages and options, something we discuss further in Section 2. In order to do this, operators will need to be able to accurately identify what applications subscribers are running, on what devices and networks, and then apply appropriate rules to them. And that will require use of DPI, which, although it is not mentioned by name in the current Release 9 architecture, is the key de facto means to enforce policies.

As a result, Heavy Reading's forecast for DPI (contained in its DPI Tracker service) envisages a gradual shift away from situating DPI in stand-alone appliances towards DPI integrated into the P-GW (something all vendors of P-GWs are now doing) and perhaps other LTE entities. In the process, DPI will become more ubiquitous.

An important conclusion to draw from this brief outline is that the pressure on DPI engines and processors will increase massively as LTE is deployed. Policy enforcement based on DPI will be applied to more subscribers (ultimately, all subscribers) using more sophisticated devices; to more sessions per subscriber; to more applications; and in a network environment that is handling far more traffic than it does today. As a result, the capabilities of any DPI elements used will need to be massively scaled up. This is likely to strain the capabilities of the technology, arguably more so in integrated P-GW implementations than in dedicated implementations, though this is a controversial topic. It is discussed further in Section 3 of this report.


II. Redefining DPI: From Inspection To Intelligence

In Section One of this paper, we established that DPI will most likely play a much more central role in LTE networks than in previous generations of mobile networks, primarily because policy architectures, in which DPI is usually seen as a key underpinning technology, are themselves central to LTE, but also because LTE will accelerate telco efforts to create personalized service packages based on a deeper understanding of customer needs.

The first generation of DPI was often deployed to handle a relatively narrow range of tasks focused on identifying applications, analyzing trends in applications usage, and throttling or blocking traffic from applications that posed security threats or were deemed to be bandwidth hogs. All of these use cases will continue to be valid in the LTE era: a flat all-IP network requires a control layer that can accurately identify applications in the network, and DPI will continue to play the primary role here. But as the transition to mobile broadband moves forward, operators are starting to recognize the much broader strategic possibilities offered by advanced DPI technologies. We are, we believe, on the threshold of a new era in which the acronym DPI might come to mean Deeper Product Intelligence, or Deeper Personal Intelligence, rather than simply Deep Packet Inspection. As a result, DPI will be deployed for a wider range of use cases aimed at helping to assure and improve the performance of individual customer services and service packages and to improve customer quality of experience, a trend that will undoubtedly accelerate as LTE is deployed.

LTE is the first true all-IP mobile broadband environment, with all that this entails for the potential range of services that can be offered. And it is the first in which, at least from a technical perspective, existing high-value applications such as telephony and SMS become simply two applications among an indefinitely wide range of services that might exist. This perception is widely shared by network operators, as we have found in recent survey work. For instance, in a survey looking at operator attitudes to policy management, we found that whereas previously the focus had been almost entirely on traffic congestion and applications management, the new focus is on understanding individual customer behavior and refining controls to meet customers' needs better, as Figure 3 shows.

Figure 3: Subscriber Intelligence and Analytics Is Now Central To Telco Thinking
Survey question: "On a scale of 1 to 5, with 5 being 'critically important' and 1 being 'not important at all,' please rate the following catalysts in your company's decision to deploy its policy management infrastructure." The chart shows the percentage of respondents giving each rating from 1 to 5 for these catalysts:
- Enable us to apply fair use management techniques
- Enable us to offer tiered or customized services
- Improve quality and depth of network traffic and applications reporting
- Improve our ability to meter and charge customers for service features
- Enable us to understand subscriber behavior and create profiles

Source: Heavy Reading survey of 71 network operators' attitudes to and plans for policy management, June 2010. Note: The graphic shows only the five highest-scoring catalysts from a list of 17 that were offered.


In short, the aim of network operators is to create smarter networks that have more network intelligence and give them a wider range of options to meet customer needs, whether those customers are consumers, enterprises, media companies or applications developers. So what does a smart network operator need to do? Among other things:

- Apply yield management techniques to maximize efficient use of resources
- Gain an ever-deeper understanding of what customers are doing, in real time where necessary
- Allow customers to apply their own filters to their broadband services ("charge me only for this"; "prevent my children doing this"; etc.)
- Warn customers about threats and problems, and offer solutions
- Enable users (consumers) to control non-flat-rate expenditure in real time (inform/warn; deploy real-time device meters; offer alternatives)
- Anticipate customer needs and offer services related to patterns of use (analogous to targeted Web advertising and e-commerce recommendation engines)
- Enable enterprises to direct services and applications at relevant customers (those who are in a specific location, using a specific device, on a network with certain characteristics, etc.)
- Enable developers to refine and target services more effectively, based on detailed (anonymized) subscriber behavior and usage profiling

In the remainder of this section we look at three use cases which illustrate these principles in more detail.

Use Case 1: Pro-active Service Optimization Based on Customer Value and Usage Patterns
Refining service offers in the LTE era means understanding who the most valuable customers are, understanding what they value, and having the means to improve their experience. Take, for instance, a platinum-tier customer who also regularly opts in and pays for additional services. By analyzing information from the DPI engine, the operator knows which services the customer values, for instance gaming applications. The operator is alerted, either in real time or historically, when key parameters such as Web page loading time and jitter breach a pre-defined threshold due to congestion. When this happens, the customer can be given priority access to network resources to restore gaming service parameters to acceptable levels.

This is a service idea that is attracting more attention and has been implemented by several operators, although the version usually deployed today is simply to prioritize premium-tier customers when there is congestion. The use case above takes this one stage further by analyzing the premium customer's specific patterns of use and ensuring that valued services meet certain minimum standards (in particular for packet loss, delay and jitter); a minimal sketch of this trigger logic follows the list below. Other actions that might be taken based on the patterns of use of the premium or valued customer:
- Shift the customer onto an alternative network infrastructure (e.g. a WiFi network) when available, in order to improve gaming service performance
- Alert the operator when the customer stops using applications of known value, and take appropriate action
- Alert the operator to significant changes in the customer's application use patterns that might be a trigger for an upgrade offer
- Alert the operator when an unknown or new device is used by the customer
- Allow the service provider to offer related services from third-party partners, based on patterns of use among similar customers
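The sketch referenced above illustrates the kind of threshold-based trigger this use case implies. The metric names, thresholds and tier label are assumptions chosen for illustration, not operator values, and a real deployment would feed such a decision into the policy system rather than print a message.

# A minimal sketch of the trigger logic described in Use Case 1, assuming
# hypothetical metric names and thresholds: when a high-value subscriber's
# gaming KPIs breach agreed quality levels, request a temporary priority boost.
GAMING_THRESHOLDS = {"latency_ms": 80, "jitter_ms": 20, "packet_loss_pct": 1.0}

def needs_boost(subscriber_tier: str, metrics: dict) -> bool:
    """Return True when a platinum subscriber's gaming KPIs breach the thresholds."""
    if subscriber_tier != "platinum":
        return False
    return (metrics["latency_ms"] > GAMING_THRESHOLDS["latency_ms"]
            or metrics["jitter_ms"] > GAMING_THRESHOLDS["jitter_ms"]
            or metrics["packet_loss_pct"] > GAMING_THRESHOLDS["packet_loss_pct"])

sample = {"latency_ms": 120, "jitter_ms": 35, "packet_loss_pct": 0.4}
if needs_boost("platinum", sample):
    print("request temporary priority boost from the policy server")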

With suitable adjustments, network operators can apply these principles to a wide range of other customers. For example, they could identify highly cost-conscious customers with limited service needs and offer them appropriate service packages that induce loyalty and prevent churn. A good current example is the 0.facebook.com service that has been deployed by a fair number of operators in developing countries, offering unlimited access to Facebook within services that otherwise have low data volume limits. Again, this use case could be more sophisticated than it currently is. Rather than the operator offering access to a service (Facebook) subjectively identified as valuable to the majority of customers, the operator could identify valued services on a subscriber-by-subscriber basis and (if cost-justified) offer one of those services on an unlimited or discounted basis.
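A hedged sketch of the per-subscriber variant just described follows. The usage figures, application names and budget threshold are invented; a real deployment would draw on DPI usage records and charging data to decide whether exempting a given service is cost-justified.

# Illustrative only: pick each subscriber's most-used service from DPI usage
# records, provided the volume fits a notional zero-rating cost budget.
def pick_zero_rated_service(usage_mb_by_app: dict, max_monthly_cost_mb: int = 500):
    """Pick the subscriber's most-used app whose volume fits the zero-rating budget."""
    affordable = {app: mb for app, mb in usage_mb_by_app.items() if mb <= max_monthly_cost_mb}
    if not affordable:
        return None
    return max(affordable, key=affordable.get)

print(pick_zero_rated_service({"facebook": 180, "youtube": 900, "email": 40}))  # -> "facebook"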

Use Case 2: Parental Control & Filtering Dashboard


Parents face an ever-increasing dilemma as they try to control what their children can do while connected to broadband service networks. As mobile broadband and smartphones spread to new and younger demographic categories, and more and more smartphone applications emerge, children are spending more and more of their lives attached to and using the Internet, often while physically separated from their parents. As a result, parents' ability to monitor their children's activities is becoming increasingly problematic. This provides a clear opportunity for operators to extend and enhance parental controls (often associated today with fixed network terminals in the home), using the increasingly granular information provided by advanced DPI engines. Via a simple set of dashboard controls, telcos can:

- Enable parents to customize the filtering out or blocking of undesirable applications and sites
- Enable parents to limit use of certain services while children are in specific locations (e.g. school), or at specific times
- Alert parents when a new security threat is detected
- Alert parents to a significant change in usage patterns (analogous to credit card usage checks)
- Alert parents to charging events of any kind
- Alert parents to undesirable content in e.g. emails, IM chat sessions, etc.
- Provide parents with detailed analysis of their children's Internet usage behavior

Parental control is highly likely to grow as an application, and using DPI to build more sophisticated controls is a clear opportunity for network operators that no other entity in the broadband value chain is in a position to offer; a simple sketch of such filter rules appears below. Again, this is a use case that could be extended to other areas. For example, one idea that has attracted attention is to monitor the activities of vulnerable elderly people in order to reduce Internet-borne threats, or specifically to monitor health.
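The sketch referred to above is shown here. The content categories, schedule and rule names are hypothetical, and a production parental-control dashboard would be far richer; the point is only to show how DPI classification results could be evaluated against parent-defined settings.

# Illustrative parental-control rule evaluation with invented categories and
# schedules: each classified flow on a child's line is checked against the
# parent's dashboard settings before being allowed.
from datetime import time

PARENT_SETTINGS = {
    "blocked_categories": {"gambling", "adult"},
    "school_hours": (time(8, 30), time(15, 30)),
    "blocked_in_school_hours": {"social-networking", "gaming"},
}

def allow(category: str, now: time, at_school: bool, settings=PARENT_SETTINGS) -> bool:
    """Decide whether a flow in this content category should be allowed."""
    if category in settings["blocked_categories"]:
        return False
    start, end = settings["school_hours"]
    if at_school and start <= now <= end and category in settings["blocked_in_school_hours"]:
        return False
    return True

print(allow("gaming", time(11, 0), at_school=True))   # False during school hours
print(allow("gaming", time(19, 0), at_school=False))  # True in the evening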

Use Case 3: Analytical Toolset/Service Refinement for Applications Developers


Nearly all mobile network operators want to create stronger relationships with third-party content owners, application developers and Web site portals. As the app store model has prospered, operators are trying to revive efforts to build, identify and offer valuable resources and information to these potential partners. The question is, what resources do network operators have that these developers and portals might value?

One source of obvious value is the immense amount of information that operators collect on the patterns of broadband use of individual customers or aggregated customer groups. Only network operators have a holistic view of the customer, and by collecting this data they are in a position to offer a range of analytical tools, both historical and real-time, that help developers and content owners refine the way services are offered. Among the questions that operators can answer:

- How long are customers spending on the partner's site or app versus comparable sites or applications?
- What is the overall quality of experience of the partner's customers (and if poor, how might it be improved)?
- Where are they using the service or content? On what devices? At what time of day?
- What else do customers do? What do they buy? Are there related services and content the partner should be offering?
- Are there related customers or customer groups at whom this service might be targeted?
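As a rough illustration of the kind of anonymized aggregate that could answer some of these questions, the sketch below (with an invented record format and sample data) computes per-device-class session statistics for a single partner application. It is a toy example, not a description of any operator's analytics platform.

# Hypothetical sketch: aggregate anonymized session statistics for one partner
# application, broken down by device class.
from collections import defaultdict
from statistics import mean

def partner_report(session_records, partner_app):
    """Summarize session count, duration and throughput per device class."""
    by_device = defaultdict(list)
    for rec in session_records:
        if rec["app"] == partner_app:
            by_device[rec["device_class"]].append(rec)
    return {
        device: {
            "sessions": len(recs),
            "avg_minutes": round(mean(r["minutes"] for r in recs), 1),
            "avg_throughput_kbps": round(mean(r["throughput_kbps"] for r in recs)),
        }
        for device, recs in by_device.items()
    }

records = [
    {"app": "partner-tv", "device_class": "smartphone", "minutes": 12, "throughput_kbps": 1800},
    {"app": "partner-tv", "device_class": "tablet",     "minutes": 34, "throughput_kbps": 3100},
    {"app": "other",      "device_class": "smartphone", "minutes": 5,  "throughput_kbps": 400},
]
print(partner_report(records, "partner-tv"))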

This information is the starting point for actions of various kinds. These might include prioritized QoS (especially under congestion conditions) for the partner application; unlimited or uncharged data usage models on a shared-revenue basis; targeted advertising; automated adaptation to device or location; and so on.

These use cases show that smarter network operators are in a position to build refined controls for the broadband network environment that can be used not just by their own internal product groups, but also by end users, content owners and developers.


III. DPI In The LTE Era: Meeting The Technology Challenge

In this section we consider the challenges and opportunities in building and deploying advanced DPI in the mobile broadband network. As traffic loads, applications, subscribers and use cases all multiply in the broadband and LTE era, immense strain will be put on DPI, requiring a platform that is more pervasive, powerful, scalable and adaptable. This section considers these issues and their implications, and looks in particular at the make-vs.-buy DPI decision for different kinds of stakeholders.

It is clear from Section 1 that there will be a step-change increase in the use of technologies such as DPI that improve the ability of network operators to analyze the behavior of customers, applications and networks and offer more refined controls to all actors. In this emerging environment, a sophisticated ability to examine traffic and content will take the technology far beyond simple DPI. Capabilities here will include:

- An up-to-date view of Internet application protocol signatures, an area of continual and accelerating change
- The latest heuristic and behavioral tools for packet flow analysis to identify disguised or encrypted traffic (illustrated by the sketch following this list)
- Detailed insight into content (e.g. not just identifying P2P, but identifying specific types of P2P content in order to filter good from bad content)
- User/service behavioral analysis to refine the performance of services and service offers
- The ability to extract more information from Web sessions (set-up time, duration, device, etc.), sometimes called metadata, to improve customer experience, meet legal data retention requirements, and so on
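The toy classifier below illustrates the heuristic, flow-feature approach mentioned in the second item above: when payload inspection fails (for example on encrypted traffic), flow-level behavior can still suggest an application class. The features and thresholds are invented and far simpler than a real behavioral engine.

# A toy heuristic classifier, assuming invented thresholds: guess an application
# class from behavioural flow features rather than payload signatures.
def classify_flow(avg_packet_bytes: float, upstream_ratio: float, duration_s: float) -> str:
    """Guess an application class from flow-level behaviour."""
    if avg_packet_bytes < 150 and duration_s > 60:
        return "voip"                       # long flows of small, regular packets
    if upstream_ratio > 0.4 and duration_s > 300:
        return "p2p-filesharing"            # sustained, roughly symmetric transfers
    if avg_packet_bytes > 1000 and upstream_ratio < 0.1:
        return "video-streaming"            # large, mostly downstream packets
    return "unknown"

print(classify_flow(avg_packet_bytes=120, upstream_ratio=0.5, duration_s=600))  # -> "voip"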

The DPI engine must also cope with accelerating trends in broadband networks that reinforce each other, creating a powerful multiplier effect:

- More broadband service customers, with ultimately all mobile customers becoming broadband customers
- A greater proportion of customers to whom DPI decisions must be applied; in the LTE era, in fact, this is likely to mean all customers
- More customers accessing Internet services, generating a need for more intelligent handling of traffic
- More traffic per customer as a result of the transition to LTE
- More applications and services per customer
- More powerful devices with new capabilities such as multi-tasking, HD video handling, etc.
- More policy and DPI decisions/transactions per customer


For all the stakeholders in the market, this will put great pressure on the performance of the core technologies used, and will re-ignite the question of what to outsource and what to keep in-house. For operators, we believe, this means understanding the whole value chain in DPI and related technologies, and ensuring that purchasing decisions do not bind the operator to proprietary system solutions that close off or limit access to state-of-the-art improvements.

Although simple DPI capabilities will increasingly be built into general-purpose network equipment such as gateways, as discussed in Section 1, there are questions to be asked about whether capabilities developed directly by the major telecommunications equipment manufacturers (TEMs) are fully adequate to the range of tasks set out in Section 2. The core business of most TEMs remains the supply of network equipment such as switches, routers, gateways, RAN gear (e.g. base stations) and so on. TEMs do also usually have capabilities in the control layer, and in general TEMs are putting emphasis on higher-layer functionality, in areas such as billing, subscriber data management, policy management and service delivery. Many have reasoned that for strategic reasons they need in-house development expertise in areas such as policy management (policy servers), and they have built teams or acquired companies to do this. This includes not just policy servers, but also policy enforcement, including both integrated and stand-alone DPI. However, the more distant these developing capabilities are from core telecommunications engineering expertise, and the more challenging the development cycle, the bigger the question as to whether it makes sense to insource all of this development. The key questions for TEMs are:

- Is the value of this capability rising for our customers?
- Do we need these capabilities for strategic reasons?
- Is this adjacent to our existing skill sets? Do we have the core skills in-house to add this?
- Is this domain evolving rapidly? Will the cost of insourcing exceed the cost of outsourcing?
- Can we easily acquire and retain these capabilities from reliable suppliers? Can we easily integrate a third-party product into our solution?
- Can we still differentiate ourselves if we buy this capability in from a third party?

As this list suggests, the issues here are somewhat complex. The skill sets of most TEMs clearly tend to lie in the lower layers of the network, and not in advanced computer processing and software. Most TEMs, however, do not want to be confined to these lower layers, and have been busy increasing their capabilities in some upper-layer areas as well for strategic reasons. At the same time, the increasing tendency in equipment manufacturing to use standardized, modular hardware platforms such as ATCA, with standardized interfaces between modules, is a clear indication of a continuing trend away from proprietary, vertically integrated solutions that include every capability. ATCA does not mandate the use of third-party software and application-ready systems, but equipment providers can, we believe, further reduce time to market and limit development costs by taking advantage of these solutions.

This does not mean that TEMs must give up on the idea of creating end-to-end offers, or on being the prime contractor in any overall solution; rather, it means there has been an increasing tendency to use best-of-breed solutions to fill in areas that are not optimal for internal development programs. Another issue here is that network operators tend to be skeptical about TEM claims of expertise in mainstream IT and software applications areas.


For example, in a survey of network operator purchasing plans in the policy management area, we found that while a majority of respondents expected to use TEMs as the main suppliers of bandwidth management products, a majority expected to use specialists for DPI, and only a small proportion expected to use TEMs as suppliers of applications and content management products. Our sense currently is that there is a trend towards making greater use of niche specialists in the DPI and applications management areas, and we expect this trend to strengthen as the pressure on DPI engines increases inexorably over the next few years. This does not mean that TEMs will not do any internal DPI development, but it does mean that they will likely seek partners to fulfill more advanced capabilities (and in some cases, all capabilities in this area). Outsourcing highly specialized technical capabilities allows TEMs to focus on overall solutions development and professional services, areas where they have been very active recently.

In sum, TEMs will likely outsource more capabilities to third-party niche specialists, for four main reasons:

- Market and technology development cycles are both speeding up, requiring continual upgrades and shorter release cycles in this area
- There is a requirement for a wider set of capabilities than hitherto to meet overall customer goals, demanding more specialized IT skills and straining internal capabilities
- Internal cost and resource constraints are increasing for TEMs as competition intensifies
- A flourishing supplier ecosystem and standardized approaches to hardware design allow easier integration of third-party technologies


Appendix A: About the Author


GRAHAM FINNIE CHIEF ANALYST, HEAVY READING
Graham has 20 years' experience in the telecommunications sector as an analyst and consultant. He joined Heavy Reading in September 2004 following a ten-year tenure at the Yankee Group, where he had directed a European broadband and media research program. He was appointed Chief Analyst at Heavy Reading in February 2007. As well as setting the overall direction of Heavy Reading's content, Graham has been responsible for a wide range of research, focusing primarily on next-generation broadband service and application architectures. His recent publications include "Re-Inventing the Telco: A Heavy Reading Progress Report" and "Policy Control & DPI: The New Broadband Imperative."
