Network Traffic Monitoring

Is Network Traffic Monitoring Failing? Customer Complaints Remain the Number 1 Problem


Thursday, January 05, 2012 | Julian Palmer

Despite network monitoring tools consuming hundreds of millions of corporate dollars, customer complaints about poor service are at an all-time high, consuming even more of the corporate budget.

To understand why network monitoring fails in its objective, it is first necessary to examine how the use of networks has transitioned in the last ten to fifteen years.

Network traffic patterns have dramatically changed. With few exceptions, the early years of the Internet were driven by predominantly batch-oriented applications; email and simple browsing dominated the everyday usage model. As technology evolved and its appeal broadened, today's usage model became a stark contrast. Fueled by the growth of mobility and media-rich applications, it delivers an entirely different challenge for service providers. The demand for capacity has never been more extreme, and the natural human demand for rich media is exclusively 'time dependent'.

The problem: time-dependent data and packet policies don't mix.

We have all been there at one time or another: we are traveling from 'A' to 'B' for an application purpose that is both important and time dependent. It might be a connecting flight, a taxi ride across town to catch a Broadway show, or simply driving to a job interview. Suddenly we encounter an unexpected travel delay that threatens our ability to arrive in time to fulfill the purpose of the journey. The more vital the application purpose, the more severe the impact. In contrast, non-time-dependent travel is entirely different: delays become annoying, but the application purpose of the journey is seldom threatened.

[ White Paper - Understanding Internet Speed Test Results - The problem is not in the measurement, it is in understanding the test results as they relate to the application problem being experienced (PDF) ]

Driven by an insatiable demand for socially oriented applications, the Internet of today has become a crowded place to live. Unfortunately for application service providers, this crowded climate delivers a predominantly hostile environment for applications that depend heavily on time and available capacity for a quality user experience.

If the application mix were not enough of a challenge for network owners and other service providers, the architecture of the Internet provides two more significant obstacles that materially impact the user experience. First, networks are founded on protocols that only underwrite a 'best effort' delivery, meaning there are no guarantees of timely delivery. Second, the network chain that establishes the end-to-end connection between the client and the application server encompasses many different network providers over which the application provider has no influence or control.
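To make the 'best effort' point concrete, here is a minimal sketch in Python; the provider names, delays, and drop probabilities are entirely hypothetical. The end-to-end delay a packet experiences is simply the sum of the decisions made by each provider in the chain, and any one of them may discard the packet outright:

    import random

    # Hypothetical chain of providers between client and application server.
    # (name, typical per-hop delay in ms, delay when congested/regulated in ms,
    #  probability the hop drops the packet)
    PROVIDERS = [
        ("access ISP",      5,  60, 0.01),
        ("transit A",      10, 120, 0.02),
        ("transit B",       8,  90, 0.02),
        ("destination ISP", 5,  70, 0.01),
    ]

    def send_packet():
        """Return total one-way delay in ms, or None if any hop drops it."""
        total = 0.0
        for name, base, congested, p_drop in PROVIDERS:
            if random.random() < p_drop:
                return None  # 'best effort': no guarantee of delivery
            # Each provider applies its own policy, beyond anyone else's control.
            total += congested if random.random() < 0.2 else base
        return total

    results = [send_packet() for _ in range(10_000)]
    delivered = sorted(d for d in results if d is not None)
    print(f"delivered: {len(delivered) / len(results):.1%}")
    print(f"median delay: {delivered[len(delivered) // 2]:.0f} ms")
    print(f"worst delay:  {delivered[-1]:.0f} ms")

No single party in this chain can see, let alone guarantee, the end-to-end result.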

If you exclude packet corruption, which is relatively rare, application packet traffic is subject to the same delay threats as road traffic. Delays are either congestion-based or regulation-based, and the solution that resolves the former will certainly fuel the latter.

Road traffic has a number of significant similarities to network traffic. Cars are analogous to packets: both need to travel at speed, both consume capacity on the highway, both can be time dependent or not, both suffer when available capacity becomes limited, and both demand a regulatory process to establish order out of the resulting chaos when demand exceeds available resources. Necessity being the mother of invention, it is not surprising that the traditional solutions which evolved to regulate contention and the flow of road traffic have also found their way into the realm of network traffic.

The similarity of management approach between road and network traffic gives rise to several leading questions. How does the global requirement to regulate network traffic, mandated by excessive application demand and the need to ensure a better customer experience, balance itself when the end-to-end application delivery chain encompasses the many different business agendas of the individual network providers?
 
How are network priorities and policies evolved and agreed upon? More importantly, keeping in mind that it is in the interest of any provider to exclude all traffic entering the network that is not destined for a customer on that network, how do the policies differentiate between packets that are time dependent and those that are not? For example, what if a packet enters a provider's network with a priority; does that priority survive if the network policy is to redirect the packet off the network? What if the packet entering a provider's network is from an application that competes with one offered by the provider? What if the volume of packets from the competing application consumes a material percentage of available capacity, impacting a broader spectrum of the provider's customer base? Should traffic identified as 'business' class take priority over traffic branded 'consumer' class, even if the consumer traffic is time sensitive and the business traffic is not? In essence, who should win the priority assessment? Remember, a provider's regulatory policies exist only to underwrite the service guarantees necessary to deliver a good customer experience; failure to achieve this purpose will surely have a negative effect on customer growth and ultimately customer retention.
 
So where does this all lead? As demand for media-rich applications explodes, fueled by the staggering growth of smart devices and a wide variety of 'desirable' media-based services, the profile of the resulting network traffic patterns has dramatically changed. Today, with few exceptions, this change is interpreted by most as a demand for speed; everyone is obsessed with speed, megabits this and megabits that! It is hardly surprising that the many competing providers battle aggressively on the 'specmanship' of megabits per second, much as camera manufacturers tout the virtues of megapixels in photography. In reality, what is needed is not speed but quality. Unfortunately, the quality of time-dependent data is difficult to monitor because the timeliness of a packet is essentially unknown.
 

As an example of why conventional network monitoring fails when it comes to time-dependent data, let's examine the simple task of monitoring a road. Standing on the side of a road, an assessment can be made of the state of traffic through a series of observations. Is the traffic moving? Is the traffic moving close to the speed limit? If not, is it because the road is too crowded? And so on. If all measurements are within tolerance the report will be 'good'; if not, it will be 'bad'. Is such an assessment helpful?

Unfortunately, network monitoring is of minimal value when it comes to time-dependent packets.

As regulatory policies are aggressively used to manage traffic demand and to police compliance with capacity limits, delays are naturally injected into the traffic stream. No monitoring process can therefore determine whether any one car consuming capacity on the highway will reach its destination in time to fulfill its application purpose. It might be that one of the cars is heading to the airport but is sufficiently late that the flight will be missed. In essence, network monitoring cannot assess whether any application will meet acceptable service levels.
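A small sketch, with hypothetical delays and thresholds, illustrates the gap: a conventional link-level check passes comfortably, yet one of the time-dependent packets has already 'missed its flight':

    # Hypothetical traffic sample: (packet id, one-way delay in ms,
    # application deadline in ms, or None if the packet is not time dependent)
    packets = [
        ("voip-1",   40,  150),
        ("voip-2",  900,  150),   # queued by policy: misses its deadline
        ("video-1",  60,  500),
        ("email-1", 400, None),   # batch traffic: any delay is acceptable
    ]

    # Conventional monitoring: compare the aggregate against a threshold.
    avg_delay = sum(delay for _, delay, _ in packets) / len(packets)
    print("monitor verdict:", "good" if avg_delay < 500 else "bad")  # 'good'

    # What the application actually experienced, packet by packet:
    for pid, delay, deadline in packets:
        if deadline is not None and delay > deadline:
            print(f"{pid}: delivered, but too late to serve its purpose")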

There are further complications that erode the value of monitoring. In the case of 'usage compliance', it is common practice for providers' policies to simply discard packets that exceed allowable rates. The problem with this approach is that a large percentage of the discarded packets are recovered through retransmission: a packet can be sent many times before it is accepted at the application end. Discarding packets may limit demand for a moment in time, but in reality it dramatically increases demand and reduces quality. Because retransmitted packets are identical to the originals, monitoring cannot distinguish them or assess the scale of the problem.
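The amplification is straightforward to quantify. Assuming each transmission attempt is discarded independently with probability p, the expected number of sends per delivered packet is 1 / (1 - p), so a policy that drops traffic to cap demand actually inflates it:

    # Expected transmissions per delivered packet under an independent
    # per-attempt drop probability p (geometric distribution): 1 / (1 - p).
    for p in (0.05, 0.20, 0.50):
        expected_sends = 1 / (1 - p)
        extra_load = expected_sends - 1
        print(f"drop rate {p:.0%}: {expected_sends:.2f} sends per packet "
              f"({extra_load:.0%} extra demand on the same links)")

At a 20% drop rate the same links carry 25% more traffic; at 50%, every delivered packet has been sent twice on average.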

The transition to traffic patterns that are materially dependent on timely delivery will always be threatened when the segmentation of the underlying transport crosses the boundaries of competing commercial companies, especially when those competing entities openly sport different agendas for the regulation of application traffic.

The quality of time-dependent network data delivery is about consistency, not speed. Consider a customer who subscribes to a 10 Mbps cable service which the provider delivers by admitting the customer's data for 1 second out of every 10 on a 100 Mbps cable. Yes, this regulatory policy yields a 10 Mbps average service, but what impact does the 9-second policy delay have on applications such as a movie, a live TV broadcast, or a simple VoIP call? Nine seconds of policy-invoked delay, traffic lights if you will, cannot be compensated for by 1 second of high-speed 100 Mbps bursting.
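A quick sketch of that arithmetic, using the figures above: the duty-cycled service does average the advertised 10 Mbps, but the 9-second gap dwarfs what any real-time application can absorb:

    LINK_RATE = 100e6   # bit/s while the customer's traffic is admitted
    ON_TIME   = 1.0     # seconds of service per regulation cycle
    CYCLE     = 10.0    # seconds per regulation cycle

    avg_rate = LINK_RATE * ON_TIME / CYCLE
    print(f"average rate: {avg_rate / 1e6:.0f} Mbps")   # 10 Mbps, as sold

    # A typical VoIP codec emits a packet every 20 ms, and a jitter buffer
    # can usually mask only on the order of 100-200 ms of missing audio.
    gap = CYCLE - ON_TIME
    print(f"service gap per cycle: {gap:.0f} s "
          f"(~{gap / 0.020:.0f} consecutive voice packets delayed)")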
 
Businesses must find answers to two significant questions.

Given that traffic policies are ultimately driven by conflicting business agendas; given that these policies govern data transport; given that these policies are also, for the most part, beyond the control of the service and support group, including the customer's own network; given that network monitoring, while useful, cannot address issues of time dependency; and finally, given that time-dependent rich media services are the exploding market:

1. How can the problem of guaranteeing service quality be resolved with any certainty of success? And, as a follow-on…

2. What solutions are available that help service providers go beyond simple monitoring to assess network quality end-to-end, such that they can better deliver (and support) quality services to any customer?
 
 
This is the first in a series of articles addressing the issues surrounding network traffic monitoring and the importance of 'quality of service', not just 'speed of service.' The second article is: Do 'Network Monitoring' Solutions Help or Hinder the Online Customer Experience?
 
 
About the Author

Julian Palmer has spent more than 30 years in technology, specialising in operations management and service delivery. In his current role as Chief Technology Officer of Visualware, Julian has leveraged his experience and detailed knowledge of application data flow to evolve solutions for measuring and reporting network quality.
  

Founded in January of 2001, Visualware, Inc. is a leading creator of solutions to manage and report the true customer and end-user Web experience.

Visualware’s Enterprise solutions include a suite of products and OEM technologies that enable any business or organization to view, report and troubleshoot a customer's actual transaction response time anywhere in the world -- as it happens in real time. Learn more about Visualware


 
