Gaining visibility and insight into an application and its underlying network has been a long-standing requirement for enterprises. As a result, a number of players have entered the application performance management space, and the corresponding market has grown considerably over the years. It's not rocket science - knowing what's happening to your applications carries both operational and top-line business value. After all, you need to know when your system isn't working. Even more importantly, you need to know when there is a way to improve the end user experience so users can get more business done.
Historically, this has been achieved in one of three ways: (1) aggregate logs from application servers, NetFlow data from routers, and syslog data from infrastructure; (2) use software agents to collect data from the application server directly; (3) use network taps to collect data about application transactions, including the relevant network performance data. The first two yield good data and enable a wide variety of analysis. The last is arguably the most powerful, as it ties the application details to the end user experience in a directly measurable way.
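To make the first of those approaches concrete, here is a minimal sketch (not from the original post, and in Python purely for illustration) of a collector that listens for NetFlow v5 datagrams from a router and prints one line per flow. The port number is an assumption to be matched to your router's export configuration; the header and record layouts follow the published NetFlow v5 format.

    import socket
    import struct

    NETFLOW_PORT = 2055  # a common collector port, but an assumption -- match your router config

    # NetFlow v5 message header: version, record count, sys uptime,
    # unix seconds, unix nanoseconds, flow sequence, engine type,
    # engine id, sampling interval
    HEADER = struct.Struct("!HHIIIIBBH")

    # NetFlow v5 flow record (48 bytes): src/dst/nexthop addresses,
    # in/out interfaces, packet and byte counts, first/last timestamps,
    # ports, pad, TCP flags, protocol, ToS, AS numbers, masks, pad
    RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", NETFLOW_PORT))

    while True:
        data, addr = sock.recvfrom(8192)
        version, count = HEADER.unpack_from(data, 0)[:2]
        if version != 5:
            continue  # this sketch only understands NetFlow v5
        for i in range(count):
            rec = RECORD.unpack_from(data, HEADER.size + i * RECORD.size)
            src, dst, octets = rec[0], rec[1], rec[6]
            src_port, dst_port, proto = rec[9], rec[10], rec[13]
            print(f"{socket.inet_ntoa(src)}:{src_port} -> "
                  f"{socket.inet_ntoa(dst)}:{dst_port} proto={proto} bytes={octets}")

Even a toy collector like this shows why the approach is popular: the router does the work, and the collector just decodes fixed-format records.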
The problem with the last method is that the footprint from which the information is collected is rapidly losing ground. To put network taps onto the wire, you have to control the network and have a place to put them. As more applications go virtual and the use of cloud (both public and private) grows, the network footprints where a tap could sit are disappearing altogether. After all, can you imagine the conversation where we ask a cloud service provider to let us drop a network tap so we can see all their packets?
To solve this problem, we have to look to another place where the information lives but where no physical change to the network is required -- and that place is the existing network infrastructure.
Consider for a moment what something like a NetScaler knows about a given transaction: it knows the network experience, because its TCP/IP stack tracks very detailed timing information in order to optimize which packets it sends; it knows what the application is doing, because it tracks application state for the purpose of load balancing; and it knows server health and performance, because it continuously monitors the servers it balances traffic across. All of that information, put in the hands of the right analytics tool, is immensely valuable.
But what about the physical footprint of the NetScaler? I can't very well ask my cloud provider to run MY hardware in their cloud, for the same reason I can't ask them to put my tap on their network. And which analytics tool makes the most sense? Depending on the need, the types of reports, and the kinds of analysis to be done, different tools will appeal to different organizations.
The first matter is easy - the NetScaler is available in virtual form. In fact, any modern infrastructure product that can be virtualized is being virtualized. That removes the issue of physical presence. The latter is a more subtle problem.
When we started work on AppFlow, we thought long and hard about this. We could come up with a way of getting deep application insight without a tap, but if no one was on the other side to take the data and turn it into something interesting, the data would be useless. Partnering with a few analytics companies is always possible, but we were bound to miss a group of users who need another kind of analytics tool. What we really wanted was to make it possible for anyone in the analytics space to get the data and run with it.
The result? We opened up the AppFlow specification and decoupled ourselves from it. In essence, the data became democratized -- anyone who wanted to work with the data and make it useful could do so. Almost immediately we saw players from the analytics space jump on this and add support. Some we knew, a few we didn't. Even more significantly, NetScaler was no longer the only piece of infrastructure that could generate this data. Any device that holds application state information (e.g., firewalls and WAN optimization devices) can generate this data and share a different perspective on the network. The more perspectives that are shared with analytics tools, the more powerful the resulting analysis becomes.
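What does "opened up" mean in practice? AppFlow records travel over IPFIX (RFC 5101), so any collector that speaks IPFIX can receive them. The sketch below (again Python, again illustrative rather than from the post) binds to UDP port 4739, the IANA-registered IPFIX port, and walks the sets inside each message; a real collector would also cache the templates so it could decode the data records that follow.

    import socket
    import struct

    IPFIX_PORT = 4739  # IANA-registered port for IPFIX, which AppFlow uses as its wire format

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", IPFIX_PORT))

    while True:
        data, addr = sock.recvfrom(65535)
        # 16-byte IPFIX message header: version (always 10), total length,
        # export time, sequence number, observation domain ID
        version, length, export_time, seq, domain = struct.unpack_from("!HHIII", data, 0)
        if version != 10 or length > len(data):
            continue  # not a well-formed IPFIX message
        print(f"message from {addr[0]}: {length} bytes, domain {domain}, seq {seq}")

        offset = 16  # sets follow the message header
        while offset + 4 <= length:
            set_id, set_len = struct.unpack_from("!HH", data, offset)
            if set_len < 4:
                break  # malformed set; stop parsing this message
            kind = "template" if set_id in (2, 3) else "data"
            print(f"  set id={set_id} ({kind}), {set_len} bytes")
            offset += set_len

The point isn't this particular script; it's that nothing in it is proprietary. That is what lets any analytics vendor pick up the data and run with it.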
From our customers' perspective, the interest has been overwhelming. Already, at least one customer has put an order for network taps valued at more than $1M on hold while they look at how to leverage their existing network footprint. A few phone calls later, their analytics provider was on board to support AppFlow.
So as you think about what value you want to get from your existing network footprint, be sure to think about AppFlow: all the data you need, in a democratized form, that anyone and everyone can process and turn into business intelligence. You can read more about AppFlow at www.appflow.org.
By: Steve Shah