What does this mean for the company’s IT professionals? It means being able to identify traffic and application flows, and to quickly diagnose problems. When the applications in use could be hosted literally anywhere, how do you figure out, when there is an issue, what the problem actually is?
Back in the day, you could monitor the utilization of a WAN or Internet link, for example, and if there wasn’t congestion relative to the available bandwidth, or obvious packet loss, you were in pretty good shape.
Nowadays, users might be connecting over the internal network to resources inside the local enterprise, or using applications over the Internet: real-time services such as hosted voice or Webex, and non-real-time applications such as O365, Salesforce, and a host of others. Add to that the myriad content people consume for both productive and sometimes non-productive tasks (YouTube, Facebook, Xfinity, Hulu, and whatever else), and the question becomes: how do you manage all of this and make sure users are getting a good experience?
The answer is application awareness: using tools that can measure application performance and/or apply criteria for controlling applications at a granular level.
If you simply look at traffic as a whole, you will not get a true picture of what is going on in the network. Tools that can identify applications as they traverse the network, with some intelligence built around them, greatly assist IT personnel in controlling traffic and troubleshooting issues.
Some of the tools available on the LAN side are technologies like NBAR, NetFlow, sFlow, and J-Flow, which can sample traffic traversing the network and export those flows to collectors; the collectors then examine the traffic to give a picture of what is going on internally.
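To make the idea of flow export concrete, here is a minimal Python sketch that parses one NetFlow v5 export datagram. The field layout follows Cisco's published v5 format (24-byte header plus fixed 48-byte flow records); the parsing code itself is illustrative, not any particular collector's implementation.

```python
import socket
import struct

# NetFlow v5 wire format: a 24-byte header followed by up to 30
# fixed-size 48-byte flow records, all fields big-endian.
HEADER_FMT = "!HHIIIIBBH"   # version, count, uptime, secs, nsecs, sequence, engine type/id, sampling
RECORD_FMT = "!4s4s4sHHIIIIHHxBBBHHBBxx"
HEADER_LEN = struct.calcsize(HEADER_FMT)   # 24 bytes
RECORD_LEN = struct.calcsize(RECORD_FMT)   # 48 bytes

def parse_netflow_v5(datagram):
    """Return (header dict, list of flow dicts) from one NetFlow v5 export packet."""
    version, count, uptime, secs, nsecs, seq, etype, eid, sampling = struct.unpack(
        HEADER_FMT, datagram[:HEADER_LEN])
    if version != 5:
        raise ValueError("not a NetFlow v5 packet")
    flows = []
    for i in range(count):
        off = HEADER_LEN + i * RECORD_LEN
        (src, dst, nexthop, in_if, out_if, pkts, octets, first, last,
         sport, dport, tcp_flags, proto, tos, src_as, dst_as,
         src_mask, dst_mask) = struct.unpack(RECORD_FMT, datagram[off:off + RECORD_LEN])
        flows.append({
            "src": socket.inet_ntoa(src), "dst": socket.inet_ntoa(dst),
            "sport": sport, "dport": dport, "proto": proto,
            "packets": pkts, "bytes": octets,
        })
    return {"version": version, "count": count, "sequence": seq}, flows
```

A real collector would listen on UDP (typically port 2055), aggregate these records over time, and roll them up per application, host, or interface.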
Other tools such as AppDynamics, AppNeta, SolarWinds, ExtraHop, and newer players such as New Relic can all be used to gain visibility into the network from an application perspective.
With the information these tools provide, IT administrators can create policies and plans to control traffic, and also gain greater insight into capacity planning and corporate compliance.
Leveraging these tools in a corporate environment, as well as within cloud deployments, can provide valuable data on trends and on normal versus anomalous behavior, and give the enterprise insight into which applications its users actually rely on.
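One simple way to turn collected traffic data into "normal vs. anomalous" signals is a rolling baseline: flag any sample that sits well above the recent mean. The sketch below is a hypothetical, bare-bones version of that idea (the window size and threshold are made-up defaults; commercial tools use far more sophisticated baselining):

```python
from collections import deque
from statistics import mean, stdev

def anomalies(series, window=24, k=3.0):
    """Return indices of samples (e.g. hourly per-app byte counts) that sit
    more than k standard deviations above the rolling baseline of the
    previous `window` samples."""
    history = deque(maxlen=window)   # sliding baseline of recent samples
    flagged = []
    for i, value in enumerate(series):
        if len(history) >= 2:        # need at least 2 points for a stdev
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (value - mu) / sigma > k:
                flagged.append(i)
        history.append(value)
    return flagged
```

For example, twenty hours of traffic oscillating around 100 MB followed by a sudden 1000 MB hour would flag only the spike, not the ordinary variation.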
In the event of a service outage or a brownout, the data being collected can be leveraged to understand the nature of the problem more efficiently than with traditional methods.
Furthermore, where applicable, flows can be constrained on a per-application basis, or in some cases per user, so that all users receive an acceptable level of service.
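Per-application constraints of this kind are commonly built on a token bucket. The sketch below is a simplified, hypothetical illustration; in practice the enforcement lives in a router, firewall, or proxy, and the application labels and rates here are invented for the example.

```python
import time

class TokenBucket:
    """Classic token bucket: refill `rate` tokens (bytes) per second,
    allow bursts up to `capacity` bytes."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes, now=None):
        now = time.monotonic() if now is None else now
        # Refill tokens for the time elapsed since the last check, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False   # over the per-app budget: drop, queue, or remark the packet

# Hypothetical policy table: one bucket per application label,
# rates in bytes per second (e.g. ~1 Mbit/s for streaming).
POLICIES = {"streaming": TokenBucket(125_000, 250_000),
            "voice":     TokenBucket(12_500,  25_000)}

def admit(app, nbytes, now=None):
    bucket = POLICIES.get(app)
    return True if bucket is None else bucket.allow(nbytes, now)
```

The same structure extends naturally to per-user limits: key the policy table by (user, application) instead of application alone.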
Check out some of the new tools out there to help make your life easier. You will be glad you did!