Agent vs. Agentless Monitoring: Why You Need Both

The debate over agent or agentless monitoring in ITSM has evolved. Now, you need both methods to collect data in all its forms and a plan for what to do with it.

Agent or agentless monitoring?

It’s a debate that has persisted over the last decade and, candidly, has been one of our most widely read blog topics. To be sure, both methods have their pros and cons, but as 2019 rolls on and our industry grows in sophistication, it’s time to retire the binary thinking that monitoring is an either-or proposition.

In modern IT, the answer is that you need both (and more). 

Russ Elsner
Sr Director
App Management Strategy
ScienceLogic

Today’s ephemeral, virtual, hybrid, software-defined technologies emit data in many different ways. Yes, you can gather data with agents or through agentless techniques, but that is just the tip of the iceberg. Technologies and cloud platforms also emit data via APIs, logs, telemetry streams, events, and more. In addition to the technologies themselves, you can gain critical insight from orchestration layers and domain-specific monitoring tools.

The initial argument over agent versus agentless monitoring centered on the best way to retrieve data. But the data story – like the argument itself – has progressively gotten more complicated. We know that we need more data; the new struggle is how to contend with its volume, variety, and velocity, otherwise known as the three v’s.

Volume – The total amount of telemetry originating from the operational environment has increased dramatically. Logs, data streams, and APIs keep getting more granular, and as devices grow in sophistication, they add layers of abstraction and produce data along ever more dimensions.

Velocity – Simply defined as the pace of change – which is accelerating in IT – the forces of velocity are felt by both application and infrastructure teams. App teams are constantly delivering, integrating, and pushing out changes that affect the application and its composition. Accordingly, the infrastructure that the app runs on is changing – containers are spinning up or down, and the software-defined network is reconfiguring on the fly. The demand on app and infrastructure teams to absorb these changes is now measured in milliseconds, if not faster.

Variety – Consider the collection of technology that exists today: there’s cloud, multi-cloud, private and public cloud, different generations of technology, traditional data centers, software-defined data centers – the list seems infinite. Now consider that a typical business service comprises many different applications that run in different environments, on different generations of technology, with different code, and they all need to work together.

When you combine the three v’s, you have an explosion of data – all of it valuable. The problem is that any one of the three v’s is difficult to handle with manual processes; when all three come together, making sense of the data and keeping up with the associated tasks becomes all but impossible.

To provide real value to the organization, you need to fuse the diverse sets of data together to form a “big picture.” But again, attempting to capture, clean, and standardize the variety of data at the time it’s being produced while making sense of it isn’t possible with manual processes.

This brings us back to the original debate of how to obtain it: agent or agentless? But even the terms “agent” and “agentless” need to be updated from their historical connotations.

Agentless monitoring used to signify traditional element polling. Today, some of the best and richest data come from platform APIs. For example, the best way to understand container and cloud structure is to pull data from the Kubernetes API or from the cloud provider’s API (AWS, Azure, GCP, etc.). This agentless data provides a tremendous outside-in viewpoint.
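To make that outside-in viewpoint concrete, here is a minimal sketch of flattening a Kubernetes pod-list API response into simple inventory records. This is an illustration only, not ScienceLogic’s implementation: the payload shape mirrors a trimmed Kubernetes `PodList` response, and `flatten_pods` is a hypothetical helper, not a client-library call.

```python
# Sketch: normalize a Kubernetes pod-list API response (agentless,
# outside-in data) into flat inventory records. The sample payload is a
# trimmed version of what GET /api/v1/pods returns; flatten_pods is a
# hypothetical helper invented for this example.

def flatten_pods(pod_list: dict) -> list:
    """Extract (pod, namespace, node) records from a PodList payload."""
    records = []
    for item in pod_list.get("items", []):
        meta = item.get("metadata", {})
        records.append({
            "pod": meta.get("name"),
            "namespace": meta.get("namespace"),
            "node": item.get("spec", {}).get("nodeName"),
        })
    return records

sample = {
    "kind": "PodList",
    "items": [
        {"metadata": {"name": "web-7f9", "namespace": "prod"},
         "spec": {"nodeName": "node-a"}},
        {"metadata": {"name": "api-c42", "namespace": "prod"},
         "spec": {"nodeName": "node-b"}},
    ],
}

for rec in flatten_pods(sample):
    print(rec)
```

Records like these are what let a platform build the infrastructure topology without touching the workloads themselves.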

Agents are critical to ensuring an inside-out viewpoint. Agents used to imply a proprietary relationship between the specific agent and its vendor. That idea is also outdated. Products like AppDynamics, Dynatrace, and New Relic generate rich telemetry about the internal operations of the application. This data, too, must be fused into a broader perspective.

You achieve true service visibility when you fuse the outside-in with the inside-out perspective to create a comprehensive view.

How to Use Agent and Agentless Monitoring:

  1. Start with agentless monitoring of everything and get the breadth across the entire technology stack within IT. Here at ScienceLogic, the data retrieved provides us with the context necessary to build the infrastructure topology and contextual (north-south) mapping.
  2. Take advantage of existing deep, agent-based application data from APM vendors in the environment. Agents provide deep application-specific performance and transaction data. They also provide the application topology (east-west) mapping that allows ScienceLogic to map the application to the underlying infrastructure.
  3. Extend application visibility to other applications through the use of ScienceLogic agents. In places where you don’t or can’t place APM agents, use the lightweight agent from ScienceLogic, which can also map the application to the underlying infrastructure.
  4. Integrate events and performance data from other agentless or agent-based monitoring and management tools in the environment.

It’s almost quaint to have the “agent vs. agentless” debate today. As technology continues to evolve and opens doors that were once unimaginable, we have to think more broadly about data and put ourselves in a position to take advantage of it in all of its forms. But to take advantage of the data, you need a platform that’s capable of contextualizing it and providing deep insights that lead to automation.

You need a data strategy. We can help.
