If you ever find yourself navigating the back roads of Maine and stop to ask for directions, don’t be surprised if the old farmer in his green cap and Dickies slowly drawls, after giving it considerable thought, “You cahn’t get theyah from heah.”

The idea is that the topology in that corner of the world presents a host of problems for locomoting. Mountains and gullies, rivers and lakes, lack of signage, an abundance of signage, dirt roads, landmarks and lexicon known only by locals, seasonal variations, and a lack of connectivity for the GPS-dependent all conspire to make navigation difficult under the best of circumstances. It sounds a lot like typical network topology, come to think of it. But while a meandering trip over Downeast byways might be an appealing way to spend an afternoon, it’s no way to manage your IT infrastructure.

In fact, a 2016 Forrester report entitled IT Efficiency Begins With Effective Discovery and Dependency Mapping found that enterprises engaged in IT projects like virtualization and server consolidation were hindered because they did not have a complete view of dependencies (56%), did not know what resources were required by the various applications (36%), and lacked a complete view of all the applications in use by the enterprise (31%). That is a lot of blind spots. And it is why comprehensive enterprise discovery, contextual topology, trusted data, and application dependency mapping are critical to IT operations. They are complementary components of managing today’s IT ecosystem.

You can’t manage what you don’t know, and with the complexity and ephemerality inherent in today’s networks, that information can change from moment to moment. In a 2018 white paper, Next-Generation ITAM: Building for Tomorrow’s Use Cases Today, Enterprise Management Associates (EMA) identified the top ten challenges for efficient IT operations. All ten were either directly or indirectly associated with incomplete or inaccurate data, including:

  • Operational inefficiencies;
  • Incomplete data;
  • Fragmented technologies;
  • Lack of analytics in optimizing data from IT assets;
  • Poor data quality;
  • Difficulty sharing data effectively across IT silos;
  • Low or inadequate levels of automation;
  • Fragmented/siloed technologies;
  • Inability to measure operational efficiencies; and
  • Communication/process issues.

What is application dependency mapping?

Application dependency mapping is pretty much what the name says it is: a process of identifying all the elements in an ecosystem and understanding how they work together. It’s connecting a lot of dots to give IT managers a clear picture of the health of their environment and of overall application performance. Application dependency mapping is important for IT operations management because when things go wrong, it is critical that you quickly identify the points of failure and find the fastest path to recovery.

Every application—from a complex e-commerce application to something as common as email—has a lot of moving parts. If performance is slow, application dependency mapping tells you where to look to understand where the bottleneck is, what resources might be overtaxed, and how to resolve the problem. If an application isn’t working, a reliable dependency mapping tool is the key to identifying where there might be a disconnection or whether something needs to be replaced.
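To make that concrete, a dependency map is essentially a directed graph: components are nodes, and an edge means “depends on.” The minimal sketch below, with hypothetical component names, shows how a mapped graph lets you walk from a slow application to every resource it touches:

```python
# A dependency map as a directed graph: nodes are components,
# edges mean "depends on". All names here are hypothetical.
from collections import deque

dependencies = {
    "web-frontend": ["checkout-service", "search-service"],
    "checkout-service": ["orders-db", "payment-gateway"],
    "search-service": ["search-index"],
    "orders-db": ["san-lun-07"],  # the storage the database sits on
    "search-index": [],
    "payment-gateway": [],
    "san-lun-07": [],
}

def downstream(component):
    """Return everything the given component depends on, directly or not."""
    seen, queue = set(), deque([component])
    while queue:
        for dep in dependencies.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# If web-frontend is slow, these are the places to look:
print(downstream("web-frontend"))
```

Real maps are discovered rather than hand-written, of course, but the troubleshooting workflow is the same: start at the symptom and walk the edges.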

It stands to reason, then, that if your map is incomplete or out of date, you may not be able to find the problem, and you won’t be able to get there from here. There might be a workload you don’t know about—or that you thought had been retired—sapping compute power from an application you rely on, and if you can’t find the source of the problem, you will find yourself on a wild goose chase. But with a comprehensive discovery of the entire IT infrastructure, contextual network topology, and trusted data behind your application dependency mapping, you can get there from here. The EMA report found that enterprises that were the most successful at IT management were “likely to have an application discovery and dependency mapping (ADDM) capability deployed.”

While the concept of application dependency mapping is self-evident, the implementation of mapping tools is a little more complicated. There are different approaches and levels that can produce different results. There are also misconceptions about application dependency mapping that can hinder decision-making and keep you from achieving optimal results from your investment in application mapping.

Four Methods of Application Dependency Mapping: The Pros and Cons

Depending on which approach you use for automated discovery and mapping, you may be at risk of not seeing your IT ecosystem in its entirety. That’s because some older systems are engineered to operate in a specific environment or for specific management platforms.

Within the realm of agent and agentless discovery, there are three well-established techniques for accomplishing application dependency mapping, all of which have their pros and cons. The first is sweep and poll, which remotely scans the network and matches what it finds against a library of application fingerprints (a minimal sketch follows the pros and cons below):

PROS (Sweep and Poll)

  • + From a single location, you can remotely sweep an entire network structure or data center.
  • + From an instrumentation perspective, it is lightweight, which makes it incredibly attractive.

CONS (Sweep and Poll)

  • − The dynamic nature of today’s IT environment (VMs, containers, auto-scalers) complicates sweep and poll’s ability to accurately survey and capture what’s taking place and changing within an ecosystem.
  • − Sweeping a data center takes a long time, leading some organizations to do this nightly or even weekly, creating a “strobe light” effect: you can see when the light is flashing, but much can change in the subsequent period of darkness.
  • − Fingerprints work fairly well for off-the-shelf applications but are not helpful for custom applications or applications not in the fingerprint library.
  • − Sweep and poll is poor at learning the application dependencies between the different application components.
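For illustration, here is a minimal sketch of the sweep-and-poll idea: attempt TCP connections across an address range and match open ports against a small fingerprint library. The subnet and the port-to-service fingerprints are hypothetical stand-ins for a real discovery tool’s library:

```python
# A toy "sweep and poll": probe an address range over TCP and label hosts
# by which well-known ports answer. Subnet and fingerprints are examples.
import socket
import ipaddress

FINGERPRINTS = {80: "http", 443: "https", 3306: "mysql", 5432: "postgresql"}

def sweep(subnet="192.168.1.0/30", timeout=0.5):
    found = {}
    for host in ipaddress.ip_network(subnet).hosts():
        open_services = []
        for port, service in FINGERPRINTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((str(host), port)) == 0:  # 0 means the port is open
                    open_services.append(service)
        if open_services:
            found[str(host)] = open_services
    return found

print(sweep())
```

Note that even when this works, it only identifies components; it says nothing about which component talks to which, which is the dependency half of the problem.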

PROS (NetFlow & Packet)

  • + Sees network traffic in real-time, so dependencies and changes can be detected immediately.
  • + The actual truth: the operational deployment often differs from the initial design, and the difference between a developer’s whiteboard sketch and what is really happening can be dramatic.
  • + Not dependent on pre-built blueprints or on foreknowledge of what the application should look like.

CONS (NetFlow)

  • − Scale: NetFlow is great for WANs, but not so good in data centers (where applications live). Modern link speeds (10, 40, and 100 gig) can produce billions of flow records from every interface. Since traffic will likely cross multiple devices, many flow records are duplicates (though not byte-identical, since each device reports the flow from its own vantage point). Network devices generally cannot generate NetFlow on all interfaces at data center speeds due to the processing burden. Furthermore, few monitoring tools can process and analyze the massive volume of raw flow data.
  • − Flow records show only IP addresses and TCP ports, so they cannot differentiate application-level dependencies. For example, suppose two web applications (A and B) are hosted on the same server. With NetFlow, if you see an outbound flow to a backend database, there is no way to tell whether the database traffic is part of application A or application B (see the sketch after this list).
  • − Technologies like NAT, load balancers, firewalls, proxies, and tunnels complicate the process of piecing the flows back into the broader application.
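To see why the network 5-tuple is not enough, here is a minimal sketch with hypothetical addresses: two applications on the same server each open a connection to the same database, and the resulting flow records carry nothing that tells them apart:

```python
# A flow record carries only the network 5-tuple, so two applications on
# the same server produce indistinguishable records. Values are made up.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRecord:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    protocol: str

# App A and App B both live on 10.0.0.5 and both query the same database.
flow_from_app_a = FlowRecord("10.0.0.5", 49812, "10.0.0.9", 5432, "tcp")
flow_from_app_b = FlowRecord("10.0.0.5", 49944, "10.0.0.9", 5432, "tcp")

# Only the ephemeral source ports differ, and those identify connections,
# not applications. Nothing says which app opened which connection.
print(flow_from_app_a, flow_from_app_b, sep="\n")
```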

CONS (Packet)

  • − Cost and placement: packet-capture appliances connect to specific points of your network/data center and capture packets. This data can be used to map application flows (similar to NetFlow). However, they only provide visibility where the probes are placed.
  • − It can be expensive—and sometimes impossible—to put packet capture everywhere you need (or want) to see every application flow in a data center. This leaves islands of visibility and not the holistic view that you were initially seeking.

Agents can provide real-time monitoring of both incoming and outgoing traffic to find and understand every component, and they immediately recognize status changes as the topology changes.

PROS (Agents)

  • + Agents perform in real time, so if something spins up or down, they will capture and report what’s taking place, offering immediacy benefits that are increasingly relevant with today’s use of ephemeral technologies.
  • + It’s easier to push agents out than it is to deploy expensive pieces of hardware, which makes them less expensive than packet-capture appliances.
  • + Agents have an internal resolution that allows them to differentiate between different applications running on the same IP address (see the sketch after these pros and cons), which takes on increasing significance if you have multiple applications on the same server.

CONS (Agents)

  • − You need to put agents everywhere, or else you run the risk of not having complete visibility.
  • − You have to know what you’re trying to monitor and remember to put an agent on it. Agents’ ability to see one hop away mitigates this somewhat, but coverage still relies on someone remembering every system.
  • − Cost can come into play: the price of putting an agent on every server can add up quickly.
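Here is a minimal sketch of why agents get application-level resolution, assuming the third-party psutil package: because the agent runs on the host itself, it can ask the operating system which process owns each TCP connection, which is exactly what a flow record cannot tell you. (On some systems this needs elevated privileges to see other users’ processes.)

```python
# Agent-style discovery: map local process names to the remote endpoints
# they talk to, using the OS's own connection table via psutil.
import psutil

def local_dependencies():
    """Return (process name, remote endpoint) pairs for live TCP connections."""
    edges = set()
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.pid or not conn.raddr:
            continue
        try:
            name = psutil.Process(conn.pid).name()
        except psutil.NoSuchProcess:
            continue  # the owning process exited between the two calls
        edges.add((name, f"{conn.raddr.ip}:{conn.raddr.port}"))
    return edges

for name, remote in sorted(local_dependencies()):
    print(f"{name} -> {remote}")
```

Run on a server hosting applications A and B, this distinguishes their database connections by owning process, where NetFlow saw only the shared IP address.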

There’s also a fourth source of application dependency mapping emerging: the orchestration platforms themselves. Platforms like Kubernetes, Cisco CloudCenter, or ACI deploy and maintain all of the underlying application components, so the orchestration platform knows at any given point which individual components are part of a specific application. Since today’s IT ecosystem is highly ephemeral and you constantly need to monitor, measure, and manage what’s taking place within the environment, a hybrid strategy that combines the best of multiple practices is required. For example, the ScienceLogic SL1 platform can take application maps from AppDynamics and augment them with maps from our agents to provide a real-time, thorough view of what’s taking place within the IT ecosystem.
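As a sketch of what orchestration-driven mapping can look like, the snippet below asks the Kubernetes API which pods make up each application, grouping on the conventional (but not guaranteed) “app” label. It assumes the official kubernetes Python client and a working kubeconfig:

```python
# Orchestration-driven mapping: the platform placed every component,
# so asking it yields a current inventory, grouped by the "app" label.
from collections import defaultdict
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

components = defaultdict(list)
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    app = (pod.metadata.labels or {}).get("app", "unlabeled")
    components[app].append(
        (pod.metadata.namespace, pod.metadata.name, pod.status.pod_ip)
    )

for app, pods in components.items():
    print(app, pods)
```

Because the orchestrator deployed every component itself, this inventory is authoritative and current in a way that sweeps and probes can only approximate.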

Application Dependency Mapping: An Ideal State

As you transition to Artificial Intelligence for IT Operations (AIOps), application dependency mapping is necessary, but it’s not enough, because application monitoring is insufficient on its own. There are so many different layers of technologies making up your IT infrastructure—from legacy to virtual machines to the cloud to microservices—that there is an increasingly complex set of dependencies between the app and the infrastructure. Changes to the application and underlying infrastructure occur so quickly that we are past the point where humans can figure out how these parts are related; we need a machine to do it for us. Today, AIOps is not just a buzzword or something to think about doing next year. It has become a necessity you can’t afford to ignore, and you cannot do AIOps workflows without a strong topology. The image above represents a typical, complex, multi-technology application, or business service. It shows the relationships between the application and the underlying layers of the infrastructure. What we do at ScienceLogic is:

  • First, we map the application components east-west.
  • Then we map the underlying technologies into north-south stacks, giving you a complete picture of the application or “business service.”

This way, if and when something goes wrong anywhere in your infrastructure—which is often shared across multiple applications and business services—you can quickly identify the impact and isolate the root cause. Agents can monitor east-west traffic between servers by watching application flows, but the north-south dependencies mostly come from other sources. ScienceLogic’s patented PowerMap technology discovers and maintains the north-south dependencies across all the layers of abstraction for private, public, and hybrid cloud environments. This true dependency map is the foundation of algorithms for event correlation, root-cause analysis, and behavioral prediction (an illustrative sketch follows).
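As an illustration of the idea (not ScienceLogic’s actual PowerMap implementation), the sketch below combines hypothetical east-west application edges with north-south infrastructure stacks, then walks outward from a failure to find everything it impacts:

```python
# Fusing east-west application edges with north-south infrastructure
# stacks, then walking outward from a failure. All names are hypothetical.

east_west = {"web": ["checkout"], "checkout": ["orders-db"]}  # app -> apps it calls
north_south = {                                               # component -> what it runs on
    "web": "vm-12", "checkout": "vm-12", "orders-db": "vm-31",
    "vm-12": "esx-host-2", "vm-31": "esx-host-3",
    "esx-host-3": "san-lun-07",
}

def impacted_by(failed):
    """Everything that sits on top of, or ultimately depends on, `failed`."""
    impacted = set()
    # Climb each north-south stack to see whether it rests on the failure.
    for component in north_south:
        node = component
        while node is not None:
            if node == failed:
                impacted.add(component)
                break
            node = north_south.get(node)
    impacted.discard(failed)
    # Propagate east-west: a caller of an impacted app is impacted too.
    changed = True
    while changed:
        changed = False
        for app, callees in east_west.items():
            if app not in impacted and impacted.intersection(callees):
                impacted.add(app)
                changed = True
    return impacted

print(impacted_by("san-lun-07"))
# -> {'orders-db', 'vm-31', 'esx-host-3', 'checkout', 'web'} (set order varies)
```

Because the checkout service and the web frontend never touch san-lun-07 directly, only the combined map connects a LUN failure to the user-facing symptom.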

Application dependency mapping at ScienceLogic.

You have an APM tool. Great! But is it enough?

If you have an Application Performance Management (APM) tool like Dynatrace or AppDynamics, that’s fantastic. These tools give you application dependency mapping capabilities that provide you with a consistent view across your apps. But your infrastructure is not just made up of applications, is it? What if you have a LUN failure that is causing your app to underperform, as in the image above? An APM tool alone does not give you the root cause of why your app is underperforming. But the ScienceLogic SL1 platform does.

SL1 fuses events, performance, configuration, and relationship data from your APM tools like Dynatrace and AppDynamics with the rest of your infrastructure data, and it maps the APM tool’s hosts to the physical and virtual infrastructure—not just servers, but network, storage, and cloud as well. SL1 works with your APM tools to connect the dots by mapping the app relationships from the APM data with the app and infrastructure relationship data it collects natively across your entire IT environment. You get visibility across ALL your infrastructure and apps—not just the 10-20% of apps monitored by your APM tool.
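The fusion itself can be pictured as a join on hostname. In this minimal sketch, the data shapes are hypothetical placeholders, not the actual Dynatrace, AppDynamics, or SL1 APIs:

```python
# Join application topology exported from an APM tool with infrastructure
# relationships discovered separately, keyed on hostname. Hypothetical data.
apm_map = {                     # from the APM tool: app -> hosts it runs on
    "checkout": ["vm-12", "vm-13"],
}
infra_map = {                   # discovered natively: host -> infra beneath it
    "vm-12": ["esx-host-2", "san-lun-07"],
    "vm-13": ["esx-host-3", "san-lun-07"],
}

full_stack = {
    app: {host: infra_map.get(host, []) for host in hosts}
    for app, hosts in apm_map.items()
}
print(full_stack)
# A LUN failure on san-lun-07 now traces to "checkout", which the
# APM view alone could not explain.
```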

Since applications are constantly evolving, you need the right application dependency mapping to provide you with the actionable insights necessary to accelerate your journey to AIOps. ScienceLogic is here to help you get there.

Want to learn more about APM? Read the analyst report, “Reimagining APM in the AIOps Era.”
