1.) Learn more about the impact Prometheus has on observability.

This article by The New Stack reviews Prometheus' legacy in the software industry and its widespread adoption as open source software.

Hatched by ex-Googlers at SoundCloud, Prometheus has grown over the last 10 years to more than 700 open-source contributors and more than a million users. It was also the first open-source tool to integrate trigger alerting with service monitoring, which it organizes into time series identified by key-value pairs.
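To make that data model concrete, here is a minimal sketch in Go (using the prometheus/client_golang library; the metric name, labels, and handler path are hypothetical) of how an application exposes a counter whose key-value labels turn a single metric name into many distinct time series:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// One metric name; each unique combination of the "method" and "path"
// label values becomes its own time series.
var httpRequests = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "myapp_http_requests_total", // hypothetical metric name
		Help: "Total HTTP requests handled, by method and path.",
	},
	[]string{"method", "path"},
)

func main() {
	http.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		httpRequests.WithLabelValues(r.Method, "/orders").Inc()
		w.Write([]byte("ok"))
	})
	// Prometheus scrapes the current samples from this endpoint on an interval.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```

Each (method, path) pair the application observes becomes its own series, which is exactly the shape of data Prometheus scrapes, stores, and alerts on.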

Prometheus was the first to open observability into increasingly complex systems—which certainly weren’t made less complex with Kubernetes—democratizing data so more members of an organization could gain that level of visibility and understanding.

The impact on DevOps and site reliability engineers (SREs) was significant. For the first time, people outside of hyper-scalers had the tools to observe the complexity they unleashed with cloud native and similar scaling approaches. Prometheus was the first tool to allow you to dynamically detect and monitor workloads of arbitrary complexity and deployment—and do math with the data.
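"Doing math with the data" happens through PromQL. As an illustrative sketch only (it assumes a Prometheus server reachable at localhost:9090 and the hypothetical counter from the sketch above), the Go API client can run an aggregation such as a per-second request rate summed by path:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// Assumed Prometheus server address; adjust for your environment.
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := v1.NewAPI(client)

	// Per-second request rate over the last five minutes, summed across
	// all series that share the same "path" label value.
	query := `sum by (path) (rate(myapp_http_requests_total[5m]))`

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	result, warnings, err := promAPI.Query(ctx, query, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println(result)
}
```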

Where’s this special tech heading next? There are two main themes driving the Prometheus roadmap:

  • Tighter integrations; and
  • Expansion beyond cloud native software into areas such as networks and power grids.

2.) Find out how to boost Kubernetes container runtime observability with OpenTelemetry.

This post on the Kubernetes blog explains how to boost container runtime observability with OpenTelemetry.

When we talk about observability in the cloud native space, OpenTelemetry (OTel) will probably come up at some point in the conversation. That's great, because the community needs standards it can rely on to develop all cluster components in the same direction. OpenTelemetry enables us to combine logs, metrics, traces, and other contextual information (called baggage) into a single resource. Cluster administrators and software engineers can use this resource to get a view of what is going on in the cluster over a defined period of time.

Kubernetes consists of multiple components; some are independent, and others are stacked together. When we encounter a problem running containers in Kubernetes, we start by looking at one of those components. With the increased architectural complexity of today's cluster setups, finding the root cause of a problem is one of the most time-consuming tasks we face. Even if we know which component seems to be causing the issue, we still have to take the others into account to maintain a mental timeline of the events going on. How do we achieve that? OpenTelemetry to the rescue.
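As a rough illustration of what that looks like in code (a sketch only, with made-up span, attribute, and baggage names, and a stdout exporter standing in for a real collector), the OpenTelemetry Go SDK lets a component start a span for a unit of work and attach baggage that travels with the request context across component boundaries:

```go
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/baggage"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Export spans to stdout for illustration; a real cluster setup would
	// typically use an OTLP exporter pointing at a collector.
	exporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		panic(err)
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	// Baggage carries contextual key-value pairs along with the request
	// context as it crosses component boundaries. Names are hypothetical.
	member, _ := baggage.NewMember("tenant", "team-a")
	bag, _ := baggage.New(member)
	ctx := baggage.ContextWithBaggage(context.Background(), bag)

	// Start a span for one unit of work and attach attributes to it.
	tracer := otel.Tracer("example/pull-image")
	ctx, span := tracer.Start(ctx, "pull-image")
	span.SetAttributes(attribute.String("image", "quay.io/example/app:latest"))
	// ... the work being traced would happen here ...
	span.End()

	fmt.Println("baggage seen downstream:", baggage.FromContext(ctx).Member("tenant").Value())
}
```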

3.) Read more about top-down and bottom-up processes and deployment considerations for AIOps.

This article by CIO presents important considerations before starting your AIOps journey.

Where IT is concerned, there's no longer a valid business case for the old argument of "doing more with less." It's no longer a question of whether your organization needs to fully optimize its IT production environments, but why you haven't optimized them already. To deploy AIOps, there are two general reference models, which we refer to as Bottom-Up and Top-Down deployment.

Deploying AIOps via the "Bottom-Up" model means it is applied at the very foundational levels of the organization's IT infrastructure layer and across all standard operating procedures (SOPs) within that framework. This type of deployment has a longer lead time. Once the SOP learning is in place, AIOps can look at data flow and how an organization manages master data, then start applying organizational use cases to the situations it identifies as actionable.

In the "Top-Down" model, AIOps is applied to the most critical business data flows first, then automates others one by one. This approach, while providing a faster ROI, is usually a response to a specific problem that an organization has identified. It might create the illusion that the AI journey is no longer needed.

While these two deployment models are very much "horses for courses," dependent on the reasoning and needs of an organization, they are not necessarily mutually exclusive. A "hybrid" approach, in which organizations realize value by triaging immediate key problem areas through top-down quick fixes while simultaneously committing to a bottom-up AIOps deployment, can, if carefully planned, present very good options.

4.) ScienceLogic performs well in the 2022 Forrester Wave for AIOps for the third time in a row.

According to Technode, we are continuing our success with strong industry results in the Forrester Wave.

ScienceLogic landed as a Strong Performer in the 2022 Forrester Wave results for AIOps, with the highest marks possible in the product vision, execution roadmap, performance, and automation and remediation criteria, along with the second-highest marks for the dependency/topology mapping criterion.

These results come on the heels of an extremely profitable, growth-oriented Q3 and the company's acquisition of machine learning analytics firm Zebrium. In October, ScienceLogic announced the acquisition, aimed at automatically finding the root cause of complex, modern application problems. “Organizations must invest in digital transformation initiatives, but they’re worried about the immense cost of the IT teams that must drive those initiatives,” said Dave Link, ScienceLogic CEO. “They must rely on machines to analyze and act on their data, both of which our technology lets them do faster and at lower cost. Zebrium is going to be huge for our continued success.”

Just getting started with AIOps and want to learn more? Read the eBook, “Your Guide to Getting Started with AIOps”»
