1. Learn how General Datatech plans more acquisitions after getting a new investment.

This piece in CRN details how the new leadership at General Datatech has made changes, including adopting ScienceLogic as part of its platform, and is anticipating a period of unprecedented growth through carefully selected acquisitions.

General Datatech, more commonly known as GDT, also hired a new chief revenue officer, the latest in a string of executive hires aimed at preparing it for further growth, said Tom Ducatelli, CEO of the Dallas-based solution provider. Those moves follow an investment in GDT by H.I.G. Capital, a Miami-based provider of debt and equity capital.

“A big part of the discussion between GDT and H.I.G. was, what would that look like four years down the road, and what economies of scale we can offer,” Ducatelli said. “GDT is invested significantly in world class systems. We use SAP, we have ScienceLogic as part of our platform.”

2. Pursue monitoring, but do not forget observability.

As important as monitoring is, observability is essential to ensuring quick, accurate responses when issues arise. This article on RTInsights.com explains why organizations should devote as much time and resources to observability as they do to monitoring.

At a high level, observability shares characteristics with monitoring. Monitoring crawls applications for signs of breaches, breakages, and other problems, and the software pings the appropriate person or department to alert them that something needs to be fixed. Observability also looks for problems. Both support the overall technological health of the organization, both improve performance and reliability, and both allow key members of the organization to watch for problems without resorting to time-consuming manual checks.

Monitoring and observability both draw on multiple data sources to keep operations running smoothly; they collect information about all of the data environments involved and provide visibility into what is happening. Observability, however, takes the process to the next level in one fundamental way: it supplies the context needed to understand why a problem is occurring, not just that it occurred. Without that step, companies remain at a disadvantage when a red-flag alert fires.
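To make that distinction concrete, here is a minimal Python sketch. The function names, metrics, and thresholds are hypothetical and are not taken from the RTInsights piece: the monitoring-style check only reports that a latency threshold was crossed, while the observability-style check attaches correlated context (slowest downstream call, recent deploys) so responders can start on the "why" immediately.

```python
# Minimal sketch (hypothetical data and function names) contrasting a
# monitoring-style check with an observability-style check that carries
# correlated context alongside the alert.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Alert:
    message: str
    context: Dict[str, str] = field(default_factory=dict)  # empty for plain monitoring


def monitor_latency(latency_ms: float, threshold_ms: float = 500.0) -> List[Alert]:
    """Monitoring: ping the right team when a metric crosses a threshold."""
    if latency_ms > threshold_ms:
        return [Alert(f"p95 latency {latency_ms:.0f} ms exceeds {threshold_ms:.0f} ms")]
    return []


def observe_latency(latency_ms: float, traces: Dict[str, float],
                    deploys: List[str], threshold_ms: float = 500.0) -> List[Alert]:
    """Observability: the same check, but the alert carries correlated
    telemetry (slowest downstream call, recent deploys) to speed up root cause."""
    alerts = monitor_latency(latency_ms, threshold_ms)
    for alert in alerts:
        slowest = max(traces, key=traces.get)
        alert.context = {
            "slowest_dependency": f"{slowest} ({traces[slowest]:.0f} ms)",
            "recent_deploys": ", ".join(deploys) or "none",
        }
    return alerts


if __name__ == "__main__":
    alerts = observe_latency(
        latency_ms=740.0,
        traces={"auth-service": 90.0, "orders-db": 520.0, "cache": 12.0},
        deploys=["orders-db schema migration 14:02 UTC"],
    )
    for a in alerts:
        print(a.message, a.context)
```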

3. Explore six AIOps hurdles to overcome.

An article in CIO.com discusses how to overcome obstacles to successfully manage IT operations through AIOps.

IT operations teams have a lot to juggle. They manage servers, networks, cloud infrastructure, user experience, application performance, and cybersecurity, often working independently of one another. Staffers are often overworked, burdened with excessive alerts, and struggling to solve problems that involve multiple domains.

Enter AIOps, a burgeoning field of technologies and strategies that injects artificial intelligence into IT operations to address the challenges IT operations teams face: reducing false positives, using machine learning to spot problems before they occur, automating remediation, and providing a holistic view of the enterprise.

Here are six potential challenges of switching to AIOps:

  • No clear strategy before adoption
  • Poor or incomplete data
  • Inadequate coverage
  • Paying double when teams are not ready to give up their preferred toolsets
  • Missing the big picture
  • The culture change that comes with the shift to AIOps
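One of the AIOps capabilities mentioned above, using machine learning to spot problems before they occur, can be sketched very roughly with statistical anomaly detection. The standard-library-only Python example below flags metric values that deviate sharply from a rolling baseline instead of relying on a fixed threshold, which is one way teams try to cut false positives. The class, metric, and sample values are hypothetical and are not drawn from the CIO.com article.

```python
# Minimal sketch (stdlib only, hypothetical metric stream) of flagging
# anomalous metric values statistically rather than with a fixed threshold.

from collections import deque
from statistics import mean, stdev
from typing import Deque, Optional


class RollingAnomalyDetector:
    """Flags points that deviate more than `z_max` standard deviations
    from the recent rolling window."""

    def __init__(self, window: int = 30, z_max: float = 3.0) -> None:
        self.history: Deque[float] = deque(maxlen=window)
        self.z_max = z_max

    def score(self, value: float) -> Optional[float]:
        """Return the z-score if the point looks anomalous, else None."""
        anomaly = None
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0:
                z = (value - mu) / sigma
                if abs(z) > self.z_max:
                    anomaly = z
        self.history.append(value)
        return anomaly


if __name__ == "__main__":
    detector = RollingAnomalyDetector()
    cpu_samples = [42.0] * 40 + [43.5, 88.0]  # sudden spike at the end
    for t, sample in enumerate(cpu_samples):
        z = detector.score(sample)
        if z is not None:
            print(f"t={t}: cpu={sample}% looks anomalous (z={z:.1f})")
```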

4. See how AI will enhance the world of monitoring and management through observability.

This article in VentureBeat explains the importance of data observability.

To fully embrace observability, the enterprise must engage it in three different ways:

  • First, AI must fully permeate IT operations since this is the only way to rapidly and reliably detect patterns and identify root causes of impaired performance.
  • Secondly, data must be standardized across the ecosystem to avoid mismatch, duplication, and other factors that can skew results (a minimal sketch of this point follows the list).
  • And finally, observability must shift into the cloud, as that is where much of the enterprise data environment is transitioning as well.
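As a minimal illustration of the standardization point in the second bullet, the Python sketch below maps telemetry from two hypothetical collectors onto one common schema, normalizes units, and drops duplicate records that would otherwise skew results. The tool names, field names, and unit conventions are assumptions made for illustration, not anything described in the VentureBeat article.

```python
# A minimal sketch, assuming two hypothetical collectors ("tool_a", "tool_b"):
# telemetry is mapped onto one schema, units are normalized, and duplicate
# records from overlapping collectors are dropped.

from typing import Dict, List, Tuple

# Assumed mapping from each tool's field names to a common schema.
FIELD_MAP: Dict[str, Dict[str, str]] = {
    "tool_a": {"host": "host", "cpu_pct": "cpu_percent", "ts": "timestamp"},
    "tool_b": {"hostname": "host", "cpu": "cpu_percent", "time": "timestamp"},
}


def normalize(source: str, record: Dict[str, object]) -> Dict[str, object]:
    """Rename fields to the common schema; tool_b is assumed to report CPU as a 0-1 ratio."""
    out = {FIELD_MAP[source][key]: value
           for key, value in record.items() if key in FIELD_MAP[source]}
    if source == "tool_b":
        out["cpu_percent"] = float(out["cpu_percent"]) * 100.0  # ratio -> percent
    return out


def dedupe(records: List[Dict[str, object]]) -> List[Dict[str, object]]:
    """Keep one record per (host, timestamp) so overlapping collectors are not double-counted."""
    seen: Dict[Tuple[str, str], Dict[str, object]] = {}
    for rec in records:
        seen.setdefault((str(rec["host"]), str(rec["timestamp"])), rec)
    return list(seen.values())


if __name__ == "__main__":
    raw = [
        ("tool_a", {"host": "web-01", "cpu_pct": 71.0, "ts": "2023-01-01T00:00Z"}),
        ("tool_b", {"hostname": "web-01", "cpu": 0.71, "time": "2023-01-01T00:00Z"}),
    ]
    print(dedupe([normalize(src, rec) for src, rec in raw]))
```

Deduplicating on (host, timestamp) is just one plausible key; a real pipeline would also reconcile collection intervals and metric semantics before merging sources.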

The problem is that observability is viewed in different contexts by, say, DevOps and IT. While IT has worked well by linking application performance monitoring (APM) with infrastructure performance monitoring (IPM), emerging DevOps models, with their rapid change rates, are chafing under the slow pace of data ingestion. By unleashing AI on granular data feeds, however, both IT and DevOps will be able to quickly discern the hidden patterns that characterize rapidly evolving data environments.

Just getting started with AIOps and want to learn more? Read the eBook “Your Guide to Getting Started with AIOps.”
