News Roundup, March 13, 2020: What’s Happening in AIOps, ITOps, and IT Monitoring
On this day in 1781, British astronomer William Herschel discovered what he thought was a comet but later realized he’d discovered the planet Uranus.
1. AIOps injects intelligence into IT operations.
According to an article in CIO, organizations seeking to monitor IT assets are turning to artificial intelligence to get ahead of performance issues and to automate fixes.
Data is at the core of modern business, and the digital transformation of enterprises, factories, the IoT, and just about every possible consumer experience is creating a staggering amount of data. AIOps can process this data automatically, without human intervention. This makes AIOps adoption the next logical step as cloud platforms, managed service providers, and organizations undertake digital transformation.
More advanced deployments are beginning to use AI systems not just to identify problems, or to predict issues before they happen, but to react to events with intelligent, automated mitigation. To do this, companies need automation tools that can collect massive pools of information, apply analytics, reduce the noise, and drive faster problem identification and resolution.
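As a simplified sketch of that pipeline (the event shapes, thresholds, and playbook below are illustrative assumptions, not any vendor's API), noise reduction can be as basic as collapsing duplicate events before deciding whether an automated fix should fire:

```python
# Illustrative raw events, as a monitoring agent might emit them.
events = [
    {"host": "web-01", "check": "cpu", "severity": "warning"},
    {"host": "web-01", "check": "cpu", "severity": "warning"},   # duplicate noise
    {"host": "web-01", "check": "cpu", "severity": "critical"},
    {"host": "db-02",  "check": "disk", "severity": "critical"},
]

def reduce_noise(events):
    """Collapse duplicate (host, check) events, keeping the worst severity."""
    rank = {"warning": 1, "critical": 2}
    worst = {}
    for e in events:
        key = (e["host"], e["check"])
        if key not in worst or rank[e["severity"]] > rank[worst[key]["severity"]]:
            worst[key] = e
    return list(worst.values())

# Hypothetical remediation playbook mapping a failed check to an action.
PLAYBOOK = {"cpu": "restart_service", "disk": "expand_volume"}

def remediate(event):
    """Pick an automated mitigation, falling back to a human ticket."""
    action = PLAYBOOK.get(event["check"], "open_ticket")
    return f"{action} on {event['host']}"

alerts = reduce_noise(events)          # four raw events become two alerts
actions = [remediate(e) for e in alerts if e["severity"] == "critical"]
```

Real AIOps platforms correlate across far richer signals, but the shape is the same: compress the event stream first, then act on what remains.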
A problem with application performance may be due to a software issue, a networking issue, or a hardware issue. In a multi-cloud environment, the root cause can be in one cloud, or in another cloud, or be the result of a combination of factors.
If your AIOps infrastructure is fragmented, finding and fixing the root causes of problems can be a challenge. In the words of our CEO David Link, “Then you’re back to hand-to-hand combat, where every group has its own tools. If you have a unique tool for every application initiative, you can’t scale the enterprise that way.”
AIOps can also be a significant revenue generator. To see its full potential, look no further than the managed services provider (MSP) industry. For MSPs, AIOps means higher efficiency, lower costs, and faster resolution times, all significant competitive differentiators in this sector.
2. Top IT organizations share four secrets to their success.
What does it take to be a top IT performer? An article in TechBeacon, drawing on a survey by Digital Enterprise Journal, reveals four trends and best practices that the most successful ITOps organizations use for provisioning, deploying, monitoring, and managing enterprise IT systems. The four secrets are:
- Take a proactive approach. Proactive tools are starting to appear, but many IT operations teams are not leveraging their capabilities. These tools can tell teams not only when something has failed, but also when it is about to fail. The best ITOps organizations take advantage of these tools to gain real-time visibility into their systems, and they foster a culture willing to look for potential problems and solve them before the complaints start rolling in.
- Deliver the right data. Monitoring data is useful only if it is delivered in the right context, with meaningful insight into what it signifies. Without this contextualization, ITOps is left with too much noise to draw actionable conclusions from the data.
- Use quality of experience as a key indicator of performance. As demand increases for more clarity about the impact of IT operations on business goals, having full visibility into the quality of customer experience has become critical. IT teams must understand the user experience so that they can accurately direct resources in ways that will have the most significant impact on improving it.
- Consider scalability when choosing ITOps tools. Top-performing organizations are 84% more likely to select monitoring solutions based on predicted future data volumes. When evaluating a new solution, an organization should consider how it will grow with the enterprise's vision: not only scaling to accommodate more users, but also handling ever-larger volumes of data.
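The "about to fail" idea in the first point above can be sketched in a few lines (the capacity metric, sampling interval, and numbers are made-up examples): fit a linear trend to recent disk-usage samples and estimate when the disk will hit capacity, so the team can act before the outage.

```python
def hours_until_full(samples, capacity=100.0):
    """Fit a simple least-squares trend to hourly usage samples (%) and
    estimate how many hours remain until usage reaches capacity.
    Returns None if usage is flat or shrinking."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    slope_den = sum((x - mean_x) ** 2 for x in xs)
    slope = slope_num / slope_den  # percentage points per hour
    if slope <= 0:
        return None
    return (capacity - samples[-1]) / slope

# Disk usage (%) sampled hourly, growing about 2 points per hour.
usage = [80, 82, 84, 86, 88]
eta = hours_until_full(usage)  # roughly 6 hours until the disk is full
```

A production system would use more robust forecasting, but even this trivial trend line turns a "disk full" page at 3 a.m. into a ticket filed during business hours.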
3. Organizations should have an ITOps plan for business continuity during the COVID-19 pandemic.
Every IT organization needs to have a disaster recovery and business continuity plan in place. That’s not a new concept, according to an article in ITOps Times, but with the global emergence of a new strain of coronavirus, dubbed COVID-19, now is a good time to revisit those plans and ensure that your company can continue operating even in the worst-case scenario.
The main difference between disaster recovery plans and business continuity plans is that disaster recovery deals with how you would get a platform or service back up and running after a disaster, while business continuity is about taking care of the rest of the business.
A business continuity plan should focus on people as well as business processes and infrastructure. It should take a holistic approach that includes open internal channels of communication. Being able to reach employees not only through Slack, but also by text message, phone call, or personal email is essential.
Another consideration is that employees are likely to be more concerned with their families' safety than with their work commitments. Companies need to consider and plan for possible attrition. What do you do when people are not available to do their jobs? Are there things in the office that you need access to in order to carry out the business's highest-priority tasks? If so, can they be put somewhere remotely accessible? Can someone keep a copy at home? It could be any number of things; it will depend on the business.
Something to keep in mind, though, is that this kind of planning shouldn’t be limited to dealing with the current situation. These are things people should have been thinking about, regardless of what is going on now.
4. A great customer experience relies on great infrastructure monitoring.
According to an article featured in The Wall Street Technical Association, customers expect an application user experience (UX) that is seamless, instantaneous, and delivers on their needs: no delays, no glitches, and no downtime. That seamless experience depends on a reliable infrastructure. And a reliable infrastructure depends on ITOps.
Competitive businesses are continuously extending their technology footprint to include cloud, virtual machines, microservices, containers, Kubernetes, and more. It’s no longer good enough for ITOps to only have visibility at the data center. According to Flexera’s 2019 State of the Cloud Report, 84% of enterprises have a multi-cloud strategy, and 58% have a hybrid strategy, or hybrid IT infrastructure.
Problems arise when a hybrid IT infrastructure is managed by legacy monitoring tools because, according to Forrester’s September 2019 report Prevalence Of Legacy Tools Paralyzes Enterprises’ Ability To Innovate, “legacy toolsets — those with disjointed and outdated offerings (monitoring, alerting, analytics, etc.) and strategies (road map, market approach, etc.) [fail] to provide end-to-end visibility into the digital services that enterprises deliver to customers. This causes lengthened service disruptions, issues finding faults in the system, and poor customer experience, while not supporting the shift to hybrid-cloud environments or new application architectures.”
This means that if you are relying on legacy tools to monitor your hyper-convergent infrastructure, your UX is at risk. So what should you look for when you replace or consolidate your existing legacy ITOM tools?
- Visibility into your entire hyper-convergent infrastructure—the cloud, servers, network, storage, applications, and services
- Access to actionable data that can bring context and meaning by helping ITOps see how each component works together to support applications and business services
- Integration and sharing of data in real time that facilitates automation
- Cloud-native design that will keep the infrastructure that supports critical business services running smoothly
The use of a single monitoring platform designed for the cloud can provide visibility into the entire hyper-convergent infrastructure—the cloud, servers, network, storage, applications, and services—and help ITOps see how the components relate to one another.
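As a sketch of what that unified context might look like (the topology and names below are invented for illustration), a raw component alert can be enriched with the business services it supports, so ITOps sees customer impact rather than an isolated failure:

```python
# Hypothetical service topology: which business services depend on each component.
TOPOLOGY = {
    "payments-api":  ["checkout", "billing"],
    "k8s-node-07":   ["checkout"],
    "storage-array": ["billing", "reporting"],
}

def impacted_services(component):
    """Return the business services affected by a failing component."""
    return TOPOLOGY.get(component, [])

def enrich_alert(alert):
    """Attach business-service context to a raw component alert."""
    return {**alert, "impacted_services": impacted_services(alert["component"])}

alert = enrich_alert({"component": "payments-api", "status": "down"})
# alert now reports that checkout and billing are affected, not just that
# one API is down.
```

The same mapping works in reverse: when a customer reports a slow checkout, the topology narrows the search to the components that service depends on.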
Humans have always been driven to push the boundaries of our scientific and technical limits, and then push further. We do so in space and we do so in business. Business transformation is not the limit, it’s the beginning. And AIOps is the next step for mankind.
Just getting started with AIOps and want to learn more? Read the eBook, “Your Guide to Getting Started with AIOps”»