1.) Learn the key processes and deployment considerations for implementing AIOps.
This article by BetaNews presents process and deployment considerations for AIOps.
IT production environments are an essential part of any modern business organization. Today, it’s virtually impossible for an enterprise to function effectively without a defined set of IT solutions. The amount of data managed and needed to run a business is growing exponentially, as is the amount of data needed to keep these IT environments always available. These two facts alone make a strong case for the Intelligent Automation (IA) of IT, or AIOps, because data really is the lifeblood of modern business.
When it comes to how an organization deploys AIOps, there are two general reference models, which we refer to as bottom-up and top-down deployment. In the ‘bottom-up’ model, AIOps is applied at the foundational levels of the organization’s IT infrastructure and across all standard operating procedures (SOPs) within that framework. In the ‘top-down’ model, AIOps is applied to the most critical business data flows first, and then extended to other flows one by one.
While these two deployment models are very much ‘horses for courses,’ they are not necessarily mutually exclusive. A ‘hybrid’ approach, in which organizations realize value by triaging immediate problem areas with top-down quick fixes while simultaneously committing to a bottom-up AIOps deployment, can, if carefully planned, present a very good route.
2.) Traditional logging and observability may be a waste of developers’ time.
This article by The New Stack explains why traditional logging wastes developers’ time.
Cloud computing has been around for a while, but only in recent years has the full shift away from on-premises servers picked up pace. As more data and processing moved to the cloud, it could host more complicated applications, and as those applications have grown more complicated, so have their architectures.
To handle the many mechanisms and services newer applications use or offer, they were broken down into their own micro-level apps: microservices. Pulling all the components out of a monolith so each one could run more efficiently on its own naturally required a complex architecture to make them work together.
There are different subtypes of shifting left pertinent to testing, security, and DevOps overall. In testing, we push tests to earlier stages of development to get quicker feedback. In security, we start securing a new app well before release, and iterate on that security as the app is built.
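To make the testing subtype of shift-left concrete, here is a minimal, illustrative sketch (the `apply_discount` function and its test are invented for this example, not drawn from the article): a unit test like this, run in a pre-commit hook or CI job, surfaces a defect minutes after the code is written rather than after release.

```python
# Hypothetical business function used only to illustrate shift-left testing.
def apply_discount(price: float, percent: float) -> float:
    """Return price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Running this at commit time is the "shift left": feedback arrives
    # during development, not in production.
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99

if __name__ == "__main__":
    test_apply_discount()
    print("all checks passed")
```

The same check could be wired into a test runner such as pytest; the point is where in the lifecycle it runs, not the tooling.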
What developers need is to keep pushing on the leftward front: find new ways to minimize troubleshooting time by absorbing it into the default workflow of every stage of development. The ultimate goal is to make it easier for businesses to understand their own software, and to narrow the gaps between detecting a code-related performance problem, pinpointing the exact line of code at fault, and quickly deploying a fix, all without writing more code or redeploying the application, so the customer experience stays seamless.
3.) Find out more about the evolution of observability and what to expect in the future.
According to Sequoia Capital, the definition of observability, in any system, is the ability to measure its internal state by examining the output of associated sensors. Observability for software systems boils down to observing the performance of each underlying component at a granular level, from managed services to physical servers or containers to code and configuration, while understanding their many interconnections and data communication flows.
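The idea of measuring internal state through outputs can be sketched in a few lines. This is an illustration only, assuming an invented `charge` operation and an in-memory event list in place of a real telemetry backend; no vendor's schema is implied.

```python
import json
import time

EVENTS = []  # stand-in for a real telemetry pipeline

def observed(fn):
    """Wrap a function so each call emits a structured event:
    the 'sensor output' from which internal state can be inferred."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        except Exception:
            status = "error"
            raise
        finally:
            EVENTS.append({
                "service": "checkout",  # illustrative service name
                "operation": fn.__name__,
                "duration_ms": round((time.perf_counter() - start) * 1000, 3),
                "status": status,
            })
    return wrapper

@observed
def charge(amount):
    return {"charged": amount}

charge(42)
print(json.dumps(EVENTS[0]))
```

Even this toy version shows the principle: the caller never inspects `charge` directly, yet its latency and success rate become observable from the emitted events.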
We expect observability companies of this decade to incorporate the following four elements:
- Services, not servers, as the unit of observation;
- Unification of underlying data types;
- Real-time mapping of system topology and interconnections; and
- Intelligence and automation in observability.
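The third element above, real-time topology mapping, can be illustrated with a small sketch. The call records here are invented sample data; in practice they would come from distributed-tracing output, and the service names are assumptions for the example.

```python
from collections import defaultdict

# Invented caller -> callee records, as a tracing system might report them.
spans = [
    {"caller": "frontend", "callee": "cart"},
    {"caller": "frontend", "callee": "catalog"},
    {"caller": "cart",     "callee": "payments"},
    {"caller": "cart",     "callee": "payments"},  # repeat calls collapse
]

# Derive a service dependency graph: each service mapped to its callees.
topology = defaultdict(set)
for span in spans:
    topology[span["caller"]].add(span["callee"])

for service, deps in sorted(topology.items()):
    print(f"{service} -> {sorted(deps)}")
```

Note that the unit of observation is the service, not the server, matching the first element in the list; rebuilding this graph continuously from fresh spans is what makes the mapping "real time."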
In closing, observability in 2030 will be far more advanced, intelligent, automated, and unified than it is today in 2022.
4.) ScienceLogic acquires machine learning analytics provider Zebrium to boost root-cause visibility.
This article by Yahoo Finance announces ScienceLogic’s acquisition of machine learning provider Zebrium to improve root-cause visibility.
Moving toward its goal of freeing up the resources of enterprise IT teams and optimizing digital experiences, ScienceLogic has acquired machine learning analytics firm Zebrium to automatically find the root cause of complex, modern application problems. The acquisition drastically reduces the time it takes to identify, diagnose, and resolve business-service-impacting issues, lowering IT costs and delivering superior customer and employee experiences.
“Our acquisition of Zebrium has its genesis in those customer mandates and stems from years of conversations with partners and clients to understand where the gaps are and how ScienceLogic can help fill them,” says Mike Nappi, CPO at ScienceLogic. “Combining our capabilities with that of Zebrium creates a whole new level of analytics-driven insights and automation we can bring to bear for our customers.”
Zebrium CEO Ajay Singh stated, “With our machine learning capabilities combined with ScienceLogic’s service context and automation, organizations can greatly reduce the time they spend identifying and remediating issues – leaving them more time to spend on operations that deliver stellar digital experiences for customers and employees alike.”
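The general idea behind machine-learning root-cause hints from logs can be shown with a toy sketch. To be clear, this is not Zebrium's actual method, and the log lines are invented; it only illustrates the common intuition that rare, novel message patterns often coincide with incident onset.

```python
import re
from collections import Counter

# Invented sample log lines.
logs = [
    "connected to db host=db1",
    "connected to db host=db2",
    "request served in 12ms",
    "request served in 9ms",
    "request served in 840ms",
    "OOM killer invoked for pid=4312",
]

def template(line):
    # Collapse numbers so lines differing only in values share one pattern.
    return re.sub(r"\d+", "<N>", line)

counts = Counter(template(line) for line in logs)

# Surface patterns seen only once as candidate root-cause signals.
rare = [t for t, c in counts.items() if c == 1]
print(rare)
```

Here the one-off OOM message is flagged while the routine connection and request templates are not; production systems layer far more sophistication (service context, correlation across streams) on top of this kind of signal.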
Just getting started with AIOps and want to learn more? Read the eBook, “Your Guide to Getting Started with AIOps”»