In IT, the only constant is change, and never more so than today. With the advent and widespread adoption of containers, virtual machines, and the cloud, it’s harder than ever to keep a constant vigil over what’s going on within your IT ecosystem.
For years, organizations around the world relied on “The Big Four” (BMC®, IBM, HP and CA) to monitor their infrastructure. But with those vendors departing the market and their platforms growing increasingly obsolete, companies that have moved their operations, services and workloads to the cloud are finding themselves adrift in a perfect storm.
On one front, there’s the issue of the cloud: The cloud has provided organizations with a proven means of cutting costs upfront while offering highly sought-after operational dexterity. But despite its benefits, cloud migration means workloads – and their associated toolsets – require constant monitoring to avoid cloud sprawl.
And on the second front are the technological advances that ITOM professionals are using to make cloud adoption and migration happen. Containers, orchestrators and cheaper virtualized environments are ephemeral by their very nature, which means, as Gartner put it in its paper titled How to React to the Impact of the Cloud on IT Operations Monitoring, “The familiar tools and methods that were previously employed cannot be applied in cloud environments.”
The confluence of unfortunate events (departure of “The Big Four,” accelerated cloud use, and technological advances) couldn’t come at a worse time for organizations. To understand what’s in the cloud and gain the visibility necessary to monitor service quality, it’s important for organizations to focus on their processes and the people implementing them.
Trust the Process?
Before you rush to move everything to the cloud, you have to take stock of what you already have, establish a migration process built around it, and document scenarios that detail what happens when things go wrong. And trust me, things will go wrong.
Although this three-step process might seem obvious, you’d be amazed by how many companies simply move everything to the cloud and hope for the best.
But hope is not a strategy.
Gartner goes on to say, “I&O leaders migrating to the cloud must understand that, even though they are outsourcing much of the complexity with IT infrastructure, it will introduce new gaps in monitoring visibility.” Depending on your needs and the cloud technology you’re employing, you could encounter significant gaps and lose visibility entirely into which devices are where and what they’re connected to.
Worse yet, if your organization is like so many that suffer from silos, then the type of technology that you employ could expose communication lapses and doom the entire IT organization. For this reason, it’s essential to have the tough conversations upfront and identify everybody’s needs.
And that conversation needs to start with the appropriate policies and procedures.
The ultimate determinant of your success or failure, regardless of your platform, has a lot to do with your policies and procedures. Ideally, procedures are included in your onboarding process and are continually revisited to ensure they sync up with your technological capabilities. And since every organization has its own culture and way of doing things, it’s vital for the final product to be documented, formalized, and understood by every member of the IT team.
For instance, what’s the waterfall sequence of events necessary if a device goes down? Before you say, “check and troubleshoot,” what does “troubleshoot” actually look like? You can ping the device, but what if that fails? We could play this scenario out to its conclusion, but you get the gist.
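To make that concrete, here’s a minimal sketch, in Python, of what codifying such a waterfall might look like. The host name, gateway address, retry counts and escalate() hook are all hypothetical placeholders, not a prescription for any particular tool; the point is that every branch of the procedure is written down rather than improvised.

```python
import subprocess
import time

def ping(host: str) -> bool:
    """Return True if the host answers one ICMP echo (Linux-style ping flags)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def escalate(message: str) -> None:
    # Placeholder for your real escalation path: ticket, pager, chat, etc.
    print(f"ESCALATE: {message}")

def device_down_runbook(host: str, gateway: str,
                        retries: int = 3, wait_seconds: int = 10) -> None:
    # Step 1: rule out a transient blip by retrying before acting.
    for attempt in range(1, retries + 1):
        if ping(host):
            print(f"{host} answered on attempt {attempt}; no action needed.")
            return
        time.sleep(wait_seconds)

    # Step 2: ping the upstream gateway to distinguish a device failure
    # from a network-path failure.
    if not ping(gateway):
        escalate(f"Gateway {gateway} is also unreachable; suspected network outage.")
        return

    # Step 3: the device itself is the likely culprit, so page the on-call.
    escalate(f"{host} unreachable after {retries} attempts; gateway is healthy.")

if __name__ == "__main__":
    # Hypothetical names; substitute your own device and gateway.
    device_down_runbook("core-switch-01.example.com", "192.0.2.1")
```

However simple, a script like this forces the team to agree on what “troubleshoot” means before an outage, not during one.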
Establishing procedures ensures that EVERYBODY is taking the appropriate steps and that you won’t have to bother the person who’s on call. You cannot make it up as you go along; the lack of a unified process could have unintended consequences for the entire organization.
Once your procedures have been outlined and implemented, it’s up to the team to carry them out. And here’s where your investment really begins to show.
People
It’s trite but true: people are the backbone of your organization.
Regardless of how easy or complicated your monitoring solution may be, you have to account for the human element to make it work. For years, IT Ops has been divided into teams that are singularly focused on one task and cannot see the forest for the trees. The result has been extreme expertise in a single domain, supported by a monitoring tool that accommodates only that skillset.
But with today’s mass migration to the cloud – and all of its potential visibility gaps – IT Ops teams have to be cross-trained and understand more than their own domain. The organizational agility required to keep up with consumer expectations and market opportunities means single-specialization is a thing of the past.
Although there’s no silver bullet for finding a worker who can step in and provide all of the skills necessary for cloud monitoring (especially in today’s competitive market), that talent might already exist within your organization. Ask for recommendations, or identify current employees who show potential to work outside of their current scope and then perform a skills assessment for aptitude.
As Gartner wrote, “The IT-monitoring person of the near future will need to be familiar across the infrastructure (including cloud and on-premises), network and application layers, with a keen understanding of how to work with security.” Cross-training also means your team won’t feel threatened when you begin to automate; instead, automation frees them to learn new technologies and move beyond mundane, repetitive tasks.
In our next – and final – blog, we’ll discuss the price tag associated with tool modernization and how your company can maximize its resources without stretching them too thin.