How Federal Agencies Can Mitigate IT Risks During COVID-19
Our CEO, Dave Link, sat down with the Federal News Network and Shannon Hulbert, CEO of Opus Interactive, to discuss the challenges and opportunities of working remotely during the pandemic.
Federal News Network:
About 15 years ago, we talked about moving to the cloud like it’s singular. But there’s not just one cloud anymore, is there?
Absolutely. It all comes down to distributing workloads. Hybrid is just like it sounds—figuring out where those workloads belong and what kind of environment they belong in. Public cloud is shared resource pools, and private cloud is dedicated infrastructure. Multi-cloud is when you’re using not just one but many clouds to figure out where those workloads belong and perform best.
Hybrid cloud has really become synonymous with an interesting architecture that the government is making great use of: the web tier that a user accesses through an application runs on the public hyperscalers, while the backend—the database and other data infrastructure that they’ve long held in private facilities—can still stay where it is. These are now compound architectures that really have to work as one system.
Federal News Network:
When I go to the ScienceLogic Twitter feed, I see MTTR. What the heck is this?
Mean time to repair (MTTR) is the typical elapsed time between a failure and its resolution—more precisely, how quickly you can respond to and resolve the issue. Our product, SL1, is focused on giving people a cockpit-style dashboard of how their systems are behaving so they can operate and run those systems at maximum capacity and availability. Most importantly, SL1 delivers the information you need at the right moment for each use case when problems occur.
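As an illustration, MTTR can be computed directly from failure and resolution timestamps. A minimal Python sketch (the incident data here is invented for the example):

```python
from datetime import datetime, timedelta

def mean_time_to_repair(incidents):
    """Average duration between failure detection and resolution.

    `incidents` is a list of (failed_at, resolved_at) timestamp pairs.
    """
    durations = [resolved - failed for failed, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)

# Two hypothetical incidents: one repaired in 45 minutes, one in 90.
incidents = [
    (datetime(2020, 9, 1, 9, 0), datetime(2020, 9, 1, 9, 45)),
    (datetime(2020, 9, 2, 14, 0), datetime(2020, 9, 2, 15, 30)),
]
print(mean_time_to_repair(incidents))  # → 1:07:30
```

Tracking this average over time is what shows whether better monitoring and automation are actually shortening the gap between failure and fix.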
ITOps teams spend a lot of time making sure they have the right analytics to deliver all of those different services. Often, these are complex services that work together but should be seamless to you, the end user: you want access to an application so you can be productive and do your job, getting the information you need at any moment. Our role is to help operations teams improve how they use and automate the many tools that determine when there is a problem, how to fix it, and how to predict the problem so it can be fixed before there is a service outage in the first place.
Federal News Network:
What’s the most important thing that our federal listeners can take away from this discussion as far as managing a data center and worrying about failure?
I think agencies need to drive similar practices of automation and visibility into operations. Just as some have adopted DevOps and Agile practices while refactoring applications, you have to do the same for operations.
It starts with collecting accurate and timely data into a large, real-time data lake that holds all the configuration, health, and performance data. This is especially true for larger agencies that are undergoing cloud migrations and major architectural transformations. They need visibility not only across the cloud, but on-premises and across multiple clouds.
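A minimal sketch of that idea, assuming a hypothetical record shape (not ScienceLogic's actual schema): every source, cloud or on-premises, emits the same kind of timestamped record, so a single query can span the whole environment.

```python
import time

def make_record(source, kind, resource, payload):
    """Normalize any monitoring datum into one common record shape.

    `kind` is one of "config", "health", or "performance"; `source`
    names where the record came from (e.g. a cloud or a data center).
    All names here are illustrative, not a real product schema.
    """
    return {
        "ts": time.time(),     # ingestion timestamp
        "source": source,      # e.g. "public-cloud", "on-prem-dc1"
        "kind": kind,
        "resource": resource,  # resource identifier
        "payload": payload,    # the metric or config snapshot itself
    }

lake = []
lake.append(make_record("public-cloud", "performance", "web-01", {"cpu_pct": 72}))
lake.append(make_record("on-prem-dc1", "health", "db-01", {"status": "ok"}))
lake.append(make_record("on-prem-dc1", "performance", "db-01", {"cpu_pct": 41}))

# One query spans every environment: all performance records, wherever hosted.
perf = [r for r in lake if r["kind"] == "performance"]
```

The point of the uniform shape is that "visibility across cloud and on-premises" becomes a filter over one store rather than a walk through several incompatible tools.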
What this pandemic has taught us is that the surge in people working from home has put a lot of pressure on operations teams to provide even better service quality while they are remote from the data center. You need to consolidate tools so you can reduce complexity, enabling you to better automate events and understand service impact. That, in turn, lets you deliver a better experience to end users as they work different hours and schedules than they did in the nine-to-five world we knew six to nine months ago.
Federal News Network:
I talked to the people from Akamai, and they’ve seen a 25% increase in network traffic since COVID. And Zoom has gone from 10 million to 200 million users. AT&T has reported that VPN demand has gone up 700%. From the perspective of a federal IT professional, how have you handled such an increase in usage, and have you used tools from ScienceLogic to help you handle that?
Oh yeah, definitely. We have actually been a partner with ScienceLogic for nearly a decade now, and even going into this year, hybrid was the new norm. Everybody was trying to figure out where those workloads go. For us, it’s a matter of figuring out which systems we need visibility into and what we’re really looking for with ScienceLogic. What we can offer agencies is that health and performance monitoring—being able to map, monitor, and manage your entire IT ecosystem. We give federal agencies the ability to understand and map out dependencies, as well as enforce policy and apply role-specific management.
In addition, there’s an AI/ML component on the backend that’s really looking at what’s contributing to that healthy ecosystem. Opus is actually a customer of ScienceLogic, and for our federal customers, we’re able to offer that ScienceLogic monitoring hosted within our FedRAMP Moderate environment, which in turn lives inside FISMA High facilities.
One thing that we’ve seen is that operations teams have enormous pressure now. And not just as they’re thinking through where applications should live and be delivered from, but also how to manage the old and the new at the same time.
Things are moving too fast for operations to rely on bespoke, technology-specific management tools. You really need a consolidated tool, and that’s what Opus has done: delivering our product in a FedRAMP Moderate infrastructure gives any federal agency a very quick way to modernize operations and see that service view across the entire estate very efficiently.
That really gets you to the next phase, which is automation. We have a product called PowerFlow that helps customers automate a myriad of manual tasks. As systems are moving around, they’re moving at machine speed, so you need machines to make intelligent decisions on how to create the right automation to deliver that service quality.
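To illustrate the idea of automating routine responses at machine speed (this is a generic sketch, not the actual PowerFlow API), one common pattern maps known event types to remediation actions and escalates anything unrecognized to a human:

```python
# Hypothetical remediation actions; in practice these would call out
# to real infrastructure (restart a service, expand a volume, etc.).
def restart_service(event):
    return f"restarted {event['resource']}"

def clear_disk(event):
    return f"cleared temp files on {event['resource']}"

# Runbook: event type -> automated response. Names are illustrative.
RUNBOOK = {
    "service_down": restart_service,
    "disk_full": clear_disk,
}

def handle(event):
    """Dispatch an incoming event to its automated remediation,
    or escalate to an operator if no automation is defined."""
    action = RUNBOOK.get(event["type"])
    if action is None:
        return f"escalate to operator: {event['type']}"
    return action(event)

print(handle({"type": "service_down", "resource": "web-01"}))
# → restarted web-01
```

The runbook table is the key design choice: adding a new automation is just adding an entry, and anything not in the table still reaches a person instead of failing silently.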
When we think about the future, we think about how better instrumentation and better analytics deliver better resiliency and results. A cloud-based compute environment brings all that resiliency, but only if you know how to proactively manage it and keep it aligned to user needs.