The past few days have alerted those of us in the ITOps domain to a major vulnerability that left multiple U.S. government departments, along with the security company FireEye, exposed to a major hack in which data and tools were stolen. So, what happened exactly?
According to the Associated Press, the attacks are still being investigated, but nation-state-sponsored hackers are thought to be responsible, and the attacks were orchestrated via malware embedded in a widely used network monitoring solution.
While reading the shocking and unpleasant news this weekend, three questions came to mind that are worth exploring further:
- How has information security been affected by Covid-19?
- What is “God Mode,” and what happens when IT tools get into the wrong hands?
- What are we doing at ScienceLogic to make sure our customers are best prepared for these problems?
How has Covid-19 changed the InfoSec landscape?
With more users, partners, and customers working from home and accessing systems remotely, InfoSec has been under extreme pressure due to Covid-19. Security Magazine reports that cybercrime has increased 63% since the start of COVID. In August, MIT published an article discussing some of the changes they have seen amid this extreme challenge of a remote environment. The following two ideas from that article foreshadow the nature of the vulnerability and attack that affected both FireEye and the U.S. Commerce Department:
- Ransomware and malware attacks. “Netwalker, a strain of ransomware, is using files with coronavirus in the name so that they look important. The files embed code that will encrypt your files.”
- “We’ve done two years of digital transformation in two weeks,” said Andrew Stanley, chief information security officer at Mars, at the July CIO Symposium. “The real risk I’ve seen an increase is [in the use of] third parties.”
However, pinpointing a general area of risk and predicting a major state-sponsored attack are two different things entirely.
Is God Mode making you vulnerable?
God Mode is a fun term from the aforementioned Associated Press release, referring to the nearly unlimited visibility that a monitoring solution provides due to the credential (user/pass) data that exists within monitoring systems for both Read and Write access. This was compounded by the fact that many organizations do not keep separate access lists or credentials for Read vs. Write access. My friends in the security business immediately asked, “Why have tools with these massive lists of access in the first place?” Because monitoring is essential for both operations and security. For example, if we know disks are running out of space, or that routing rules have changed, and do nothing, we both lose access to the application and create a new security vulnerability.
Monitoring tools and automation are essential to managing modern environments, and in today’s COVID-driven world, automation tools are becoming even more important. So what does this really mean? Today, an estimated 80% of all hacks occur from within the environment, and tools with wide access require InfoSec to treat observability and automation tools much like internal privileged users. The tools and the users are essential to operations, but the processes used to secure these tools should follow stringent security standards. Credential refresh processes need to be applied to service accounts in the same way user security standards are applied. There’s no reason for God Mode to exist for any user; if the underlying processes are not fixed, the same attack could be carried out by an internal employee instead of an outside hacker.
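To make the credential-refresh idea concrete, here is a minimal Python sketch of an age-based rotation policy for service accounts. All names (`ServiceAccount`, `rotate_stale_accounts`, the 30-day window) are hypothetical illustrations, not any particular product's mechanism; a real deployment would push the new secret to a vault and to the consuming service atomically.

```python
import secrets
from datetime import datetime, timedelta

# Example policy: rotate any service-account secret older than 30 days.
MAX_CREDENTIAL_AGE = timedelta(days=30)

class ServiceAccount:
    """Hypothetical service account whose secret is rotated on a schedule."""

    def __init__(self, name):
        self.name = name
        self.secret = secrets.token_urlsafe(32)   # cryptographically strong secret
        self.rotated_at = datetime.utcnow()

    def needs_rotation(self, now=None):
        now = now or datetime.utcnow()
        return now - self.rotated_at >= MAX_CREDENTIAL_AGE

    def rotate(self):
        self.secret = secrets.token_urlsafe(32)
        self.rotated_at = datetime.utcnow()

def rotate_stale_accounts(accounts, now=None):
    """Rotate every account past the policy age; return the names rotated."""
    rotated = []
    for acct in accounts:
        if acct.needs_rotation(now):
            acct.rotate()
            rotated.append(acct.name)
    return rotated
```

The point of the sketch is simply that service accounts get the same lifecycle discipline as human users: a stale secret is replaced automatically rather than living forever.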
A key best practice all organizations need to follow is the Principle of Least Privilege, which states that a subject should be given only those privileges needed for it to complete its task. This means that if an individual account becomes compromised, the scope of access that account has is limited to a specific set of systems. In the sphere of observability, this means narrow, read-only accounts for monitoring purposes, and automation accounts only enabled with the very specific rights required for the configuration changes they support.
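The Principle of Least Privilege described above can be sketched as a deny-by-default access check: every account carries an explicit allow-list of (system, permission) pairs, and anything not listed is refused. The account names and systems below are hypothetical examples, not real configuration.

```python
# Deny-by-default least privilege: an account holds only the explicit
# (system, permission) grants it needs; everything else is refused.
READ, WRITE = "read", "write"

ACCOUNTS = {
    # Narrow, read-only account used purely for monitoring.
    "monitor-svc":  {("web-tier", READ), ("db-tier", READ)},
    # Automation account scoped to the one tier it reconfigures.
    "automate-svc": {("web-tier", READ), ("web-tier", WRITE)},
}

def is_allowed(account, system, permission):
    """Grant access only if the privilege is explicitly listed."""
    return (system, permission) in ACCOUNTS.get(account, set())
```

With this structure, compromising the monitoring account yields read-only visibility into two tiers, never write access, which is exactly the blast-radius limit the principle is after.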
How is ScienceLogic SL1 defending against vulnerabilities?
The task of creating secure software never ends; attack vectors are continually evolving, and at ScienceLogic, adapting to these threats is never far from our minds. We’ve built our platform on a number of security best practices to protect our customers. Having managed highly secure federal and civilian environments for nearly two decades, we’ve learned a lot and built much of it into our solution. Some of this stems from delivering our complete platform as an integrated “virtual appliance.” The operating system is stripped back to just the packages that SL1 needs to run: less code means less room for exploits. The platform has a built-in firewall, and only allows access in or out to the services needed to support observability and automation.
We support scaling out data ingestion through the addition of “collectors.” These are connected to the central SL1 platform via encrypted connections, with the option to establish network connectivity from either the collector or the SL1 platform, according to the security needs of the deployment environment. Naturally, we encrypt all credentials and restrict the ability to view credentials in our product to prevent their use for unauthorized purposes. Access to both the GUI and API can be further controlled by SSO, adding another layer of security on top of God Mode.
We’ve put a lot of emphasis on third-party validation of our software security implementation and company protocols. That includes spending weeks “in the desert” as we put the platform through the exacting approval process for the U.S. Department of Defense Approved Product List, a process that we renew by going through a detailed and thorough evaluation. We are also SOC 2 certified and use a team of third-party security experts to perform independent penetration tests against our platform.