What is converged infrastructure?

Converged infrastructure (CI) is a hardware-focused, building-block approach that minimizes compatibility issues between storage systems, servers, and network devices to reduce deployment complexity and overall costs. Also referred to as converged architecture, converged infrastructure represents the convergence of compute, storage, and networking infrastructure in the data center. Converged infrastructure systems are usually purchased from one company instead of buying components separately from different suppliers.

Where a traditional storage system involves a controller and a rack of shelves holding arrays of solid-state drives or hard disk drives, converged infrastructure solutions consolidate these components into a single node-based platform. Everything is contained in a single box, and you scale by adding more nodes as needed.
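As an illustrative sketch only (not any vendor's actual API), this node-based scale-out model can be pictured in a few lines of Python: capacity grows by adding whole nodes, each bundling compute and storage, rather than by expanding individual components.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One converged node bundling compute and storage (hypothetical sizes)."""
    cpu_cores: int
    storage_tb: int

class ConvergedCluster:
    """Capacity scales out by adding whole nodes, not by swapping
    individual drives or shelves as in a traditional storage array."""
    def __init__(self):
        self.nodes = []

    def add_node(self, node: Node) -> None:
        self.nodes.append(node)

    @property
    def total_storage_tb(self) -> int:
        # Aggregate capacity is simply the sum across nodes.
        return sum(n.storage_tb for n in self.nodes)

cluster = ConvergedCluster()
cluster.add_node(Node(cpu_cores=32, storage_tb=20))
cluster.add_node(Node(cpu_cores=32, storage_tb=20))
print(cluster.total_storage_tb)  # 40
```

The design choice the sketch highlights is that the node is the unit of growth: adding a node increases compute and storage together, which keeps configurations uniform at the cost of some flexibility.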

Converged infrastructure systems are ideal for organizations that already have a network and compute platform but intend to deploy new types of storage. Offloading the workload to a converged infrastructure system provides a viable data storage solution and comes with many benefits, including:

  • Improved integration;
  • Faster provisioning and IT response;
  • Simplified management infrastructure;
  • Lowered costs;
  • Scalable storage capacity; and
  • An easier path to the cloud.

Converged Infrastructure vs. Hyperconverged Infrastructure

Converged infrastructure systems are composed of building blocks, each of which can be used on its own for its intended purpose. Hyperconverged infrastructure (HCI) solutions share the same goal but take a different approach: hyperconverged infrastructure is software-defined, with all of the technology integrated. This means that, unlike a converged infrastructure, a hyperconverged infrastructure cannot be broken down into separate components.

Hyperconverged infrastructure aggregates compute, storage area network (SAN), and storage functionality into modular appliances based on commodity x86 hardware, which can be customized and scaled by adding appliance nodes. Hyperconverged infrastructure is the most recent development in data center management, enabling data centers to transform from cost centers into sources of innovation with the potential to increase business value. Hyperconvergence effectively evolves convergence, moving infrastructure into a software-defined data center, improving utilization rates, bolstering IT agility, and enabling management from a single console. Hyperconverged infrastructure provides optimal value because design, delivery, and support are all managed and maintained by one vendor. Here are some other benefits of hyperconverged infrastructure for enterprises that shift from a converged infrastructure setup:

  • Accelerated performance with integrated virtualization;
  • Higher efficiency in data center management;
  • A single pane of glass that reduces the complexity of managing resources;
  • More dedicated staff time to focus on business goals;
  • The ability to support more differentiated workloads;
  • Upgrades and maintenance with zero downtime; and
  • Workload balancing based on data utilization.

Hyperconverged infrastructure is a different breed of data center – smaller, tighter, leaner, and more robust than converged infrastructure or any other standard data center configuration. Enterprises are encouraged to migrate to a hyperconverged infrastructure because of the ongoing ability to run infrastructure resiliency and reliability tests.

History of Converged Infrastructure

Computing infrastructures started as merged, all-in-one mainframes. From the 1980s through the 1990s, the centralized mainframe was dismantled, and computing infrastructures were fragmented into individual storage, compute, and networking components that together achieved the same result. Then, during the early 2000s, the need for improved compute resources and data storage grew as demand for new applications increased. Companies had to deploy new servers and storage every time they required a new application. This caused silos to form, each supporting either one type of computing technology or a particular line-of-business application. What made matters worse was that each IT domain had to be monitored, analyzed, and supported within its respective silo.

As this problematic trend continued, most of the different hardware and software configurations came to be supported by a single company. This approach was expensive and impractical, however, with almost two-thirds of IT budgets dedicated to maintenance and operations, leaving only a small amount for new projects, strategy, and innovation. The introduction of cloud computing did not resolve the problem. Instead, the cloud often added to these silos because it became progressively easier to create new instances without considering their impact on the IT department. This left companies feeling that their IT infrastructure was spiraling out of control, and it created the need for a solution in which a pool of virtualized server, storage, and networking capacity could be shared among workloads and applications.

This was when the converged infrastructure wave started back up. The emergence of consumer clouds, the consumerization of IT, the rapid uptake of server virtualization, and an overly labor-intensive IT economy all played a part in the shift to converged infrastructure. In addition, Oracle launched a new product in 2008 that marked the first return of the converged approach: the HP Oracle Database Machine, a single-SKU box that emphasized integration, engineering for performance, and pre-configured systems. Once Google and Amazon Web Services demonstrated the viability of cloud computing in 2010, enterprises began focusing more on converged infrastructure solutions. Over the past decade, the converged approach has been validated, with $3 billion spent worldwide on integrated systems.

What is the purpose of converged infrastructure solutions?

The main purpose of converged infrastructure is to scale and bring new services to the market at a rapid pace. This is achieved by:

  • Reducing the complexity of deployments: Promotes an agile setting for new service deployments;
  • Validating configurations: Limits the amount of guesswork and provides templates for new application instances; and
  • Providing lower costs: Reduces hours for repetitive installation tasks and system testing.
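A hedged sketch of the "validated configurations" idea: the template name, keys, and bounds below are hypothetical, but they illustrate how checking a requested deployment against a vendor-validated template limits guesswork before anything is provisioned.

```python
# Hypothetical vendor-validated template: resource bounds the vendor
# has tested and supports. Real templates would be far richer.
VALIDATED_TEMPLATE = {
    "vm_cpu_cores": range(2, 65),    # 2-64 cores
    "vm_memory_gb": range(4, 513),   # 4-512 GB
    "storage_tb": range(1, 101),     # 1-100 TB
}

def validate_request(request: dict) -> list:
    """Return the settings in a deployment request that fall outside
    the validated template (an empty list means it is safe to deploy)."""
    return [
        key for key, allowed in VALIDATED_TEMPLATE.items()
        if request.get(key) not in allowed
    ]

issues = validate_request(
    {"vm_cpu_cores": 16, "vm_memory_gb": 64, "storage_tb": 500}
)
print(issues)  # ['storage_tb']
```

In practice this pre-flight check is what replaces manual guesswork: an out-of-template request is rejected or flagged before it consumes repetitive installation and testing hours.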

A converged infrastructure simplifies the traditional deployment process by offering pre-configured settings on hardware devices, including security systems and firewall appliances with their own proprietary network management software. Converged solutions can also be deployed and hosted on-premises, with system administration handled from a single web server. These sector-specific packages usually include the routers, cables, and networking equipment needed to run a wide area network at scale.

Deploying data center infrastructure as a complete system simplifies and accelerates the deployment of resources. Converged technology acts as a platform for repeatable, modular deployment of data center resources, enabling rapid scale and consistent performance. This is desirable for virtualized environments, which need standardized foundations on which to provision virtual resources. Converged infrastructure solutions also reduce deployment risk by offering vendor-validated configurations that limit guesswork and speed the time to application deployment on trusted infrastructure platforms.

 
