Serverless or Containers? Here’s What You Need to Consider

If you’re trying to decide between adopting a serverless or container architecture, here are some things for you to consider.

Raj Patnam Vice President, Global Solutions

In the never-ending battle for technology platform of choice, serverless vs. containers looks to be gaining steam as we enter 2019. To be sure, both have their advantages, disadvantages, and proponents. Serverless is being pushed heavily by AWS and Azure, while the container approach is being pushed by Google, Cisco, and IBM (both AWS and Azure also offer services for native containers and Kubernetes).

The natural inclination in the industry is to move to the latest and greatest technology as fast as possible, usually because of significant advantages in simplicity, price, and time. Plus, it's simply more fun as an engineer to play and learn as the technology matures.

But as we saw in the growth of cloud, and now containers, this migration isn’t always seamless.


In fact, we’ve seen both Microsoft (Azure Stack) and Amazon (Outposts) introduce products that work within a customer’s existing data center. Migrations haven’t returned the value that was promised because cost, security, regulatory needs, and complexity have proven harder than we all once imagined. The move to containers and serverless architectures will face some of the same dilemmas as cloud, and this is setting itself up to be 2019’s enterprise version of VHS vs. Betamax.

Let’s examine some of the differences between the two:

New Apps:

Serverless technology lends itself to exciting possibilities for new applications, especially in the IoT, physical security and chatbot worlds. The ability to use triggers to kick off various actions (without having any underlying infrastructure) and grow at scale allows for both a cost-effective and simplified management scenario. Applications that are listening for triggers can execute code in an IoT environment as it changes, reducing the costs, and simplifying the software management for a company that may not have a giant DevOps staff.
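The trigger-driven pattern described above can be sketched as a simple handler in the style of an AWS Lambda function. The event shape, device fields, and alert threshold here are illustrative assumptions, not a real device schema:

```python
# Minimal sketch of a serverless function reacting to an IoT-style trigger.
# The event fields and threshold are hypothetical, for illustration only.

def handler(event, context=None):
    """Entry point in the style of an AWS Lambda handler.

    The platform invokes this once per trigger; no server is provisioned
    or managed by the application team.
    """
    device = event.get("device_id", "unknown")
    reading = event.get("temperature")
    if reading is None:
        # Nothing actionable in this event; exit cheaply.
        return {"status": "ignored", "device": device}
    if reading > 30.0:
        # In a real deployment this branch might publish an alert
        # to a messaging service such as SNS.
        return {"status": "alert", "device": device, "temperature": reading}
    return {"status": "ok", "device": device, "temperature": reading}

if __name__ == "__main__":
    print(handler({"device_id": "sensor-7", "temperature": 34.5}))
```

Because the function only runs when an event arrives, the cost model tracks actual activity rather than provisioned capacity.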

Containers, on the other hand, are useful when the application needs to do more than listen and requires multiple instances running at the same time. Serverless platforms also impose hard limits on execution time, which prevent long-running, complicated processes from finishing and make large-scale data crunching particularly challenging; with containers, spawning new instances to split the workload is simpler and can be handled automatically by the orchestrator.
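The workload-splitting idea can be illustrated in miniature: divide a large job into chunks and fan them out to parallel workers, analogous to an orchestrator spawning container replicas. This is a hypothetical sketch using standard-library concurrency, not any specific orchestration API:

```python
# Sketch: split a large job into chunks processed in parallel, the way a
# container orchestrator would spread work across replicas.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for real data crunching that might exceed a serverless
    # platform's execution-time limit if run as one monolithic task.
    return sum(x * x for x in chunk)

def crunch(data, workers=4):
    """Partition `data` into roughly equal chunks and aggregate the results."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    print(crunch(list(range(1000))))
```

In a real container deployment the "workers" would be replicas scaled by the orchestrator, but the partition-and-aggregate structure is the same.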

Native Cloud Apps:

Migrating existing native cloud applications is really a question about the nature of the application. If you’ve purpose-built your applications for a single public cloud vendor, then integrating and even migrating applications to serverless becomes very easy. All the major serverless platforms have built-in hooks to other native cloud services and allow for quick, seamless architecture changes. In particular, applications that relied upon orchestration tools for quick scale-up and scale-down may find the simplicity that serverless provides to be very advantageous.

The argument for containers is similar to that for serverless: gain functionality from existing cloud services and improve orchestrated scaling. But the difference is in the maturity of containers versus serverless. Container testing is much more advanced, and the number of people with the skill set to efficiently architect a container solution is far greater than for serverless. Most applications today, however, are hybrid or multi-cloud/multi-architecture, and there the differences grow even larger.

Legacy Hybrid/On-Prem Apps:

The last ten years have shown the world the complexity of lift and shift, and the high costs associated with such a cloud strategy. Instead, we’ve seen the rise of hybrid applications that run on multiple different clouds while the existing on-premises solution continues to provide some functionality. Serverless can provide a faster path to the cloud, but only if the time has been taken to architect the solution correctly. Legacy applications that perform quick, on-demand checks and tasks can easily be ported to a serverless solution, reducing the complexity of managing a larger, more complicated code base and physical footprint – and a number of these legacy mainframe tasks have already been ported successfully into serverless apps.

Containers offer a more straightforward path for migrating your legacy technologies, with far less architectural or vendor dependence. The single biggest long-term hurdle for serverless growth is vendor lock-in to one of the big 3 (Amazon, Microsoft & Google), as the technologies are not yet truly portable. Contrast that with Kubernetes, which runs on top of various clouds, reducing dependence on any particular cloud vendor, and which has been adopted by companies like Cisco and IBM as part of their overall cloud strategies. The serverless approach, by contrast, is largely platform-dependent, and is thus being pushed heavily by the incumbent IaaS providers as a means to keep you on their platforms.


In the short term, it seems that serverless is still in its infancy and best suited to purpose-built applications in specific domains. But it has tremendous upside and value as an alternative approach to handling large-scale complexity and heavy architecture costs. Amazon’s recent AWS announcements hint that it sees a similar set of needs to complement Lambda and other serverless technologies in the future, and the keen interest in serverless will only accelerate solutions to these problems over time.

One key consideration in your decision making is your ability to be flexible and change models or adopt hybrid solutions as necessary. ScienceLogic’s SL1 platform is uniquely capable of providing context and automation across your IT landscape by enabling visibility, incident automation, and remediation across multiple domains, including microservices and containers, public and private cloud, serverless computing, network, and on-premises computing. This total visibility allows your organization to better understand how various architectures affect your users, and to proactively alert your operations teams about problems before they affect your services.