Container platform strategy: 6 best practices for I&O leaders
I&O leaders must build a strategy and a business case for deploying containers
By Arun Chandrasekaran
“Let’s do something with containers to speed up application delivery.” I&O leaders globally are currently dealing with this demand from their C-level. But the rapid adoption of container technology does not necessarily mean that it is a fit for your organization.
Containers can help enterprises modernize legacy applications and create new cloud-native applications that are both scalable and agile. Container engines such as Docker and orchestration frameworks such as Kubernetes provide a standardized way to package applications, including the code, runtime and libraries, and to run them consistently across the entire software development life cycle.
Gartner predicts that by 2022, more than 75% of global organizations will be running containerized applications in production, a significant increase from fewer than 30% today. However, the current container ecosystem is still immature, and organizations must ensure that the business case is solid enough to justify the additional complexity and cost of deploying containers in production.
Although interest in and adoption of containers are growing rapidly, running them in production involves a steep learning curve due to technology immaturity and a lack of operational know-how. I&O teams will need to ensure the security and isolation of containers in production while simultaneously mitigating operational concerns around the availability, performance and integrity of container environments.
Below are six key elements that should be part of a container platform strategy to help I&O leaders mitigate the challenges of deploying containers in production environments.
Security and governance
Security should be embedded in the DevOps process, and the containerized environment must be secured across the entire life cycle. This includes the build and development process, deployment and the run phase of an application. To prevent malicious activities, I&O leaders should invest in security products that provide whitelisting, behavioral monitoring and anomaly detection.
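As an illustration of what such controls can look like in practice, below is a minimal Python sketch of an image-whitelisting rule, the kind of check an admission controller or a commercial container security product would enforce at deploy time. The approved registries and the pod structure are illustrative assumptions, not any specific product's API.

```python
# Minimal sketch of an image-whitelist check, the kind of rule an admission
# webhook or container security product would enforce before deployment.
# The registry list and pod structure below are illustrative assumptions.

APPROVED_REGISTRIES = ("registry.internal.example.com/", "gcr.io/my-org/")  # assumed

def violations(pod_spec: dict) -> list[str]:
    """Return the images in a pod spec that do not come from an approved registry."""
    bad = []
    for container in pod_spec.get("containers", []) + pod_spec.get("initContainers", []):
        image = container.get("image", "")
        if not image.startswith(APPROVED_REGISTRIES):
            bad.append(image)
    return bad

if __name__ == "__main__":
    pod = {"containers": [{"name": "web", "image": "docker.io/unknown/app:latest"}]}
    print(violations(pod))  # ['docker.io/unknown/app:latest'] -> reject the deployment
```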
Monitoring
The deployment of cloud-native applications shifts the focus from host-based monitoring to container-specific, service-oriented monitoring to ensure compliance with resiliency and performance service-level agreements. Monitor both within individual containers and across containers at the service level, so that the focus is on the applications rather than the physical hosts.
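As a concrete illustration, here is a minimal Python sketch of service-level monitoring, assuming a Prometheus server scraping standard cAdvisor container metrics; the Prometheus address, namespace and alert threshold are assumptions made for the example.

```python
# A minimal sketch of service-level (rather than host-level) monitoring:
# query Prometheus for per-pod CPU usage of one service's namespace.
# The Prometheus URL, namespace and alert threshold are illustrative assumptions.
import requests

PROMETHEUS = "http://prometheus.monitoring.svc:9090"  # assumed in-cluster address
QUERY = 'sum by (pod) (rate(container_cpu_usage_seconds_total{namespace="checkout"}[5m]))'
CPU_ALERT_THRESHOLD = 0.8  # cores, illustrative

def check_service_cpu() -> None:
    resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    for series in resp.json()["data"]["result"]:
        pod = series["metric"].get("pod", "unknown")
        cpu = float(series["value"][1])
        status = "ALERT" if cpu > CPU_ALERT_THRESHOLD else "ok"
        print(f"{status}: {pod} is using {cpu:.2f} CPU cores")

if __name__ == "__main__":
    check_service_cpu()
```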
Storage
Consider two separate cases for storage. If the primary use case is a "lift and shift" of legacy applications, there may be little change in storage needs. However, if the goal is to refactor applications or create new, microservice-oriented ones, the organization needs a storage platform that is integrated with the developer workflow and can maximize the agility, performance and availability of those workloads.
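For example, on Kubernetes a developer-facing storage request can be expressed as a PersistentVolumeClaim generated as part of the workflow. The sketch below assumes a hypothetical "fast-ssd" storage class exposed by the platform team; the claim name and size are also illustrative.

```python
# A minimal sketch of developer-facing storage: a PersistentVolumeClaim that a
# refactored microservice can request declaratively, against a storage class
# the platform team exposes. The class name and size are illustrative assumptions.
import yaml  # PyYAML

def volume_claim(name: str, size_gi: int, storage_class: str = "fast-ssd") -> str:
    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,  # assumed class provided by I&O
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }
    return yaml.safe_dump(pvc, sort_keys=False)

if __name__ == "__main__":
    print(volume_claim("orders-db-data", 20))
```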
Networking
The portability and short-lived life cycle of containers will overwhelm the traditional networking stack. The native container networking stack doesn’t have robust-enough access and policy management capabilities. I&O teams must therefore eliminate manual network provisioning within containerized environments, enable agility through network automation, and provide developers with proper tools and sufficient flexibility.
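As a sketch of what such automation can look like, the Python snippet below generates a Kubernetes NetworkPolicy that admits traffic to a service only from an approved tier, rather than relying on manually provisioned rules. The application names, labels and port are illustrative assumptions.

```python
# A minimal sketch of network automation: generate a Kubernetes NetworkPolicy
# that only admits ingress from pods labeled as the allowed tier, instead of
# provisioning rules by hand. Names, labels and the port are illustrative.
import yaml  # PyYAML

def ingress_policy(app: str, allowed_from: str, port: int) -> str:
    policy = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": f"allow-{allowed_from}-to-{app}"},
        "spec": {
            "podSelector": {"matchLabels": {"app": app}},
            "policyTypes": ["Ingress"],
            "ingress": [{
                "from": [{"podSelector": {"matchLabels": {"app": allowed_from}}}],
                "ports": [{"protocol": "TCP", "port": port}],
            }],
        },
    }
    return yaml.safe_dump(policy, sort_keys=False)

if __name__ == "__main__":
    print(ingress_policy(app="payments", allowed_from="frontend", port=8080))
```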
Container life cycle management
Containers present the potential for sprawl even more severe than that seen in many virtual machine deployments, and this complexity is often intensified by the many layers of services and tooling involved. Container life cycle management can be automated through a close tie-in with continuous integration/continuous delivery (CI/CD) processes; together with continuous configuration automation tools, these pipelines can automate infrastructure deployment and operational tasks.
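As an illustration, the sketch below shows a minimal CI/CD deploy step in Python that builds an image tagged with the commit SHA, pushes it and rolls out the new tag with kubectl. The registry, image and deployment names are assumptions made for the example, not a prescribed pipeline.

```python
# A minimal sketch of tying container life cycle management to a CI/CD pipeline:
# build an image tagged with the commit SHA, push it, and roll out the new tag.
# Registry, image and deployment names are illustrative assumptions.
import subprocess

REGISTRY = "registry.internal.example.com/shop"  # assumed private registry
DEPLOYMENT = "checkout"                          # assumed Deployment name

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def build_and_deploy(commit_sha: str) -> None:
    image = f"{REGISTRY}/{DEPLOYMENT}:{commit_sha}"
    run("docker", "build", "-t", image, ".")
    run("docker", "push", image)
    # Update the running Deployment and wait for the rollout to converge.
    run("kubectl", "set", "image", f"deployment/{DEPLOYMENT}", f"{DEPLOYMENT}={image}")
    run("kubectl", "rollout", "status", f"deployment/{DEPLOYMENT}", "--timeout=120s")

if __name__ == "__main__":
    build_and_deploy("abc1234")
```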
Container orchestration
The key functionality for container deployment is provided at the orchestration and scheduling layers. The orchestration layer interfaces with the application, keeps the containers running in the desired state and maintains service-level agreements. The scheduler places containers on the optimal hosts in a cluster, as prescribed by the requirements of the orchestration layer.
Kubernetes has emerged as the de facto standard for container scheduling and orchestration, with a vibrant community and support from most of the leading commercial vendors. Customers should decide on the right consumption model for Kubernetes by carefully evaluating the tradeoffs of CaaS (containers as a service) vs. PaaS (platform as a service), as well as hybrid vs. cloud-native services.
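To make the desired-state model concrete, the sketch below generates a minimal Kubernetes Deployment that asks the orchestrator to keep three replicas running and gives the scheduler resource requests plus a node selector to guide placement. All names, labels, image tags and sizes are illustrative assumptions.

```python
# A minimal sketch of declaring desired state for the orchestrator: a Deployment
# that keeps three replicas running, with resource requests and a node selector
# as scheduling hints. Names, labels and sizes are illustrative assumptions.
import yaml  # PyYAML

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "checkout"},
    "spec": {
        "replicas": 3,  # the orchestrator reconciles toward this desired state
        "selector": {"matchLabels": {"app": "checkout"}},
        "template": {
            "metadata": {"labels": {"app": "checkout"}},
            "spec": {
                "nodeSelector": {"workload-class": "general"},  # scheduling hint
                "containers": [{
                    "name": "checkout",
                    "image": "registry.internal.example.com/shop/checkout:1.4.2",
                    "resources": {
                        "requests": {"cpu": "250m", "memory": "256Mi"},
                        "limits": {"cpu": "500m", "memory": "512Mi"},
                    },
                }],
            },
        },
    },
}

print(yaml.safe_dump(deployment, sort_keys=False))
```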
The author of this article is a Distinguished VP Analyst at Gartner.