Container Report 2023
Since our inaugural report in 2015, Datadog’s container reports have illustrated customers’
adoption of containers, as well as how they have evolved and expanded their usage to
support their applications and businesses. This year's report builds on the previous
edition, which was published in November 2022. The graphs for each fact are available
for download.
Serverless container adoption is growing across all major clouds, but Google Cloud leads
the pack. In Google Cloud, 68 percent of container organizations now use serverless
containers, up from 35 percent two years ago. This growth likely stems from the August
2022 release of the 2nd generation of Cloud Functions, which is built on top of Cloud Run.
You can learn more about the growth of functions packaged as containers in this year’s
serverless report.
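Since Cloud Run underpins this growth, a deployment to it can be sketched as follows. This is a hypothetical example: the service name, project, and image path are placeholders, not values from the report.

```shell
# Deploy a container image as a serverless container on Cloud Run.
# Service name, project, repo, and region below are illustrative placeholders.
gcloud run deploy hello-service \
  --image=us-docker.pkg.dev/my-project/my-repo/hello:latest \
  --region=us-central1 \
  --allow-unauthenticated
```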
Chen Goldberg
GM & VP, Cloud Runtimes, Google Cloud
To kick-start their AI/ML workflows, teams can use prepackaged container images, such
as AWS Deep Learning Containers—or they can adopt a managed Kubernetes service
that enables them to allocate GPUs to their containerized workloads. We believe that,
as investment in next-generation AI-based applications expands and the amount of
unstructured data required for their models grows, organizations will increasingly run
GPU-based workloads on containers to improve their development agility and better
harvest insights from their data.
Tony Tzeng
Chief Product Officer, OctoML
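Allocating GPUs to containerized workloads on a managed Kubernetes service typically works through the device-plugin resource model. The following is a minimal sketch; the pod name and image are hypothetical, and it assumes a cluster with NVIDIA GPU nodes and the NVIDIA device plugin installed.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job        # hypothetical name
spec:
  containers:
    - name: trainer
      image: my-registry/ml-trainer:latest   # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1     # requests one GPU; schedules the pod onto a GPU node
```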
FACT 3
Johan Andersen
VP of Engineering, Infrastructure and Reliability, Datadog
FACT 4
We believe that HPA's popularity stems from the significant enhancements Kubernetes
has released for the feature over time. When HPA was introduced, it could only autoscale
pods based on basic metrics like CPU, but the release of v1.10 added support for external
metrics. As the Kubernetes community continues to enrich HPA's capabilities, many
organizations are adopting new releases earlier to fine-tune their autoscaling strategies.
For example, HPA now supports the ContainerResource metric type (introduced as a
beta feature in v1.27), which allows users to scale workloads more granularly, based on
the resource usage of key containers rather than entire pods.
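A ContainerResource metric can be sketched in an `autoscaling/v2` manifest like this; the HPA, Deployment, and container names are hypothetical.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                 # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                   # hypothetical target
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: ContainerResource
      containerResource:
        name: cpu
        container: app          # scale on this container's usage, not the whole pod's
        target:
          type: Utilization
          averageUtilization: 70
```

Unlike a pod-level `Resource` metric, this ignores sidecars whose usage would otherwise skew the scaling signal.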
FACT 6
Over the years, the container ecosystem has matured to meet the needs of organizations
looking to deploy stateful applications on containers. With StatefulSets, which became
generally available in Kubernetes v1.9, organizations were able to persist data across pod
restarts, and
additional features such as volume snapshots and dynamic volume provisioning enabled
them to back up their data and remove the need to pre-provision storage. Cloud providers
such as AWS now provide built-in support for running stateful workloads on containers—
including serverless services like EKS on Fargate—while open source tools like K8ssandra
also make it easier to deploy databases in Kubernetes environments.
Melissa Logan
Managing Director, Data on Kubernetes Community
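The persistence and dynamic-provisioning pattern described above can be sketched with a StatefulSet whose `volumeClaimTemplates` provision a PersistentVolume per replica. Names, image, and storage size below are hypothetical.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                       # hypothetical name
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: database
          image: postgres:16     # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # dynamically provisions one PersistentVolume per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Because each replica keeps a stable identity and its own claim, data survives pod relaunches without pre-provisioned storage.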
Java claims a large market share of enterprise applications and continues to be the most
popular language in non-containerized environments. Based on our conversations with
customers, many of them have begun (or are in the process of) migrating their Java-based
legacy applications to run on containers. We expect to see future growth of Java usage
in container environments, driven by the modernization of enterprise applications and
the development of container-focused features (such as OpenJDK’s container awareness).
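OpenJDK's container awareness can be illustrated as follows. This is a sketch assuming a JDK 10+ image (container support is enabled by default via `-XX:+UseContainerSupport`); the image tag and memory limit are illustrative.

```shell
# The JVM reads the container's cgroup limits rather than the host's resources,
# so the heap can be sized as a percentage of the 512 MiB container limit.
docker run --rm --memory=512m eclipse-temurin:17 \
  java -XX:MaxRAMPercentage=75.0 -XshowSettings:vm -version
```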
Service meshes are likely more popular in large environments because they help
organizations address the challenges of managing services' communication pathways,
security, and observability at scale. They provide built-in solutions
that reduce the complexity of implementing features like mutual TLS, load balancing, and
cross-cluster communication. We believe that, as more organizations migrate existing
services to containers and expand their node footprint, service meshes will continue to gain
traction, particularly in large-scale deployments.
Though Kubernetes removed built-in Docker support (dockershim) in v1.24, teams that
aren't ready to migrate to a new runtime can still use Docker via the cri-dockerd adapter,
which likely explains the runtime's high usage rate. However, as more teams upgrade to
newer versions of Kubernetes and plan their environments with future support in mind, we
expect containerd to overtake Docker as the predominant runtime.
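Teams planning such a migration can check which runtime their nodes currently report; a sketch using `kubectl` custom columns:

```shell
# List each node alongside the container runtime it reports
# (e.g. "containerd://1.7.x" or "docker://24.x" via cri-dockerd).
kubectl get nodes \
  -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion
```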
“Since the Kubernetes project evolved its built-in support for Docker
by removing dockershim in Kubernetes release v1.24, it was only a
matter of time before we saw a rise in more container deployments
with containerd. The containerd runtime is lightweight in nature
and strongly supported by the open source community. Containerd
evolved out of the Docker engine and is now one of the top
graduated projects at CNCF, used by most hyperscalers for their
managed Kubernetes offerings.”
Chris Aniszczyk
CTO, Cloud Native Computing Foundation
Today, Kubernetes v1.24 (16 months old at the time of writing) is the most popular release,
which aligns with historical trends. However, this year, we’ve seen a marked increase in
the adoption of newer versions of Kubernetes. Forty percent of Kubernetes organizations
are using versions (v1.25+) that are approximately a year old or less—a significant
improvement compared to 5 percent a year ago.
We’ve heard from customers that many are upgrading to newer releases earlier to gain
access to features such as Service Internal Traffic Policy (released in v1.26) and the ability
to configure Horizontal Pod Autoscaling based on individual containers’ resource usage
(released in beta in v1.27). These features give users more granular control over their
clusters, which can help reduce operating costs. Managed Kubernetes services also play a
role in helping users upgrade their clusters more quickly (e.g., by default, GKE Autopilot
automatically upgrades clusters to the latest Kubernetes version a few months after it has
been released). We expect the adoption of Kubernetes releases to continue to shift left as
more organizations adopt managed services like Autopilot and upgrade their workloads to
take advantage of new Kubernetes feature states. One way they can do this safely is by
upgrading non-mission critical workloads prior to deploying new releases more widely
across production environments.
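Before rolling a new release out more widely, a basic sanity check is to compare the control plane version against each node's kubelet version, since Kubernetes tolerates only a limited version skew. A sketch:

```shell
# Report the client and server (control plane) versions,
# then each node's kubelet version, to spot version skew before upgrading.
kubectl version
kubectl get nodes \
  -o custom-columns=NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion
```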
COUNTING We excluded the Datadog Agent and Kubernetes pause containers from this investigation.
FACT 1 For this report, we consider an organization to be a container organization if it runs cloud
provider-managed or self-managed containers from either Kubernetes or non-Kubernetes-
based services.
FACT 2 We measured usage of containerized instances that were GPU-based. We considered the
following instance types to be GPU-based:
– AWS: F1, G3, G4, G5, Inf1, Inf2, P3, P4, P5, Trn1
– Azure: standard_n
– GCP: g2, a2
FACT 6 For this fact, we grouped workloads into the following categories, based on open source
container image names:
– CI/CD: GitLab, Argo CD, Jenkins, Flux, GoCD, Keptn, GitHub Actions, Argo Rollouts,
Tekton, TeamCity, CircleCI, Travis CI, Bamboo.
– Web servers: Apache HTTP Server, Apache Tomcat, NGINX, CentOS Stream, LiteSpeed
Web Server, Caddy, Lighttpd, Microsoft IIS, Oracle WebLogic Server, OpenResty,
Apache Geronimo.
FACT 8 We considered organizations to be using a service mesh if they were running at least one
container with an image name that corresponded to one of the following technologies:
Istio, Linkerd, Consul Connect, Traefik Mesh, NGINX Service Mesh, AWS App Mesh, Kong
Mesh, Kuma Mesh, Cilium Service Mesh, OpenShift Service Mesh, Meshery, or Gloo Mesh.