Containers & Orchestration

Orchestration

Orchestration tools manage how containers are started, stopped, updated, and scaled. While a single container can be run manually with docker run, most projects involve multiple services and need more advanced control in staging and production.

Docker Compose vs. Orchestration Tools

Docker Compose is often enough for local development and smaller staging or production setups. It allows us to define services (application, database, cache) in a single file and spin them up consistently across environments.
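As a sketch, a minimal Compose file for such a stack might look like the following (the service names, images, and the placeholder password are illustrative, not a prescribed setup):

```yaml
services:
  app:
    build: .                # the application image is built from the project's Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
      - cache
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me   # placeholder only; inject real credentials via env/secrets
    volumes:
      - db-data:/var/lib/postgresql/data   # database files survive container restarts
  cache:
    image: redis:7

volumes:
  db-data:
```

The same file can then be brought up identically on any machine with `docker compose up -d`, which is exactly the consistency benefit described above.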

Orchestration tools such as Kubernetes or Docker Swarm are useful when:

  • Applications need to scale across multiple servers or regions.

  • High availability and self-healing (automatic restart/replacement of containers) are required.

  • Fine-grained control over networking, load balancing, and resource allocation is important.

For most of our projects, Compose and Portainer provide a good balance between simplicity and control. Kubernetes or Swarm are only worth the added complexity when projects truly demand it.

Setup for Staging and Production

In staging, docker-compose or Portainer is usually enough. This setup mirrors production services closely while remaining easy to reset and adjust during testing. If the staging environment is meant to replicate production exactly, or if it will be used for load testing, a more advanced orchestration setup may be justified.

In production, containers are built as immutable images and deployed with Portainer or an orchestration layer. Configuration is injected via environment variables or secret stores, and persistent data (uploads, logs, databases) is stored in named volumes or external storage services.
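A production-oriented Compose fragment following these principles might look like this (the registry URL, tag, and paths are hypothetical):

```yaml
services:
  app:
    image: registry.example.com/app:1.4.2   # immutable, versioned image; never "latest" in production
    env_file: .env.production               # configuration injected at deploy time, not baked into the image
    volumes:
      - uploads:/app/uploads                # persistent data lives in named volumes,
      - logs:/app/logs                      # so containers remain disposable

volumes:
  uploads:
  logs:
```

The key property is that the image itself contains no environment-specific state: the same image can be promoted from staging to production with only the injected configuration changing.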

Scaling Strategy

Scaling typically means running multiple containers of the same service so that the application can handle more traffic or workloads. This is known as horizontal scaling.

When an application runs with multiple replicas, a load balancer is needed to distribute requests between them. The load balancer acts as a single entry point, deciding which container should handle each request. Without it, some containers would sit idle while others become overloaded.

A load balancer can also provide:

  • Health checks, so traffic is only routed to healthy containers.

  • Failover, automatically redirecting requests if one container stops responding.

  • SSL termination, handling TLS/HTTPS connections before passing traffic to containers.

In smaller setups, this role is often handled by a reverse proxy such as NGINX or Traefik. In larger environments, orchestration tools like Kubernetes include built-in load balancing mechanisms that work across nodes and clusters.
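The three roles listed above can be sketched in an NGINX configuration. This is an illustrative fragment, not a complete config; the replica hostnames and certificate paths are assumptions. Note that open-source NGINX provides passive health checking (marking a server as failed after repeated errors) rather than active probing:

```nginx
# Round-robin load balancing across three replicas of the same service.
upstream app_pool {
    server app_1:8080 max_fails=3 fail_timeout=10s;  # after 3 failures within 10s,
    server app_2:8080 max_fails=3 fail_timeout=10s;  # the replica is taken out of
    server app_3:8080 max_fails=3 fail_timeout=10s;  # rotation (failover)
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/site.crt;   # SSL termination happens here;
    ssl_certificate_key /etc/nginx/certs/site.key;   # containers receive plain HTTP

    location / {
        proxy_pass http://app_pool;
    }
}
```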

With docker-compose or Portainer, scaling usually means manually increasing the number of container replicas and placing them behind a reverse proxy. With Kubernetes, scaling can be automated with features like Horizontal Pod Autoscaling (HPA), which adds or removes containers based on CPU or memory usage.
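For the Kubernetes side, a minimal HPA manifest might look like the following sketch (names and thresholds are examples, not recommendations):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:          # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2           # never drop below two replicas
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

The manual Compose equivalent is a single command such as `docker compose up -d --scale app=3`, combined with a reverse proxy in front of the replicas.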

For most projects, manual scaling combined with a reverse proxy is enough. For larger or high-traffic projects, a dedicated load balancer and orchestration system provide the automation and resilience needed to ensure stable performance.

Rolling Updates and Zero-Downtime Deployment

Zero-downtime deployment means deploying a new version of an application without interrupting service for end users. The goal is that ongoing requests finish normally and new requests are routed to an available instance, even while the update is happening.

What Zero-Downtime Means in Practice

In a traditional deployment, you might stop the application, update the code, and then restart it. During this window users would see downtime or errors. Zero-downtime aims to avoid that by ensuring at least one version of the application is always available.

Practically, this is achieved by running multiple containers (replicas) behind a load balancer or reverse proxy. The load balancer controls which container gets the traffic. During an update, new containers with the updated image are started, verified as healthy, and then added into the pool of available services. Old containers are stopped only after the new ones are serving traffic successfully.
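In Kubernetes terms, this flow maps onto a Deployment's rolling-update strategy plus a readiness probe. The following is a trimmed sketch (the deployment name, image tag, and health endpoint are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never stop an old container before a new one is ready
      maxSurge: 1         # start at most one extra container during the rollout
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:2.0.0
          readinessProbe:        # a new container only joins the traffic pool
            httpGet:             # once this check passes
              path: /health
              port: 8080
```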

Rolling Updates and Their Limitations

Rolling updates are one way to achieve near zero-downtime, but they are not perfect. If a container takes time to start, or if health checks aren’t configured correctly, users may still experience errors during the rollout. There is also a small window where both old and new versions are serving traffic, which can cause issues if they depend on incompatible database schemas or API versions.
A common mitigation for the schema problem is to make database migrations backward-compatible (for example, adding a new column before any code requires it), so that old and new versions can safely coexist during the rollout window.

Alternatives and Strategies

Kubernetes provides built-in mechanisms for rolling updates and can pause or roll back a rollout if problems are detected. But it’s not the only way to achieve zero-downtime:

  • Blue-Green Deployment – Two environments (blue and green) are maintained. The new version is deployed to the idle environment, and when it’s ready, traffic is switched over instantly. If something goes wrong, you can quickly switch back. This avoids overlap issues but requires more infrastructure.

  • Canary Deployment – The new version is gradually rolled out to a small percentage of users while most traffic continues to the old version. If no issues appear, more traffic is shifted over until the new version is fully live.

  • Reverse Proxies with Health Checks – Tools like NGINX, Traefik, or HAProxy can be configured to handle rolling restarts gracefully, ensuring that only healthy containers receive traffic. This can be done even without Kubernetes.

  • Portainer with Compose – While simpler than Kubernetes, Portainer can orchestrate updates by restarting services one at a time, which reduces downtime compared to restarting the whole stack.
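The blue-green pattern in particular can be implemented with nothing more than a reverse proxy. As a sketch, with NGINX (the `app-blue`/`app-green` service names are hypothetical):

```nginx
# Two upstream pools; only one receives live traffic at a time.
upstream blue  { server app-blue:8080; }
upstream green { server app-green:8080; }

server {
    listen 80;

    location / {
        # To cut over, change "blue" to "green" and run `nginx -s reload`.
        # Reloads are graceful: in-flight requests on the old pool finish
        # normally, so the switch is effectively instantaneous for users.
        proxy_pass http://blue;
    }
}
```

Switching back after a bad release is the same one-line change in reverse, which is what makes the rollback story of blue-green so attractive.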

When to Consider What

For small projects, a brief restart might be acceptable and simpler to manage. For projects with uptime requirements, strategies like blue-green or canary deployments provide stronger guarantees than simple rolling updates. Kubernetes helps automate these patterns, but smaller setups can still achieve them with careful planning using reverse proxies, multiple environments, and tools like Portainer.