Containers & Orchestration
Docker
Docker is a tool we often use to build and run applications in consistent environments. It helps us avoid the “works on my machine” problem by ensuring the same configuration can run locally, in staging, and in production.
Best practices
When working with Docker, we aim to keep images small and efficient. This often means starting from minimal base images such as alpine and using multi-stage builds so that build tooling does not end up in the final image. Security is a priority, so images should be rebuilt regularly to pick up patches and updates. Each container should have a single, clear purpose (for example, a web server, a database, or a cache) rather than combining multiple roles.
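The points above can be sketched in a multi-stage Dockerfile. This is an illustrative example assuming a Node.js application; the stage names, paths, and base image tags are placeholders, not a prescribed setup:

```dockerfile
# Build stage: full toolchain, discarded from the final image
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: only the built artifacts and production dependencies
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
```

Because the runtime stage starts from a fresh minimal base, compilers and dev dependencies from the build stage never reach production.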
Configuration should always be handled through environment variables or secret stores, never hardcoded into the image. Applications should run as non-root users whenever possible to reduce risk. Over time, containers and images can accumulate, so part of maintaining a healthy Docker setup is cleaning up unused resources to avoid wasted space and potential conflicts.
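For the cleanup part, Docker ships built-in commands for reclaiming space. A typical maintenance pass might look like this (run these deliberately, as pruning is destructive):

```shell
# Remove stopped containers, dangling images, unused networks, and build cache
docker system prune

# Also remove unused volumes; check what exists first with `docker volume ls`
docker system prune --volumes
```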
Using Docker Locally
For local development we use Docker to replicate the production setup as closely as possible. Each project defines its own configuration for the services it needs, such as the application runtime, databases, caches, and supporting tools.
There are a few common approaches we use:
docker-compose – Used to spin up multiple containers (application, database, cache, etc.) in a single stack.
Laravel Sail – A docker-compose wrapper designed for Laravel projects. It provides a ready-to-use setup for PHP, databases, and queues, which makes onboarding faster.
Devcontainer (VS Code) – Some projects include a .devcontainer setup that lets developers open the project in VS Code with all dependencies already configured inside a containerized environment. This ensures a consistent toolchain for the entire team.
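As a sketch of the docker-compose approach, a stack with an application, a database, and a cache might look like the following. Service names, credentials, and image tags here are illustrative, not project defaults:

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://postgres:secret@db:5432/app
      REDIS_URL: redis://cache:6379
    depends_on:
      - db
      - cache
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - db-data:/var/lib/postgresql/data
  cache:
    image: redis:7-alpine

volumes:
  db-data:
```

Bringing the whole stack up is then a single `docker compose up -d`, which is what makes onboarding fast.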
Example workflow for local testing:
# Build the image locally from the Dockerfile in the current directory
docker build -t myapp:dev .

# Run the container and map port 3000 in the container to port 3000 on your machine
docker run -p 3000:3000 myapp:dev
Using Docker in Production
In production, Docker ensures applications run in a stable and isolated environment. Instead of being built on the server, applications are packaged into images ahead of time and pushed to a container registry.
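The build-and-push step typically looks like the following; the registry host and version tag are placeholders for whatever registry and tagging scheme a project uses:

```shell
# Build and tag the image with an explicit version
docker build -t registry.example.com/myapp:1.4.2 .

# Push it to the container registry so servers can pull it
docker push registry.example.com/myapp:1.4.2
```

Tagging with an explicit version (rather than only `latest`) keeps deployments and rollbacks reproducible.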
Persistent data such as file uploads or logs is stored in named volumes, while application code and dependencies are baked into the image. Configuration is injected at runtime through environment variables or secret management solutions, ensuring that the same image can be deployed across environments.
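Put together, a production run of a single container might look like this sketch; the volume name, mount path, and env-file location are illustrative:

```shell
# Create a named volume for persistent data (a no-op if it already exists)
docker volume create myapp-uploads

# Run the image with the volume mounted and configuration injected at runtime
docker run -d \
  -p 3000:3000 \
  -v myapp-uploads:/app/storage/uploads \
  --env-file /etc/myapp/production.env \
  registry.example.com/myapp:1.4.2
```

The same image can then be promoted from staging to production by swapping only the env file, never rebuilding.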
Production deployments are handled through orchestration platforms such as Portainer or Kubernetes, which take care of scaling, restarts, and updates. This ensures that deployments are reliable, repeatable, and easier to maintain over time.
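On Kubernetes, the orchestration responsibilities described above are expressed declaratively. A minimal Deployment sketch, with illustrative names and a hypothetical `myapp-secrets` Secret for runtime configuration, might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.4.2
          ports:
            - containerPort: 3000
          envFrom:
            - secretRef:
                name: myapp-secrets
```

The platform then keeps three replicas running, restarts failed containers, and rolls out new image versions gradually.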