Docker for Beginners: Your First Container
Docker packages your application and everything it needs to run -- code, dependencies, system libraries, configuration -- into a single portable unit called a container. That container runs identically on your laptop, your colleague's laptop, a CI/CD server, and a production cloud instance. No more "it works on my machine." No more dependency conflicts. No more spending hours configuring environments.
If you are learning DevOps or cloud engineering, Docker is one of the most important tools in your toolkit. This guide takes you from zero to running your first container, with practical examples you can follow along with right now.
What problem does Docker solve?
Before Docker, deploying software was painful. An application might require Python 3.9, a specific version of PostgreSQL, three system libraries, and a particular version of OpenSSL. Setting all of this up on a development machine was tedious. Setting it up identically on a staging server, a CI/CD runner, and three production servers was a nightmare.
Developers would spend hours debugging issues that came down to "the production server has a different version of libssl." Operations teams would maintain complex setup scripts that broke whenever a dependency changed.
Docker eliminates this entire category of problems. You define the exact environment your application needs in a Dockerfile. Docker builds that environment into an image. Anyone, anywhere, can run that image and get the exact same result.
This is why Docker transformed the software industry. It made deployments reproducible, environments portable, and the "works on my machine" excuse obsolete.
Containers vs virtual machines
Docker containers are often compared to virtual machines, but they work fundamentally differently.
Virtual machines
A virtual machine (VM) runs a complete operating system on virtualised hardware. Each VM has its own kernel, its own system libraries, and its own copy of everything the OS provides. A VM running Ubuntu consumes gigabytes of disk space and takes minutes to boot.
Containers
A container shares the host operating system's kernel. It only contains the application and its dependencies -- not a full OS. A container based on Alpine Linux might be 5MB. It starts in seconds, not minutes.
| Feature | Virtual machine | Container |
|---|---|---|
| Size | Gigabytes | Megabytes |
| Boot time | Minutes | Seconds |
| Isolation | Full OS isolation | Process-level isolation |
| Resource usage | Heavy (each VM runs full OS) | Light (shares host kernel) |
| Portability | Less portable (hypervisor-specific) | Highly portable (runs anywhere Docker runs) |
| Use case | Running different OS types | Packaging and deploying applications |
Containers are not a replacement for VMs in every scenario. VMs provide stronger isolation and can run different operating systems (Linux VMs on a Windows host). But for application deployment -- which is what DevOps engineers do most -- containers are superior.
Key Docker concepts
Images
An image is a read-only template that contains everything needed to run an application: the base operating system, application code, dependencies, and configuration. Images are built from Dockerfiles and stored in registries (Docker Hub, Amazon ECR, GitHub Container Registry).
Containers
A container is a running instance of an image. You can run multiple containers from the same image, each with its own state and isolated from each other. Containers are ephemeral -- when you stop and remove a container, any changes made inside it are lost (unless you use volumes).
Dockerfiles
A Dockerfile is a text file containing instructions for building an image. Each instruction creates a layer in the image. Docker caches layers, so rebuilding an image after a small change is fast.
Registries
A registry stores Docker images so they can be shared and deployed. Docker Hub is the default public registry. Most organisations also use private registries (AWS ECR, Google Artifact Registry) for their proprietary images.
Writing your first Dockerfile
Let us build a container for a simple Node.js application. Create a project directory with three files: the application code (app.js), a package.json manifest, and a Dockerfile.
First, a minimal application (app.js):
```javascript
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ status: 'healthy', message: 'Hello from Docker!' }));
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
```
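The Dockerfile in the next step copies package*.json into the image, so the build needs a manifest even though this app uses only Node's built-in http module and has no third-party dependencies. A minimal package.json along these lines is enough (the name and version are placeholders):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  }
}
```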
Now the Dockerfile:
```dockerfile
# Start from the official Node.js image
FROM node:20-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy package files first (for better caching)
COPY package*.json ./

# Install production dependencies only
RUN npm install --omit=dev

# Copy application code
COPY . .

# Tell Docker which port the app uses
EXPOSE 3000

# Command to run when the container starts
CMD ["node", "app.js"]
```
Each line in the Dockerfile is an instruction:
- FROM sets the base image. node:20-alpine gives us Node.js 20 on a minimal Alpine Linux base.
- WORKDIR sets the working directory for subsequent commands.
- COPY copies files from your machine into the image.
- RUN executes a command during the build process.
- EXPOSE documents which port the application uses (it does not publish the port -- that happens at run time).
- CMD specifies the default command when a container starts.
Building and running your container
Build the image
```bash
docker build -t my-app:1.0 .
```
The -t flag tags the image with a name and version. The . tells Docker to use the current directory as the build context (where to find the Dockerfile and files to copy).
Docker executes each instruction in the Dockerfile, creating layers. You will see output for each step. The first build downloads the base image and installs dependencies. Subsequent builds are faster because Docker caches unchanged layers.
Run a container
```bash
docker run -d -p 8080:3000 --name my-app my-app:1.0
```
Breaking this down:
- -d runs the container in the background (detached mode)
- -p 8080:3000 maps port 8080 on your machine to port 3000 in the container
- --name my-app gives the container a readable name
- my-app:1.0 is the image to run
Your application is now running. Visit http://localhost:8080 and you will see the JSON response.
Managing containers
```bash
# List running containers
docker ps

# View container logs
docker logs my-app

# Follow logs in real time
docker logs -f my-app

# Execute a command inside a running container
docker exec -it my-app sh

# Stop a container
docker stop my-app

# Remove a stopped container
docker rm my-app

# List all images
docker images

# Remove an image
docker rmi my-app:1.0
```
The docker exec -it my-app sh command is particularly useful for debugging. It opens a shell inside the running container, letting you inspect files, check environment variables, and test network connectivity from inside the container's environment.
Docker Compose for multi-container applications
Real applications rarely run as a single container. A typical web application needs an application server, a database, and possibly a cache, a message queue, or a reverse proxy. Docker Compose lets you define and run multi-container applications with a single configuration file.
Create a docker-compose.yml file:
```yaml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "8080:3000"
    environment:
      - DATABASE_URL=postgres://user:password@db:5432/myapp
      - NODE_ENV=production
    depends_on:
      - db

  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=myapp
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

volumes:
  db-data:
```
This defines two services: your application and a PostgreSQL database. The depends_on directive ensures the database starts before the application. The volumes section creates persistent storage so database data survives container restarts.
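One caveat: depends_on only waits for the database container to start, not for PostgreSQL to be ready to accept connections. If your application cannot tolerate a brief connection failure at startup, you can add a healthcheck and gate the app on it. A sketch, using the pg_isready tool bundled in the postgres image (service and credential names match the example above):

```yaml
services:
  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy
```

With condition: service_healthy, Compose delays starting the app until the database passes its healthcheck rather than merely existing.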
Running with Docker Compose
```bash
# Start all services
docker compose up -d

# View logs for all services
docker compose logs

# View logs for a specific service
docker compose logs app

# Stop all services
docker compose down

# Stop and remove volumes (destroys database data)
docker compose down -v
```
Docker Compose transforms a complex multi-service setup into a single command. New team members can run docker compose up and have the entire application stack running in seconds, configured identically to everyone else.
Real-world Docker use cases
Local development environments
Instead of installing PostgreSQL, Redis, and Elasticsearch on your laptop, run them as containers. Your development environment matches production, and switching between projects with different requirements is instant.
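As a sketch, a throwaway docker-compose.yml for local development services might look like this (image tags, ports, and the password are illustrative):

```yaml
services:
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_PASSWORD=devpassword
    ports:
      - "5432:5432"

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
```

Run docker compose up -d to start both, and docker compose down to throw them away when you switch projects.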
CI/CD pipelines
CI/CD pipelines build Docker images, run tests inside containers, and push images to registries. The pipeline builds the same image that runs in production, eliminating environment differences as a source of bugs.
```yaml
# GitHub Actions example
- name: Build Docker image
  run: docker build -t my-app:${{ github.sha }} .
- name: Run tests
  run: docker run my-app:${{ github.sha }} npm test
- name: Push to registry
  run: docker push my-app:${{ github.sha }}
```
Microservices deployment
Each microservice runs in its own container with its own dependencies. The user service can use Python 3.11 while the payment service uses Node.js 20. Containers provide isolation without the overhead of separate virtual machines.
Consistent staging and production
The image you test in staging is the exact image you deploy to production. Not a similar environment -- the exact same binary artifact. This eliminates "it worked in staging" as a failure mode.
Common beginner mistakes
1. Running as root inside containers
By default, processes in Docker containers run as root. This is a security risk. Always create a non-root user in your Dockerfile:
```dockerfile
# Create a non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Switch to that user
USER appuser
```
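Note that the addgroup/adduser flags above are Alpine (BusyBox) syntax. On Debian-based images such as node:20 or node:20-slim, the rough equivalent would be:

```dockerfile
# Debian/Ubuntu equivalent of the Alpine commands above
RUN groupadd --system appgroup && useradd --system --gid appgroup appuser
USER appuser
```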
2. Using the latest tag in production
```dockerfile
# Bad -- "latest" is ambiguous and changes over time
FROM node:latest

# Good -- pin to a specific version
FROM node:20-alpine
```
The latest tag can change without warning. A build that worked yesterday might break today because the base image updated. Pin your versions for reproducible builds.
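For fully reproducible builds you can go one step further and pin the base image to an immutable digest alongside the tag. The digest below is a placeholder; find the real one with docker images --digests or in your registry's UI:

```dockerfile
# Tag for readability, digest for immutability -- a digest never
# changes, even if someone re-pushes the tag
FROM node:20-alpine@sha256:<digest-from-your-registry>
```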
3. Not using .dockerignore
Without a .dockerignore file, Docker copies everything in your project directory into the image -- including node_modules, .git, test files, and documentation. Create a .dockerignore:
```
node_modules
.git
.gitignore
README.md
docker-compose.yml
.env
tests/
```
This makes builds faster and images smaller.
4. Installing unnecessary dependencies
Every package you install increases image size and attack surface. Only install what your application needs to run. Use --production or --omit=dev flags to exclude development dependencies.
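A multi-stage build takes this further: do the install (and any compile step) in one stage, then copy only the runtime artifacts into a clean final image. A sketch for the Node.js example above -- for a plain JavaScript app the saving is modest, but the pattern pays off as soon as build tooling or dev dependencies enter the picture:

```dockerfile
# Build stage: install dependencies with the full toolchain available
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY . .

# Runtime stage: only the app and its production dependencies
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app ./
EXPOSE 3000
CMD ["node", "app.js"]
```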
5. Not leveraging layer caching
Docker caches each layer. If a layer has not changed, Docker reuses the cached version. Order your Dockerfile instructions from least-frequently-changed to most-frequently-changed:
```dockerfile
# Good order -- dependencies change less often than code
COPY package*.json ./
RUN npm install
COPY . .

# Bad order -- code change invalidates the npm install cache
COPY . .
RUN npm install
```
With the good order, changing your application code does not trigger a full npm install. Docker uses the cached dependency layer and only rebuilds the final copy step.
Where Docker fits in the DevOps toolchain
Docker does not exist in isolation. It is one layer in a larger stack:
- Linux -- containers run Linux processes
- Git -- Dockerfiles and compose files are version-controlled
- Docker -- packages applications into containers
- CI/CD -- builds images, runs tests, pushes to registries
- Kubernetes -- orchestrates containers across clusters at scale
- Terraform -- provisions the infrastructure containers run on
Docker is the bridge between writing code and running it reliably in production. It is the reason developers and operations teams can speak the same language: the application runs in a container, the container runs the same everywhere, and the image is the single source of truth.
For a deeper comparison of Docker and Kubernetes and when to use each, see our guide on Docker vs Kubernetes.
What to do next
- Install Docker Desktop on your machine (available for macOS, Windows, and Linux)
- Build the example above -- write the Dockerfile, build the image, run the container
- Containerise one of your own projects -- pick an existing application and write a Dockerfile for it
- Add a database with Docker Compose -- connect your application to a PostgreSQL container
- Push an image to Docker Hub -- create a free account and publish your first image
The best way to learn Docker is to use it for your actual development work. Replace locally installed databases and services with containers. Write Dockerfiles for your projects. Break things and debug them with docker logs and docker exec. That hands-on experience is what makes the knowledge stick.
Ola
Founder, CloudPros
Building the most hands-on DevOps bootcamp for the AI era. 16 weeks of real infrastructure, real projects, real career outcomes.
