CI/CD Pipeline Tutorial for Absolute Beginners
A CI/CD pipeline automatically builds, tests, and deploys your code every time you push changes. Instead of manually running tests, building artefacts, and copying files to servers, you define the process once in a configuration file, and it runs identically every time. Push code, the pipeline handles the rest.
This tutorial walks you through CI/CD from first principles. By the end, you will have a working pipeline that builds and tests a Node.js application using GitHub Actions -- and you will understand every line of the configuration.
What CI/CD actually means
CI/CD is two practices that work together:
CI -- Continuous Integration is the practice of frequently merging code changes into a shared repository, with each merge triggering an automated build and test. "Continuous" means every push, not once a week. "Integration" means combining your changes with everyone else's code and verifying nothing breaks.
CD -- Continuous Delivery or Continuous Deployment is the practice of automatically preparing code for release (Delivery) or automatically deploying it to production (Deployment).
| Term | What it means | What triggers it |
|---|---|---|
| Continuous Integration (CI) | Code is built and tested automatically | Every push or pull request |
| Continuous Delivery (CD) | Code is built, tested, and ready to deploy | Every merge to main; human approves deployment |
| Continuous Deployment (CD) | Code is built, tested, and deployed automatically | Every merge to main; no human required |
The distinction between Delivery and Deployment matters. Most teams start with Continuous Delivery -- automatic testing with manual deployment approval -- and evolve to Continuous Deployment as their test coverage and confidence grow.
Why CI/CD matters
Without CI/CD, the deployment process looks like this:
- Developer finishes a feature
- Developer manually runs tests (or forgets to)
- Developer builds the application locally
- Developer copies files to the server (via FTP, SCP, or SSH)
- Developer restarts the application
- Developer checks if it works
- If something breaks, developer scrambles to figure out which change caused it
Every step is manual, error-prone, and time-consuming. With CI/CD:
- Developer pushes code to Git
- Everything else happens automatically
The concrete benefits:
- Bugs caught earlier -- automated tests run on every push, catching issues before they reach production
- Faster releases -- deployments that took hours of manual work happen in minutes
- Consistent process -- the pipeline runs the same way every time, eliminating human error
- Confidence to deploy -- when every change is tested automatically, you can deploy multiple times per day without fear
- Collaboration -- everyone's code is integrated continuously, preventing "merge hell" where changes conflict after weeks of isolation
Teams with mature CI/CD pipelines deploy 200 times more frequently than teams without, with 24 times faster recovery from failures (DORA metrics). This is why CI/CD is a core DevOps practice and a requirement in virtually every DevOps job description.
The four stages of a CI/CD pipeline
Every pipeline follows the same fundamental structure, regardless of the tool:
Stage 1: Source
The pipeline triggers when code changes are detected. This is usually a push to a Git branch or the creation of a pull request.
Developer pushes code → Git repository detects the change → Pipeline starts
Stage 2: Build
The pipeline compiles the code, installs dependencies, and creates a deployable artefact. For a Node.js application, this means running npm ci and npm run build. For a Go application, go build. For a Docker-based workflow, docker build.
Install dependencies → Compile/build → Create artefact
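For the Node.js project built later in this tutorial, the build stage boils down to two commands (assuming a build script is defined in package.json):

```shell
npm ci          # install the exact dependency versions pinned in package-lock.json
npm run build   # produce the deployable artefact, e.g. a dist/ directory
```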
Stage 3: Test
The pipeline runs automated tests against the built application. This typically includes unit tests (testing individual functions), integration tests (testing components together), and sometimes end-to-end tests (testing the full user workflow).
Unit tests → Integration tests → (Optional) E2E tests
If any test fails, the pipeline stops. The code does not proceed to deployment. The developer is notified and must fix the issue.
Stage 4: Deploy
If all tests pass, the pipeline deploys the application. This could mean pushing a Docker image to a registry, uploading files to a server, or triggering a Kubernetes deployment.
Tests pass → Deploy to staging → (Optional) Manual approval → Deploy to production
The deploy stage often has multiple environments: staging (for final verification) and production (for real users). Many teams require manual approval before production deployment.
Building your first pipeline with GitHub Actions
GitHub Actions is the easiest CI/CD tool to start with. It is free for public repositories, requires no server setup, and is configured with a single YAML file in your repository.
Prerequisites
- A GitHub account
- A repository with a Node.js application (or create a simple one)
- Basic understanding of Git (push, pull, branches)
Step 1: Understand the file structure
GitHub Actions workflows live in the .github/workflows/ directory of your repository. Each YAML file in this directory is a separate workflow.
```
your-repo/
├── .github/
│   └── workflows/
│       └── ci.yml        ← Your pipeline configuration
├── src/
│   └── index.js
├── tests/
│   └── index.test.js
├── package.json
└── README.md
```
Step 2: Create a simple Node.js application
If you do not already have a project, create one:
```shell
mkdir my-cicd-project && cd my-cicd-project
npm init -y
```
Create a simple application file (src/index.js):
```javascript
function add(a, b) {
  return a + b;
}

function multiply(a, b) {
  return a * b;
}

module.exports = { add, multiply };
```
Create a test file (tests/index.test.js):
```javascript
const { add, multiply } = require('../src/index');

test('adds two numbers', () => {
  expect(add(2, 3)).toBe(5);
});

test('multiplies two numbers', () => {
  expect(multiply(4, 5)).toBe(20);
});
```
Install Jest as a testing framework:
```shell
npm install --save-dev jest
```
Add a test script to package.json:
```json
{
  "scripts": {
    "test": "jest"
  }
}
```
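With the script in place, a quick local run confirms the suite passes before you automate it:

```shell
npm test    # runs the "jest" script added above; exits non-zero if any test fails
```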
Step 3: Write the pipeline configuration
Create the file .github/workflows/ci.yml:
```yaml
name: CI Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run linter
        run: npm run lint --if-present

      - name: Run tests
        run: npm test

      - name: Build application
        run: npm run build --if-present
```
Step 4: Understand every line
Let us break down this configuration:
name: CI Pipeline -- A human-readable name for the workflow. This appears in the GitHub Actions tab.
on: -- Defines when the pipeline runs. This pipeline triggers on pushes to main and on pull requests targeting main.
```yaml
on:
  push:
    branches: [main]      # Runs when code is pushed to main
  pull_request:
    branches: [main]      # Runs when a PR is opened against main
```
jobs: -- A workflow contains one or more jobs. Each job runs on a fresh virtual machine.
runs-on: ubuntu-latest -- The job runs on a fresh Ubuntu server provided by GitHub. This server exists only for the duration of the pipeline run.
steps: -- The sequence of commands the job executes:
- Checkout code -- Clones your repository onto the runner. Without this, the runner is an empty machine.
- Set up Node.js -- Installs Node.js 20 and configures npm caching for faster subsequent runs.
- Install dependencies -- npm ci installs packages from package-lock.json exactly as specified (more reliable than npm install in CI).
- Run linter -- Runs your linter if one is configured. --if-present means the step does not fail if no lint script exists.
- Run tests -- Executes your test suite. If any test fails, the pipeline stops here.
- Build application -- Builds the application if a build script exists.
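As an illustration of what --if-present looks for: if you adopted ESLint, the lint step would pick up a script like the one below. The eslint setup itself is assumed here, not part of this tutorial:

```json
{
  "scripts": {
    "test": "jest",
    "lint": "eslint ."
  }
}
```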
Step 5: Push and watch it run
```shell
git add .
git commit -m "Add CI pipeline"
git push origin main
```
Go to your repository on GitHub, click the "Actions" tab, and you will see your pipeline running. Each step executes in order, with green checkmarks for success or red crosses for failure.
That is your first CI/CD pipeline. Every future push to main or pull request will trigger this pipeline automatically.
Adding deployment to the pipeline
The pipeline above covers CI (build and test). Let us add CD (deploy). This example deploys to a staging server when code is pushed to main:
```yaml
name: CI/CD Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test

      - name: Build application
        run: npm run build --if-present

  deploy-staging:
    needs: build-and-test
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Deploy to staging
        run: |
          echo "Deploying to staging server..."
          # In a real pipeline, this would be:
          # rsync -avz ./build/ user@staging-server:/var/www/html/
          # or: docker push myregistry/myapp:latest
          # or: kubectl apply -f k8s/
```
Key additions:
- needs: build-and-test -- The deploy job only runs after build-and-test succeeds. If tests fail, deployment never happens.
- if: github.event_name == 'push' && github.ref == 'refs/heads/main' -- Deploy only on pushes to main, not on pull requests. Pull requests get tested but not deployed.
Common mistakes beginners make
1. Not running tests in CI
The most common mistake is creating a pipeline that builds but does not test. A pipeline without tests is a conveyor belt that ships broken code faster. Always include a test step, even if your test suite is small.
2. Using npm install instead of npm ci
npm install can update package-lock.json, leading to inconsistent builds. npm ci installs exact versions from the lock file, ensuring the CI environment matches what you tested locally. Always use npm ci in pipelines.
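The difference in behaviour, sketched as commands:

```shell
# npm install resolves versions against package.json and may rewrite
# package-lock.json, so two runs can produce different dependency trees
npm install

# npm ci removes node_modules, installs exactly what the lock file pins,
# and fails fast if package.json and package-lock.json disagree
npm ci
```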
3. Hardcoding secrets in the workflow file
Never put API keys, passwords, or tokens directly in your YAML file. Use GitHub's encrypted secrets:
```yaml
# Bad -- secret visible in the file
- run: curl -H "Authorization: Bearer sk-12345" https://api.example.com

# Good -- secret stored in GitHub Settings > Secrets
- run: curl -H "Authorization: Bearer ${{ secrets.API_TOKEN }}" https://api.example.com
```
4. Running everything in a single step
Break your pipeline into discrete steps. If a single step contains npm install && npm test && npm run build && deploy, you cannot tell which part failed when it breaks. Separate steps give you clear failure messages.
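A sketch of the difference, using the same commands as the tutorial pipeline:

```yaml
# Bad -- one opaque step; a red cross here does not say what broke
- name: Do everything
  run: npm ci && npm test && npm run build

# Good -- discrete steps, so the failing stage is obvious at a glance
- name: Install dependencies
  run: npm ci
- name: Run tests
  run: npm test
- name: Build application
  run: npm run build
```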
5. Not caching dependencies
Installing dependencies from scratch on every run wastes time. Use caching:
```yaml
- uses: actions/setup-node@v4
  with:
    node-version: '20'
    cache: 'npm'    # Caches npm's download cache between runs
```
This can reduce pipeline runtime from 3 minutes to 30 seconds for projects with many dependencies.
6. Ignoring pipeline failures
A failing pipeline is a red alert, not background noise. If the team gets used to ignoring pipeline failures ("oh it is always failing, just merge it"), you lose the entire benefit of CI/CD. Fix failures immediately or the pipeline becomes useless.
Beyond your first pipeline: next steps
Once your basic pipeline works, here are the patterns to learn next:
Matrix builds
Test across multiple Node.js versions simultaneously:
```yaml
strategy:
  matrix:
    node-version: [18, 20, 22]
steps:
  - uses: actions/setup-node@v4
    with:
      node-version: ${{ matrix.node-version }}
```
This runs your tests on Node 18, 20, and 22 in parallel, ensuring compatibility.
Docker-based pipelines
Build and push Docker images as part of your pipeline:
```yaml
- name: Build Docker image
  run: docker build -t myapp:${{ github.sha }} .

- name: Push to registry
  run: |
    echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
    docker push myapp:${{ github.sha }}
```
Environment-based deployments
Deploy to staging automatically, then require manual approval for production:
```yaml
deploy-production:
  needs: deploy-staging
  runs-on: ubuntu-latest
  environment: production    # Requires manual approval in GitHub Settings
  steps:
    - name: Deploy to production
      run: echo "Deploying to production..."
```
Kubernetes deployments
Update a Kubernetes deployment with the new image:
```yaml
- name: Deploy to Kubernetes
  run: |
    kubectl set image deployment/myapp myapp=myregistry/myapp:${{ github.sha }}
    kubectl rollout status deployment/myapp
```
Where CI/CD fits in the DevOps toolchain
CI/CD is the connective tissue of the DevOps toolchain. It ties together every other tool:
- Git -- triggers the pipeline
- Linux commands -- execute within pipeline steps
- Docker -- builds container images in the pipeline
- Terraform -- provisions infrastructure as a pipeline step
- Kubernetes -- receives deployments from the pipeline
- Monitoring -- verifies deployments after the pipeline completes
Without CI/CD, all these tools are disconnected manual processes. With CI/CD, they form an automated workflow from code commit to production deployment. The full DevOps tools guide maps out how each tool connects.
Most teams' CI/CD maturity evolves in stages:
- Stage 1 -- Manual everything (where most people start)
- Stage 2 -- Automated testing on push (basic CI)
- Stage 3 -- Automated deployment to staging (basic CD)
- Stage 4 -- Automated production deployment with approval gates
- Stage 5 -- Full Continuous Deployment with feature flags and canary releases
You do not need to reach stage 5 on day one. Start with stage 2 -- automated testing on every push -- and evolve from there. Even that single improvement will change how you work.