
Node.js Container Build and Deploy with Jenkins, Helm, Private Docker Registry and Kubernetes

A guide to setting up a CI/CD pipeline for a containerized Node.js application using Jenkins, Helm, a private Docker repository, and Kubernetes.

Introduction

Hey everyone! This guide is going to walk you through setting up a Jenkins Blue Ocean Pipeline to build a Node.js Application, push it to a private repository, and deploy it to Kubernetes using a basic Helm chart.

What is currently set up?

  • Ubuntu 20.04.2 with 2 vCPUs, 4 GB RAM, and 50 GB storage — this matches the recommended configuration for Jenkins
  • MicroK8s Cluster — refer to my Home Lab Infrastructure post for more details.
  • Docker Private Repository — I am using Sonatype Nexus 3, you can use any Docker registry.

What do I want?

  • Jenkins Blue Ocean Pipeline for Building, Publishing and Deploying a Node.js Application
  • Dockerfile for application container image
  • Helm Chart for deploying to Kubernetes
  • Docker and Jenkins — definitely need these.

Docker and Jenkins

Installing Docker Engine and kubectl

These commands uninstall any older versions of Docker, then install the latest version of Docker CE for Ubuntu.

sudo apt-get remove docker docker-engine docker.io containerd runc

sudo apt-get update

sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

sudo apt-get update

sudo apt-get install -y docker-ce docker-ce-cli containerd.io

sudo usermod -aG docker <username>

Install kubectl with

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

sudo apt-get update

sudo apt-get install -y kubectl

Installing Jenkins and Java

Install Java 11 runtime for Jenkins

sudo apt-get install openjdk-11-jdk

Install the LTS version of Jenkins using the following commands

wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > \
    /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins

Start Jenkins

sudo systemctl start jenkins

Enable Jenkins to start on System boot

sudo systemctl enable jenkins

Add Jenkins user to the docker group so that it can use docker

sudo usermod -aG docker jenkins

You may have to reboot for these changes to take effect.

Next, visit the Jenkins GUI in your browser (port 8080 by default) and enter the initial admin password, which you can read with sudo cat /var/lib/jenkins/secrets/initialAdminPassword. You can then set up Jenkins using the Install Suggested Plugins option to get started quickly.

Installing Blue Ocean, Docker and Kubernetes Plugins

  • Go to Manage Jenkins —> Manage Plugins, select the Available Tab
  • Check Blue Ocean, Docker Pipeline, Kubernetes and CloudBees Docker Build and Publish, then Download now and install after restart
  • Restart Jenkins after the plugins have downloaded.

Dockerfile for application container image

Let’s create the Dockerfile for our Node.js application; in this case, it is an Express.js API server written in TypeScript.

Create a file named Dockerfile in the application directory and add the following to it

FROM node:14

# Create app directory
WORKDIR /usr/src/app

COPY package.json ./
COPY yarn.lock ./

# Install node-modules
RUN yarn install --frozen-lockfile

COPY dist/ ./

EXPOSE 3000

CMD ["node", "server.js"]

Here we are basing the container off the node:14 image.

Then we create the app directory and copy over the package.json and yarn.lock files.

NOTE: If you are not using yarn, you would be copying over package.json and package-lock.json.

We then install the dependencies using either yarn install or npm install based on what package manager you use.

Since this application is written in TypeScript, the compiled Node.js output is in the dist/ folder. You will have to copy your own application code into the working directory.

Next, since this application runs on port 3000, we expose that port on the container. Make sure you substitute the port number that your application actually runs on.

Lastly, we add the command that the container uses to run the application; in this case, it is node server.js.
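One optional addition (not part of the original setup): a .dockerignore next to the Dockerfile keeps local clutter like node_modules out of the build context, since the image installs its own dependencies with yarn install.

```
node_modules
.git
npm-debug.log
yarn-error.log
```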

Jenkinsfile for Blue Ocean Pipeline

Let’s write the Jenkinsfile for our Pipeline.

Here is the final Jenkinsfile for everyone who wants to copy-paste and move on with life, for everyone else, there is a step-by-step breakdown of the pipeline after.

pipeline {
  agent any
  stages {
    stage('Build') {
      agent {
        docker {
          image 'node:14-buster'
        }
      }
      steps {
        sh 'yarn install --frozen-lockfile'
        sh 'yarn run build'
        sh 'tar -cvf builtSources.tar ./dist/'
        stash(name: 'dist-files', includes: 'builtSources.tar', useDefaultExcludes: true)
      }
    }

    stage('Publish') {
      environment {
        registryCredential = 'docker-repo-jenkinsci'
      }
      steps {
        unstash 'dist-files'
        sh 'tar -xvf builtSources.tar'
        script {
          commitId = sh(returnStdout: true, script: 'git rev-parse --short HEAD')
          def appimage = docker.build(imageName + ":" + commitId.trim())
          docker.withRegistry( 'https://docker.tansanrao.com', registryCredential ) {
            appimage.push()
            if (env.BRANCH_NAME == 'main' || env.BRANCH_NAME == 'release') {
              appimage.push('latest')
              if (env.BRANCH_NAME == 'release') {
                appimage.push("release-" + commitId.trim())
              }
            }
          }
        }
      }
    }

    stage('Deploy Dev') {
      when {
        branch 'main'
      }
      environment {
        registryCredential = 'docker-repo-jenkinsci'
      }
      steps {
        script{
          commitId = sh(returnStdout: true, script: 'git rev-parse --short HEAD')
          commitId = commitId.trim()
          withKubeConfig(credentialsId: 'kubeconfig') {
            withCredentials(bindings: [usernamePassword(credentialsId: registryCredential, usernameVariable: 'DOCKER_USERNAME', passwordVariable: 'DOCKER_PASSWORD')]) {
              sh 'kubectl delete secret regcred --namespace=example-dev --ignore-not-found'
              sh 'kubectl create secret docker-registry regcred --namespace=example-dev --docker-server=https://docker.tansanrao.com --docker-username=$DOCKER_USERNAME --docker-password=$DOCKER_PASSWORD [email protected]'
            }
            sh "helm upgrade --set image.tag=${commitId} --install --wait dev-example-service ./chart --namespace example-dev"
          }
        }
      }
    }

    stage('Deploy Prod') {
      when {
        branch 'release'
      }
      environment {
        registryCredential = 'docker-repo-jenkinsci'
      }
      steps {
        script {
          commitId = sh(returnStdout: true, script: 'git rev-parse --short HEAD')
          commitId = commitId.trim()
          echo commitId
          withKubeConfig(credentialsId: 'kubeconfig') {
            withCredentials(bindings: [usernamePassword(credentialsId: registryCredential, usernameVariable: 'DOCKER_USERNAME', passwordVariable: 'DOCKER_PASSWORD')]) {
                sh 'kubectl delete secret regcred --namespace=example-prod --ignore-not-found'
                sh 'kubectl create secret docker-registry regcred --namespace=example-prod --docker-server=https://docker.tansanrao.com --docker-username=$DOCKER_USERNAME --docker-password=$DOCKER_PASSWORD [email protected]'
            }
            sh "helm upgrade --set image.tag=${commitId} --install --wait prod-example-service ./chart --namespace example-prod"
          }
        }
      }
    }
  }
  environment {
    imageName = 'example-service'
  }
}

Okay, that’s a lot, let’s break it down.

Pipeline

pipeline {
  agent any
  stages {
    ...
  }
  environment {
    ...
  }
}

The pipeline block is the root element of our script; it contains the global agent in which the pipeline can execute, the stages that are part of the pipeline, and the common environment variables shared between all stages.

Stages

stages {
  stage('Build') {
	...
  }

  stage('Publish') {
	...
  }

  stage('Deploy Dev') {
	...
  }

  stage('Deploy Prod') {
	...
  }
}

This pipeline has 4 stages:

  • Build
  • Publish
  • Deploy Dev
  • Deploy Prod

The Build stage runs the actual build process and any test cases. The Publish stage packages the application code into a Docker image and pushes it to the registry. The Deploy Dev and Deploy Prod stages are conditional: they deploy the application to Kubernetes using Helm and execute based on the branch currently being built.

Build stage
    stage('Build') {
      agent {
        docker {
          image 'node:14-buster'
        }
      }
      steps {
        sh 'yarn install --frozen-lockfile'
        sh 'yarn run build'
        sh 'tar -cvf builtSources.tar ./dist/'
        stash(name: 'dist-files', includes: 'builtSources.tar', useDefaultExcludes: true)
      }
    }

In this stage, we specify the agent, in this case the node:14-buster Docker container. This tells Jenkins that all the steps in this stage are to be executed inside the specified container. Then we list the steps needed to install Node dependencies and build the application. Finally, we create a tarball of the built sources and use stash to send it back to Jenkins for safekeeping. This lets us persist the compiled code across stages running in different agents.
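The tarball round-trip can be illustrated stand-alone: the Build stage tars dist/ and stashes the archive, and the Publish stage unstashes and extracts it on whatever agent it runs in. The file contents here are made up.

```shell
# Simulate what the Build stage produces
mkdir -p dist
echo 'console.log("hello")' > dist/server.js

tar -cf builtSources.tar ./dist/   # Build stage: tar before stash
rm -rf dist                        # a fresh agent has no dist/
tar -xf builtSources.tar           # Publish stage: extract after unstash

cat dist/server.js
```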

Publish Stage
    stage('Publish') {
      environment {
        registryCredential = 'docker-repo-jenkinsci'
      }
      steps {
        unstash 'dist-files'
        sh 'tar -xvf builtSources.tar'
        script {
          commitId = sh(returnStdout: true, script: 'git rev-parse --short HEAD')
          def appimage = docker.build(imageName + ":" + commitId.trim())
          docker.withRegistry( 'https://docker.tansanrao.com', registryCredential ) {
            appimage.push()
            if (env.BRANCH_NAME == 'main' || env.BRANCH_NAME == 'release') {
              appimage.push('latest')
              if (env.BRANCH_NAME == 'release') {
                appimage.push("release-" + commitId.trim())
              }
            }
          }
        }
      }
    }

Here, we set an environment variable containing the name of the Jenkins credential that holds our Docker registry login. Then come the actual steps: we unstash the built sources and extract them. Inside a script step, we fetch the short commit id of the git repo HEAD, build the image with docker.build, and push it with any required tags using docker.withRegistry(). In my case, I tag the image with latest if it was built from the main or release branch, and additionally with a release- prefixed commit tag when the build is for the release branch.
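As an aside, the .trim() call matters: git rev-parse --short HEAD emits the commit id followed by a newline, and a newline is not valid in a Docker tag. A quick shell illustration (the commit id "abc1234" is made up):

```shell
# Simulated untrimmed output of `git rev-parse --short HEAD`
commitId='abc1234
'
# The shell equivalent of Groovy's .trim()
trimmed="$(printf '%s' "$commitId" | tr -d '[:space:]')"
echo "example-service:${trimmed}"
```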

Deploy Stage
    stage('Deploy Dev') {
      when {
        branch 'main'
      }
      environment {
        registryCredential = 'docker-repo-jenkinsci'
      }
      steps {
        script{
          commitId = sh(returnStdout: true, script: 'git rev-parse --short HEAD')
          commitId = commitId.trim()
          withKubeConfig(credentialsId: 'kubeconfig') {
            withCredentials(bindings: [usernamePassword(credentialsId: registryCredential, usernameVariable: 'DOCKER_USERNAME', passwordVariable: 'DOCKER_PASSWORD')]) {
              sh 'kubectl delete secret regcred --namespace=example-dev --ignore-not-found'
              sh 'kubectl create secret docker-registry regcred --namespace=example-dev --docker-server=https://docker.tansanrao.com --docker-username=$DOCKER_USERNAME --docker-password=$DOCKER_PASSWORD [email protected]'
            }
            sh "helm upgrade --set image.tag=${commitId} --install --wait dev-example-service ./chart --namespace example-dev"
          }
        }
      }
    }

Deploy Dev and Deploy Prod have essentially the same steps. These stages are conditional: Deploy Dev, shown above, executes only when the branch is main; Deploy Prod executes only when the branch is release. In this stage, we use withKubeConfig to provide the kubeconfig from a Jenkins credential. We use withCredentials to fetch the Docker registry credentials and make them available as variables. We then use kubectl to delete the secret containing the credentials if it already exists, and recreate it as a Kubernetes secret so that the deployment can use it as an imagePullSecret (more on this in a bit). Finally, we run helm upgrade to upgrade the application release, or create a new one if it’s the first time.
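To tie this together: the regcred secret created by the pipeline is what the chart's imagePullSecrets setting points at, so the rendered pod spec ends up looking roughly like this (the image tag here is illustrative):

```yaml
spec:
  imagePullSecrets:
    - name: regcred          # matches the secret created by the pipeline
  containers:
    - name: example-service
      image: docker.tansanrao.com/example-service:abc1234   # tag set via --set image.tag=
      ports:
        - containerPort: 3000
```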

Helm Chart

Now that our Jenkinsfile is done, we need to create the Helm Chart for the pipeline to deploy.

Make sure you have Helm installed; if you don’t, you can install it using the snap package on Ubuntu:

sudo snap install helm --classic

Let’s create a chart based on the default Helm starter chart. Make sure you run the following commands in your application directory.

helm create chart

You will now have a chart directory containing all the files for your Helm chart.
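For reference, with Helm 3 the generated layout looks roughly like this (exact contents vary by Helm version):

```
chart/
├── Chart.yaml
├── values.yaml
├── charts/
└── templates/
    ├── NOTES.txt
    ├── _helpers.tpl
    ├── deployment.yaml
    ├── hpa.yaml
    ├── ingress.yaml
    ├── service.yaml
    ├── serviceaccount.yaml
    └── tests/
        └── test-connection.yaml
```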

Edit Chart.yaml and update the chart name, app version and chart version.
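After editing, Chart.yaml might look something like this (the name and versions here are illustrative):

```yaml
apiVersion: v2
name: example-service
description: An example Node.js API service
type: application
version: 0.1.0        # chart version
appVersion: "1.0.0"   # application version
```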

Edit values.yaml to add the proper image path, and add image pull secrets if you are using a private repository:

image:
  repository: docker.tansanrao.com/example-service
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "latest"

imagePullSecrets:
  - name: regcred

Create a Service Account and set a name

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: { }
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: "example-service-sa"

Enable autoscaling if required, in this case, I am leaving it off by default.

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

Edit templates/deployment.yaml in the chart directory to change the port and the liveness/readiness endpoints under the containers section of the YAML file. My liveness and readiness endpoint is at /healthz, which is why I use that here; if you are not certain, leave it as /.

          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
          readinessProbe:
            httpGet:
              path: /healthz
              port: http

Once the Dockerfile, Jenkinsfile and Helm chart are ready, push the changes to GitHub.

Add Secrets to Jenkins

  • From your dashboard, Go to Manage Jenkins —> Manage Credentials
  • Under Stores scoped to Jenkins, select Jenkins
  • Select Global Credentials (unrestricted)
  • On the left, select Add Credentials
  • Create a credential of type Username with password and add your Docker username and password; for the ID, I used 'docker-repo-jenkinsci'
Docker Registry Credentials
  • Next, create another credential of type Secret file containing your kubeconfig
Kubeconfig Credential

Connecting Blue Ocean to GitHub

  • Click on Open Blue Ocean in the sidebar of the dashboard.
  • Create a new Pipeline
  • Create a new GitHub Access Token and paste it in.
Create New Pipeline
  • Choose your repository and run the pipeline.

Done! You have successfully set up a pipeline to build, publish and deploy your code based on the VCS branch.