This first section provides an overview and a brief description of each job in the pipeline.
- hadolint: part of the ‘Quality’ stage. It uses the hadolint Docker image to analyze the Dockerfile and ensure that it adheres to best practices.
- chatgpt_analysis: job in the ‘Docs’ stage. It analyzes the complete pipeline using the GPT-4 model and generates a markdown file as a result.
- docker-hub-build-arm: part of the ‘Docker-hub-build’ stage. It builds Docker images for ARM architecture hardware.
- docker-hub-test-arm: job in the ‘Docker-hub-test’ stage. It tests the Docker image built specifically for ARM hardware.
- push-docker-hub-arm: job in the ‘Docker-hub-pushtag’ stage. It tags and pushes the Docker image built for ARM hardware.
- docker-hub-build: part of the ‘Docker-hub-build’ stage. It builds Docker images for AMD64 architecture hardware.
- docker-hub-test: job in the ‘Docker-hub-test’ stage. It tests the Docker image built for AMD64 hardware.
- push-docker-hub: job in the ‘Docker-hub-pushtag’ stage. It tags and pushes the Docker image built for AMD64 hardware.
- getsquid_vars: part of the ‘Get-version’ stage. It fetches Squid version information and sets environment variables.
- update_dockerhub_readme: job in the ‘Docs’ stage. It updates the README file in the Docker Hub repository.
In the following sections, we will provide a detailed explanation of the purpose of each job in the pipeline.
The hadolint job is focused on maintaining high code quality and adherence to best practices in the Dockerfile.
before_script:
- cd $CI_PROJECT_DIR
script:
- hadolint --ignore DL3008 Dockerfile
The job starts with cd $CI_PROJECT_DIR, which changes the current directory to the project’s root, ensuring that the hadolint command is executed from the correct location.
The command hadolint --ignore DL3008 Dockerfile runs the Hadolint tool, which validates the Dockerfile against a set of recommended best practices. The DL3008 rule, which requires pinning package versions in apt-get install, is ignored here.
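To illustrate what the ignored rule checks, here is a hypothetical Dockerfile fragment (the base image and the pinned version number are made up for this example, not taken from the pipeline):

```dockerfile
FROM debian:bookworm-slim

# Would trigger DL3008 (unpinned package version):
# RUN apt-get update && apt-get install -y curl

# Satisfies DL3008 (version pinned explicitly):
RUN apt-get update && apt-get install -y curl=7.88.1-10+deb12u5 \
    && rm -rf /var/lib/apt/lists/*
```

Ignoring DL3008 is a common trade-off: pinned versions make builds reproducible but break whenever the distribution drops the pinned release from its repositories.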
The chatgpt_analysis job runs the analysis using OpenAI’s GPT-4 model. It sends details about the pipeline jobs to the model and stores the response in a markdown file. The key steps are:
before_script:
- apt update && apt install curl git jq ca-certificates pandoc openssh-client -y --no-upgrade --no-install-recommends --no-install-suggests
- source variables.env
- SQUID_VERSION=squid-$SQUID_VERSION
script:
- JOBS_CONTENT=$(cat .gitlab-ci.yml gitlabci/*)
- LAST_COMMIT=$(git log -1 --pretty=format:"%h %s%n%b")
- JSON_CONTENT=$(jq -n --arg model "gpt-4" --arg content "$CONTENT" '{model:$model, messages:[{role:"user", content:$content}] }')
- RESPONSE=$(curl -X POST https://api.openai.com/v1/chat/completions -H "Authorization:Bearer $CHATGPT_API_KEY" -H "Content-Type:application/json" -d "$JSON_CONTENT")
- ANSWER=$(echo $RESPONSE | jq 'del(.choices[0].message.content)')
- echo "$ANSWER"
- echo -e "$RESPONSE" > chatgpt_analysis_$(date +%Y%m%d).md
In the before_script section:
- dependencies such as curl, git and jq are installed,
- environment variables are sourced from the variables.env file,
- and SQUID_VERSION is set.
In the script section, it:
- fetches the content of the gitlabci jobs,
- gets the details of the last commit of the repository,
- generates a properly formatted JSON payload to post to the OpenAI API,
- sends a POST request to OpenAI’s /v1/chat/completions endpoint,
- processes and stores the response from the OpenAI API,
- and writes the response to a markdown file named chatgpt_analysis_YYYYMMDD.md.
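The date-stamped report name and a minimal guard on the API response can be sketched in isolation; RESPONSE below is a stubbed example body standing in for the real curl output:

```shell
# Stubbed API response standing in for the real curl output.
RESPONSE='{"choices":[{"message":{"content":"pipeline analysis text"}}]}'

# Date-stamped report name, matching the job's final redirect.
REPORT="chatgpt_analysis_$(date +%Y%m%d).md"

# Write the body only if the response does not carry an "error" field.
case "$RESPONSE" in
  *'"error"'*) echo "OpenAI API returned an error" >&2 ;;
  *) printf '%s\n' "$RESPONSE" > "$REPORT" ;;
esac

echo "$REPORT"
```

The error guard is an assumption for illustration; the job as shown writes the raw response unconditionally.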
The docker-hub-build-arm job builds a Docker image specifically for ARM architecture hardware.
before_script:
- docker login -u "$DOCKER_HUB_USER" -p "$DOCKER_HUB_TOKEN" $DOCKER_HUB_REGISTRY
script:
- docker build -f Dockerfile --build-arg SQUID_VERSION=$SQUID_VERSION --pull -t $CONTAINER_BUILD_NOPROD_NAME_ARM .
- docker push $CONTAINER_BUILD_NOPROD_NAME_ARM
In before_script, it logs in to the Docker registry using credentials provided through environment variables.
In script, it builds the Docker image from the project’s Dockerfile, tags it with the ARM build tag, and pushes the image to the Docker registry.
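Putting the two snippets together, the job definition presumably looks roughly like this (the stage name is taken from the overview above; the exact YAML layout is an assumption, not copied from the actual .gitlab-ci.yml):

```yaml
docker-hub-build-arm:
  stage: Docker-hub-build
  before_script:
    - docker login -u "$DOCKER_HUB_USER" -p "$DOCKER_HUB_TOKEN" $DOCKER_HUB_REGISTRY
  script:
    - docker build -f Dockerfile --build-arg SQUID_VERSION=$SQUID_VERSION --pull -t $CONTAINER_BUILD_NOPROD_NAME_ARM .
    - docker push $CONTAINER_BUILD_NOPROD_NAME_ARM
```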
The docker-hub-test-arm job tests the Docker image built specifically for ARM hardware. It verifies that the image works and that the build was successful.
before_script:
- apt update && apt install -y curl --no-upgrade --no-install-recommends --no-install-suggests
script:
- export https_proxy=http://$CONTAINER_TEST_NAME:3128 && curl -k https://www.google.fr
In the before_script section, it installs curl.
In the script section, it sets the https_proxy environment variable to point at the freshly built Squid container and sends a test curl request to google.fr, verifying that the proxy inside the Docker image works as expected.
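The smoke test boils down to routing one HTTPS request through the Squid container. A sketch, with the network call left commented out and a hypothetical container name as the fallback default:

```shell
# $CONTAINER_TEST_NAME is provided by the pipeline; "squid-test" is a
# hypothetical fallback used here for illustration only.
CONTAINER_TEST_NAME="${CONTAINER_TEST_NAME:-squid-test}"

# Route all HTTPS traffic of subsequent commands through the proxy.
export https_proxy="http://$CONTAINER_TEST_NAME:3128"

# The job then issues the request; a non-zero curl exit code fails the job:
# curl -k https://www.google.fr

echo "$https_proxy"
```

Because the job only checks curl's exit status, any response relayed by the proxy counts as success; the target site itself is arbitrary.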
The push-docker-hub-arm job tags and pushes the ARM Docker image.
before_script:
- docker login -u "$DOCKER_HUB_USER" -p "$DOCKER_HUB_TOKEN" $DOCKER_HUB_REGISTRY
script:
- docker pull $CONTAINER_BUILD_NOPROD_NAME_ARM
- docker tag $CONTAINER_BUILD_NOPROD_NAME_ARM $HUB_REGISTRY_IMAGE:$SQUID_VERSION-arm
- docker push $HUB_REGISTRY_IMAGE:$SQUID_VERSION-arm
- docker tag $CONTAINER_BUILD_NOPROD_NAME_ARM $HUB_REGISTRY_IMAGE:latest-arm
- docker push $HUB_REGISTRY_IMAGE:latest-arm
In before_script, it logs in to the Docker registry.
In script, it:
- pulls the Docker image built for ARM architecture,
- tags it with the Squid version number,
- pushes it to the Docker registry,
- tags it as ‘latest-arm’,
- and pushes the ‘latest-arm’ tag to the Docker registry.
The docker-hub-build, docker-hub-test and push-docker-hub jobs repeat the same steps for AMD64 hardware.
The getsquid_vars job fetches the latest Squid version and stores it in an environment variable.
script:
- apt update && apt install git curl ca-certificates -y --no-upgrade --no-install-recommends --no-install-suggests
- export SQUID_VERSION=$(curl -LsXGET https://github.com/squid-cache/squid/releases/latest | grep -m 1 "Release" | cut -d " " -f4 |tr -d 'v')
- echo "SQUID_VERSION=$SQUID_VERSION" > variables.env
- echo $SQUID_VERSION
- sed -i "s/{{SQUID_VERSION}}/$SQUID_VERSION/g" README_template.md
- sed -i "s/{{DATE}}/$(date +%Y%m%d)/g" README_template.md
- cp README_template.md README.md
- git config user.email "fredbcode"
- git config user.name "fredbcode"
- git add README.md
- git commit -m "README Auto update [skip ci]" || true
- git push https://$GITLAB_TOKEN@gitlab.com/fredbcode-images/squid.git HEAD:master || true
This script installs the necessary tools (curl, git, ca-certificates), fetches the latest Squid version, stores it in SQUID_VERSION and saves it to variables.env. It then fills in the placeholders in the README template, replaces README.md with the result, and finally commits and pushes the changes to the Git repository.
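The templating step can be reproduced in isolation; the one-line template below is a made-up stand-in for the real README_template.md:

```shell
# Minimal stand-in for the real README template.
printf 'Squid {{SQUID_VERSION}} image, updated {{DATE}}\n' > README_template.md

# Example value; the job scrapes this from the GitHub releases page.
SQUID_VERSION="6.9"

# Substitute both placeholders in place, exactly as the job does.
sed -i "s/{{SQUID_VERSION}}/$SQUID_VERSION/g" README_template.md
sed -i "s/{{DATE}}/$(date +%Y%m%d)/g" README_template.md

cp README_template.md README.md
cat README.md
```

Note that the job modifies README_template.md in place and then copies it over README.md, so on a rerun within the same checkout the placeholders would already be gone; this works because each pipeline starts from a fresh clone.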
The update_dockerhub_readme job updates the README file in the Docker Hub repository.
script:
- README_CONTENT=$(cat README.md)
- PAYLOAD=$(jq -n --arg desc "$README_CONTENT" '{"full_description":$desc}')
- TOKEN=$(curl -v -s -X POST -H "Content-Type:application/json" -d '{"username":"'"$DOCKER_HUB_USER"'","password":"'"$DOCKER_HUB_PASSWORD"'"}' https://hub.docker.com/v2/users/login/ | jq -r .token)
- curl -X PATCH -H "Authorization:JWT $TOKEN" -H "Content-Type:application/json" -d "$PAYLOAD" https://hub.docker.com/v2/repositories/$HUB_REGISTRY_IMAGE
This script reads the contents of README.md, packs it into a JSON payload, logs in to Docker Hub to retrieve an authentication token, and then sends a PATCH request to update the repository description on Docker Hub.
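The jq -n --arg invocation exists to JSON-escape the README text before it goes on the wire. A hand-rolled sketch of the same idea, for single-line input only (jq also handles newlines and control characters, which this sed does not):

```shell
# Example README content with characters that need escaping.
README_CONTENT='Squid proxy image, "unofficial" build'

# Escape backslashes first, then double quotes.
ESCAPED=$(printf '%s' "$README_CONTENT" | sed 's/\\/\\\\/g; s/"/\\"/g')
PAYLOAD="{\"full_description\":\"$ESCAPED\"}"

echo "$PAYLOAD"
```

This is why the job delegates to jq: a README full of markdown, newlines and quotes would be painful and error-prone to escape by hand.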
Each job may reference several parameters, environment variables, and files throughout its execution. The most commonly used ones and their purposes are:
- $CI_PROJECT_DIR: a GitLab predefined variable that points to the root directory of the GitLab project.
- $DOCKER_HUB_USER and $DOCKER_HUB_TOKEN or $DOCKER_HUB_PASSWORD: environment variables used to log in to Docker Hub. They are usually stored as secrets in the GitLab settings.
- $HUB_REGISTRY_IMAGE and $CONTAINER_BUILD_NOPROD_NAME_ARM or $CONTAINER_BUILD_NOPROD_NAME_AMD64: used to reference Docker image names.
- $SQUID_VERSION: an environment variable holding the Squid version, set by the getsquid_vars job.
Job dependencies are set with the needs keyword. This mechanism allows a job to run as soon as its dependencies finish, rather than waiting for the whole previous stage, helping to minimize pipeline execution time.
For example, docker-hub-test-arm depends on docker-hub-build-arm and getsquid_vars (and so on for the rest of the jobs), meaning it can only run after these jobs have completed successfully.
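In GitLab CI syntax, that dependency would be declared roughly as follows (an assumed excerpt, not copied from the actual .gitlab-ci.yml):

```yaml
docker-hub-test-arm:
  stage: Docker-hub-test
  needs:
    - docker-hub-build-arm
    - getsquid_vars
```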
Each job produces artifacts that can be either files created during the job or the console output that the job produced.
For instance, the chatgpt_analysis job generates a markdown file called chatgpt_analysis_YYYYMMDD.md and an HTML file chatgpt_analysis_YYYYMMDD.html.
The created Docker images are uploaded into DockerHub by jobs like
docker-hub-build-arm, docker-hub-build,
push-docker-hub-arm, and push-docker-hub.
getsquid_vars creates an environment file, variables.env, which is later passed on to the jobs that need it.
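Handing variables.env to later jobs is typically done either as a plain file artifact, which would explain the source variables.env step seen in chatgpt_analysis, or via GitLab’s dotenv artifact report. A sketch of the dotenv variant (assumed, not taken from the actual pipeline):

```yaml
getsquid_vars:
  stage: Get-version
  script:
    - echo "SQUID_VERSION=$SQUID_VERSION" > variables.env
  artifacts:
    reports:
      dotenv: variables.env
```

With the dotenv report, downstream jobs that list getsquid_vars in needs receive SQUID_VERSION as an ordinary environment variable without sourcing the file themselves.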
The latest commit, 9790c65 README Auto update [skip ci], is actually produced by the getsquid_vars job. It updates the README file with the latest Squid version number and date, then commits and pushes the changes to the repository.
README Auto update [skip ci] indicates that this commit only updates the README file, and the [skip ci] keyword prevents GitLab CI from starting a new pipeline for it. This is especially useful for documentation-only commits, where triggering the CI/CD pipeline again would be unnecessary (and, in this case, would loop: the pipeline’s own push would otherwise start another pipeline).