In the list of stages defined in the `.gitlab-ci.yml` file, we have 10 jobs, from Quality to Docs. Here is a brief description of each job:
- **Quality**: This job's responsibility is to assure the quality of the Dockerfile by using hadolint, a Dockerfile linter.
- **Get-version**: This job fetches the latest version of Squid from the GitHub releases.
- **Docker-Hub-build (ARM & AMD64)**: These jobs build the Docker images for the respective CPU architectures, tag them, and push them to Docker Hub.
- **Docker-Hub-test (ARM & AMD64)**: These jobs run tests against the built images, ensuring the images work as expected.
- **Docker-Hub-pushtag (ARM & AMD64)**: These jobs push the respective Docker images to Docker Hub with the latest and version-specific tags.
- **Test**: Runs a suite of tests that validate the Docker images.
- **Docs**: In the final stage, the ChatGPT analysis takes place and the project's README is updated on Docker Hub.
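From these job descriptions, the stage list at the top of `.gitlab-ci.yml` presumably looks something like the sketch below; the stage names are taken from the job definitions quoted later, and the exact position of `Test` is an assumption:

```yaml
stages:
  - Quality
  - Get-version
  - Docker-hub-build
  - Docker-hub-test
  - Docker-hub-pushtag
  - Test
  - Docs
```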
In this stage, hadolint, a popular Dockerfile linter, is used to ensure the Dockerfile conforms to best practices and is free of syntax errors.
```yaml
hadolint:
  image: hadolint/hadolint:latest-debian
  stage: Quality
  before_script:
    - cd $CI_PROJECT_DIR
  script:
    - hadolint --ignore DL3008 Dockerfile
```

- `image: hadolint/hadolint:latest-debian`: Pulls the latest Debian-based version of the hadolint Docker image.
- `cd $CI_PROJECT_DIR`: Changes into the project directory.
- `hadolint --ignore DL3008 Dockerfile`: Runs hadolint to lint the Dockerfile, ignoring rule DL3008 (which requires pinning versions in `apt-get install`).

This stage fetches the latest version of Squid from GitHub releases, updates the README file, and commits the changes to the repository.
```yaml
getsquid_vars:
  stage: Get-version
  image:
    name: $CONTAINER_CLIENT_IMAGE
  artifacts:
    expire_in: 1 hour
    paths:
      - variables.env
  script:
    - apt update && apt install git curl ca-certificates -y --no-upgrade --no-install-recommends --no-install-suggests
    - export SQUID_VERSION=$(curl -LsXGET https://github.com/squid-cache/squid/releases/latest | grep -m 1 "Release" | cut -d " " -f4 | tr -d 'v')
    - echo "SQUID_VERSION=$SQUID_VERSION" > variables.env
    - echo $SQUID_VERSION
    - sed -i "s/{{SQUID_VERSION}}/$SQUID_VERSION/g" README_template.md
    - sed -i "s/{{DATE}}/$(date +%Y%m%d)/g" README_template.md
    - cp README_template.md README.md
    - git config user.email "fredbcode"
    - git config user.name "fredbcode"
    - git add README.md
    - git commit -m "README Auto update [skip ci]" || true
    - git push https://$GITLAB_TOKEN@gitlab.com/fredbcode-images/squid.git HEAD:master || true
```

- `image: name: $CONTAINER_CLIENT_IMAGE`: Specifies the Docker image to be used for this job.
- `curl -LsXGET ... | grep -m 1 ...`: Fetches the latest Squid version from the GitHub releases page.
- `echo "SQUID_VERSION=$SQUID_VERSION" > variables.env`: Writes the latest Squid version to the variables.env file, which is kept as an artifact.
- `sed -i "s/{{SQUID_VERSION}}/$SQUID_VERSION/g" README_template.md`: Replaces the {{SQUID_VERSION}} placeholder in README_template.md with the actual latest Squid version.
- `git add README.md`, `git commit -m "README Auto update [skip ci]" || true`, and `git push https://$GITLAB_TOKEN@gitlab.com/fredbcode-images/squid.git HEAD:master || true`: Commit the changes and push them to the repository; the `|| true` keeps the job from failing when there is nothing to commit or push.

This stage builds the Docker images for both ARM and AMD64 architectures. The built images are then tagged and pushed to Docker Hub.
```yaml
docker-hub-build:
  stage: Docker-hub-build
  image: docker:dind
  needs:
    - getsquid_vars
  artifacts:
    expire_in: 2 hours
    paths:
      - $CI_PROJECT_DIR
  timeout: 3 hours
  before_script:
    - docker login -u "$DOCKER_HUB_USER" -p "$DOCKER_HUB_TOKEN" $DOCKER_HUB_REGISTRY
  script:
    - source variables.env
    - docker build --build-arg SQUID_VERSION=$SQUID_VERSION --pull -t $CONTAINER_BUILD_NOPROD_NAME_AMD64 .
    - docker push $CONTAINER_BUILD_NOPROD_NAME_AMD64
```

- `docker login -u "$DOCKER_HUB_USER" -p "$DOCKER_HUB_TOKEN" $DOCKER_HUB_REGISTRY`: Logs into Docker Hub using the provided user and token.
- `docker build --build-arg SQUID_VERSION=$SQUID_VERSION --pull -t $CONTAINER_BUILD_NOPROD_NAME_AMD64 .`: Builds the Docker image from the Dockerfile in the current working directory, passing the Squid version as a build argument and tagging the image with the name in $CONTAINER_BUILD_NOPROD_NAME_AMD64.
- `docker push $CONTAINER_BUILD_NOPROD_NAME_AMD64`: Pushes the built image to Docker Hub.

Similar commands are used for the ARM architecture in the `docker-hub-build-arm` job.
This stage runs tests on the built Docker images to verify they work as expected. Steps are similar for both ARM and AMD64 architectures.
```yaml
docker-hub-test:
  stage: Docker-hub-test
  extends: .services-amd64
  before_script:
    - apt update && apt install -y curl --no-upgrade --no-install-recommends --no-install-suggests
  script:
    - export https_proxy=http://$CONTAINER_TEST_NAME:3128 && curl -k https://www.google.fr
  variables:
    HOSTNAME: squidpipeline
  needs: ["docker-hub-build"]
```

- `extends: .services-amd64`: Extends the `.services-amd64` template, which defines shared configuration for services.
- `export https_proxy=http://$CONTAINER_TEST_NAME:3128 && curl -k https://www.google.fr`: Tests that the proxy works by setting the https_proxy environment variable and curling the Google homepage. If the container cannot access the web page via the proxy, this command fails and the job stops.

The images confirmed to be working are then properly tagged and pushed to Docker Hub. Steps are similar for both ARM and AMD64 architectures.
```yaml
push-docker-hub:
  stage: Docker-hub-pushtag
  image: docker:dind
  needs:
    - docker-hub-test
    - getsquid_vars
  before_script:
    - docker login -u "$DOCKER_HUB_USER" -p "$DOCKER_HUB_TOKEN" $DOCKER_HUB_REGISTRY
  script:
    - source variables.env
    - docker pull $CONTAINER_BUILD_NOPROD_NAME_AMD64
    - docker tag $CONTAINER_BUILD_NOPROD_NAME_AMD64 $HUB_REGISTRY_IMAGE:$SQUID_VERSION-amd64
    - docker push $HUB_REGISTRY_IMAGE:$SQUID_VERSION-amd64
    - docker tag $CONTAINER_BUILD_NOPROD_NAME_AMD64 $HUB_REGISTRY_IMAGE:latest-amd64
    - docker push $HUB_REGISTRY_IMAGE:latest-amd64
    - docker tag $CONTAINER_BUILD_NOPROD_NAME_AMD64 $HUB_REGISTRY_IMAGE:latest
    - docker push $HUB_REGISTRY_IMAGE:latest
  variables:
    GIT_STRATEGY: none
  only:
    - master
```

- `docker pull $CONTAINER_BUILD_NOPROD_NAME_AMD64`: Pulls the Docker image to be tagged and pushed.
- `docker tag $CONTAINER_BUILD_NOPROD_NAME_AMD64 $HUB_REGISTRY_IMAGE:$SQUID_VERSION-amd64`: Tags the Docker image with the Squid version and the architecture.
- `docker push $HUB_REGISTRY_IMAGE:$SQUID_VERSION-amd64`: Pushes the tagged image to Docker Hub.
- `GIT_STRATEGY: none` and `only: master`: The job does not need the repository contents and runs only on the master branch.

Similar commands are used for the `push-docker-hub-arm` job.
In these final stages, the ChatGPT analysis takes place and the README is updated on Docker Hub.
In chatgpt_analysis, the job uses OpenAI's GPT API to generate human-like text explaining how each job in the pipeline works, then stores the result as an artifact (a Markdown file) and sends it to the specified server via SSH.
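The chatgpt_analysis job itself is not quoted in this article; a minimal sketch of what such a job could look like, assuming OpenAI's chat completions endpoint and a hypothetical `$OPENAI_API_KEY` variable (model name and prompt are placeholders):

```yaml
chatgpt_analysis:
  stage: Docs
  image: $CONTAINER_CLIENT_IMAGE
  artifacts:
    paths:
      - analysis.md
  script:
    # Build the JSON request with jq, send it to the API,
    # and extract the generated Markdown from the response.
    - |
      jq -n --arg p "Explain each job in this pipeline: $(cat .gitlab-ci.yml)" \
        '{model: "gpt-4", messages: [{role: "user", content: $p}]}' \
      | curl -s https://api.openai.com/v1/chat/completions \
          -H "Authorization: Bearer $OPENAI_API_KEY" \
          -H "Content-Type: application/json" \
          -d @- \
      | jq -r '.choices[0].message.content' > analysis.md
```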
In update_dockerhub_readme, the README is posted to Docker Hub using its REST API.
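As a sketch of how such a job could work, the common pattern is to log in to Docker Hub's v2 API for a JWT, then PATCH the repository's `full_description`; `$DOCKER_HUB_USER` and `$DOCKER_HUB_TOKEN` come from the pipeline, while `$HUB_REPO` (namespace/name) is a hypothetical variable:

```yaml
update_dockerhub_readme:
  stage: Docs
  image: $CONTAINER_CLIENT_IMAGE
  script:
    # Log in to the Docker Hub API to obtain a JWT (endpoint assumed).
    - |
      TOKEN=$(jq -n --arg u "$DOCKER_HUB_USER" --arg p "$DOCKER_HUB_TOKEN" \
                '{username: $u, password: $p}' \
              | curl -s -H "Content-Type: application/json" -d @- \
                  https://hub.docker.com/v2/users/login \
              | jq -r .token)
    # PATCH the repository's full_description with the README contents.
    - |
      jq -n --arg d "$(cat README.md)" '{full_description: $d}' \
      | curl -s -X PATCH -H "Content-Type: application/json" \
          -H "Authorization: JWT $TOKEN" -d @- \
          "https://hub.docker.com/v2/repositories/$HUB_REPO/"
```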
In the provided YAML file, several environment variables are referenced. Here are some of them:
- `$CI_PROJECT_DIR`: The full path where the repository is cloned and where the job is being run.
- `$CI_BUILDS_DIR`: Directory where all builds are run; defined by GitLab Runner.
- `$CI_COMMIT_BRANCH`: The branch of the latest commit.
- `$CONTAINER_CLIENT_IMAGE`: Docker image used for client tasks.
- `$DOCKER_HUB_USER`: User credentials for Docker Hub.
- `$DOCKER_HUB_TOKEN`: Token for Docker Hub authentication.
- `$GITLAB_TOKEN`: Token for GitLab authentication.
- `$HUB_REGISTRY_IMAGE`: Image registry details for Docker Hub.
- `$CONTAINER_BUILD_NOPROD_NAME_AMD64`: Tag for non-production Docker image builds for the AMD64 architecture.
- `$CONTAINER_BUILD_NOPROD_NAME_ARM`: Tag for non-production Docker image builds for the ARM architecture.

These variables define images, tags, paths, credentials, and other information the jobs need to complete their operations. They can be loaded from the GitLab project settings, defined in script steps, or loaded from a file (like the variables.env created in the getsquid_vars job).
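Variables can also be declared directly at the top level of `.gitlab-ci.yml`; for illustration only, with hypothetical values rather than the project's actual settings:

```yaml
variables:
  DOCKER_HUB_REGISTRY: "docker.io"             # registry endpoint (assumed value)
  CONTAINER_CLIENT_IMAGE: "debian:stable-slim" # client image (assumed value)
```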
Jobs in the pipeline can depend on one another through GitLab's `needs:` keyword. For example, docker-hub-test needs the docker-hub-build job to complete first because it depends on the Docker image built in that job. Similarly, the update_dockerhub_readme job needs the getsquid_vars job in order to update the README with the latest Squid version. Jobs run in parallel if they have no explicit dependencies through the `needs:` keyword.
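A minimal illustration of this dependency graph behavior, with hypothetical job names:

```yaml
stages: [build, test]

build-a:
  stage: build
  script:
    - echo "building A"

build-b:
  stage: build
  script:
    - sleep 60

# test-a starts as soon as build-a finishes,
# even while build-b is still running.
test-a:
  stage: test
  needs: ["build-a"]
  script:
    - echo "testing A"
```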
Jobs in this pipeline produce various outputs and artifacts. For example, the getsquid_vars job produces an artifact, a file named variables.env containing the latest Squid version, which is used in several later stages.
In the chatgpt_analysis job, the artifact is a Markdown file containing an explanation of the CI/CD jobs. This file is also converted to HTML and sent to a remote server.
In docker-hub-build, Docker images are built for the
latest Squid version and are pushed to Docker Hub. These Docker images
are the primary artifacts produced by this pipeline.
The last commit, "README Auto update [skip ci]", updated the README with the latest Squid version; the [skip ci] marker in the commit message keeps that push from triggering another pipeline, so the update stays accurate without unnecessarily re-running all the jobs.