The defined jobs are organized in stages that broadly indicate their role in the pipeline. They follow this order:
- hadolint: Ensures proper Dockerfile syntax and best practices.
- getsquid_vars: Fetches the latest version of Squid from its GitHub repository and sets it as an environment variable.
- docker-hub-build: Builds the AMD64 Docker image and pushes it under a temporary tag.
- docker-hub-build-arm: Builds the ARM Docker image and pushes it under a temporary tag.
- docker-hub-test: Tests the functionality of the AMD64 image.
- dive: Inspects the built AMD64 image, checking its size and identifying ways to make it more efficient.
- docker-hub-test-arm: Tests the functionality of the ARM image.
- dive-arm: Inspects the built ARM image, checking its size and identifying ways to make it more efficient.
- SquidParseConfig: Verifies the Squid configuration and catches any potential errors.
- push-docker-hub: Tags the AMD64 Docker image with its Squid version and pushes it to Docker Hub.
- push-docker-hub-arm: Tags the ARM Docker image with its Squid version and pushes it to Docker Hub.
- chatgpt_analysis: Contacts the ChatGPT endpoint, which generates a step-by-step explanation of the jobs defined in this pipeline, and saves it as an artifact.
- update_dockerhub_readme: Updates the Docker Hub repository description for this image with the latest README via the Docker Hub API.

The hadolint job uses a ‘Hadolint’ Docker image to analyze this project’s Dockerfile and ensure it follows best practices and proper syntax. It’s typically the first step in the pipeline because it preemptively checks for errors and prevents flawed Docker images from being built and published.
- hadolint --ignore DL3008 Dockerfile

This command uses ‘Hadolint’, a popular Dockerfile linter, to analyze the Dockerfile. The --ignore DL3008 flag specifically tells ‘Hadolint’ to ignore the DL3008 rule, which is about pinning versions in apt-get install.
‘Hadolint’ will exit with a non-zero status code if any unignored issue is found, which will then fail the job and, by extension, the pipeline – preventing seriously flawed images from being built.
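This exit-code behavior can be sketched in plain shell. In the sketch below, `lint_clean` and `lint_broken` are stand-ins for a hadolint invocation, not the actual job script:

```shell
# A non-zero exit status from the linter fails the CI job. `lint_clean`
# and `lint_broken` are stand-ins for a hadolint run (hadolint itself
# exits non-zero when it reports findings).
lint_clean()  { return 0; }
lint_broken() { return 4; }

run_job() {
    "$1"
    status=$?
    if [ "$status" -eq 0 ]; then
        echo "lint passed: image build would proceed"
    else
        echo "lint failed (exit $status): pipeline stops"
    fi
}

run_job lint_clean
run_job lint_broken
```

Running this prints one "passed" line and one "failed" line, mirroring how a clean lint lets the pipeline continue while any finding aborts it.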
The getsquid_vars job is responsible for fetching the latest Squid version from its GitHub repository and setting the environment variable SQUID_VERSION, which is then used by jobs further down the pipeline. It also updates the README with the latest Squid version.
- export SQUID_VERSION=$(curl -LsXGET https://github.com/squid-cache/squid/releases/latest | grep -m 1 "Release" | cut -d " " -f4 |tr -d 'v')
- echo "SQUID_VERSION=$SQUID_VERSION" > variables.env

The first line fetches the latest release page of Squid from its GitHub repository using curl, parses the output with grep and cut to extract the version number, strips the leading ‘v’ with tr, and stores the result in the SQUID_VERSION variable. The second line writes the value of SQUID_VERSION to a file named variables.env, which can then be sourced by other jobs to set their own SQUID_VERSION environment variable.
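The parsing step can be illustrated offline. The canned release line below is an assumption about the page format, not the real GitHub markup, but the pipe (grep | cut | tr) is the same as in the job:

```shell
# Offline sketch of the getsquid_vars parsing step. CANNED_PAGE stands in
# for the HTML returned by the GitHub releases URL; the real page's markup
# may differ.
CANNED_PAGE='Squid Cache: Release v6.9
Squid Cache: Release v6.8'

# Take the first "Release" line, grab the fourth field, drop the 'v' prefix.
SQUID_VERSION=$(printf '%s\n' "$CANNED_PAGE" | grep -m 1 "Release" | cut -d " " -f4 | tr -d 'v')
echo "SQUID_VERSION=$SQUID_VERSION" > variables.env

# Downstream jobs recover the value by sourcing the file:
. ./variables.env
echo "$SQUID_VERSION"   # prints: 6.9
```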
The docker-hub-build and
docker-hub-build-arm jobs are responsible for building
Docker images for AMD64 and ARM, respectively. They use the
Docker-in-Docker (dind) service in GitLab CI/CD to accomplish this.
- source variables.env
- docker build --build-arg SQUID_VERSION=$SQUID_VERSION --pull -t $CONTAINER_BUILD_NOPROD_NAME .
- docker push $CONTAINER_BUILD_NOPROD_NAME

In the script section, source variables.env sets the environment variable SQUID_VERSION to ensure the Docker image is built with the right Squid version. docker build ... instructs Docker to build an image from the current project’s Dockerfile, and --build-arg SQUID_VERSION=$SQUID_VERSION passes the SQUID_VERSION value into the Dockerfile as a build-time variable. Finally, after the image is built, it is pushed to Docker Hub using docker push.
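On the Dockerfile side, a build argument is received with an ARG instruction. The excerpt below is a hypothetical illustration, not this project’s actual Dockerfile:

```dockerfile
# Hypothetical excerpt, not the project's actual Dockerfile.
FROM debian:bookworm-slim

# Receive the value passed via --build-arg SQUID_VERSION=...
ARG SQUID_VERSION

# The value is then available to build-time instructions, e.g.:
RUN echo "building Squid ${SQUID_VERSION}"
```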
The docker-hub-test and docker-hub-test-arm jobs are responsible for testing the functionality of the built Docker images. They install curl and then try to reach google.fr through the Squid proxy.
- apt update && apt install -y curl --no-upgrade --no-install-recommends --no-install-suggests
- export https_proxy=http://$CONTAINER_TEST_NAME:3128 && curl -k https://www.google.fr

The first line installs curl, a command-line tool for sending HTTP requests. The second line sets the https_proxy environment variable to route traffic through the Squid proxy and then sends a GET request to google.fr. If the Squid proxy is properly set up and working, the request should succeed with a 200 OK response; otherwise, the job fails.
The dive and dive-arm jobs are used to inspect the Docker images, specifically to determine their size and identify potential ways to make them more efficient. They do this using the ‘dive’ tool, which provides a detailed breakdown of the space taken up by each layer of the Docker image.
- docker pull $CONTAINER_BUILD_NOPROD_NAME
- dive $CONTAINER_BUILD_NOPROD_NAME

After pulling the Docker image from Docker Hub, the dive command is run to inspect it, outputting a detailed analysis of the space used by each layer.
The push-docker-hub and push-docker-hub-arm jobs are responsible for tagging Docker images and pushing them to Docker Hub. The images are tagged with the Squid version.
- source variables.env
- docker pull $CONTAINER_BUILD_NOPROD_NAME
- docker tag $CONTAINER_BUILD_NOPROD_NAME $HUB_REGISTRY_IMAGE:$SQUID_VERSION
- docker push $HUB_REGISTRY_IMAGE:$SQUID_VERSION

First, the job sources the variables.env file to import the SQUID_VERSION variable. Then it pulls the image built in the docker-hub-build job. Once the image is pulled, it’s tagged with the SQUID_VERSION using docker tag ..., then pushed to Docker Hub using docker push ....
The chatgpt_analysis job fetches a detailed analysis/explanation of each job in the pipeline from the OpenAI API. The output of this job is a Markdown file that explains each CI job in detail.
# series of commands to extract the content of the jobs in the pipeline and the last commit, and to form the JSON payload for the API call
- JOBS_CONTENT=$(cat .gitlab-ci.yml gitlabci/*)
- LAST_COMMIT=$(git log -1 --pretty=format:"%h %s%n%b")
- CONTENT=" ... " # Text content for ChatGPT to analyze
- JSON_CONTENT=$(jq -n --arg model "gpt-4" --arg content "$CONTENT" ...)
# call to `openai` API
- RESPONSE=$(curl -X POST https://api.openai.com/v1/chat/completions ...)
# Process and write the API response to a markdown file
- echo $RESPONSE > chatgpt_analysis_$(date +%Y%m%d).md

The update_dockerhub_readme job updates the README file for this Docker image’s Docker Hub repository.
# forming the JSON payload
- README_CONTENT=$(cat README.md)
- PAYLOAD=$(jq -n --arg desc "$README_CONTENT"...)
# API call to log in to Docker Hub and obtain a token
- TOKEN=$(curl -v -s -X POST -H "Content-Type:application/json" -d ...)
# API call to update the Docker Hub repository's README
- curl -X PATCH -H "Authorization:JWT $TOKEN" -H "Content-Type:application/json" -d "$PAYLOAD" ...

It reads the contents of the README.md file, forms a JSON payload from the README content, fetches a login token from the Docker Hub API, and then uses that token and the JSON payload to update the README via the Docker Hub API.
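The payload-forming step can be sketched without jq. The real job uses jq precisely because it JSON-escapes quotes and newlines in the README; the naive printf below does not, and the field name full_description is an assumption based on the Docker Hub v2 repositories API:

```shell
# Naive sketch of forming the Docker Hub description payload. The real
# job uses jq (jq -n --arg desc "$README_CONTENT" ...) because jq escapes
# the content safely; printf only works for content without quotes,
# backslashes or newlines.
README_CONTENT="A Squid proxy image, rebuilt on every release."
PAYLOAD=$(printf '{"full_description":"%s"}' "$README_CONTENT")
echo "$PAYLOAD"
```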
The pipeline uses several environment variables:
- SQUID_VERSION specifies the version of Squid the Docker image should be built with. It is dynamically set by the getsquid_vars job using echo "SQUID_VERSION=$SQUID_VERSION" > variables.env.
- DOCKER_HUB_USER, DOCKER_HUB_TOKEN, DOCKER_HUB_PASSWORD, DOCKER_HUB_REGISTRY and SSH_NOSTROMO_KEY hold the Docker Hub credentials and the SSH key used for scp. These should be set in the CI/CD variables for the GitLab project.
- CONTAINER_BUILD_NOPROD_NAME, CONTAINER_BUILD_NOPROD_NAME_ARM, CONTAINER_BUILD_NOPROD_NAME_AMD64, CONTAINER_CLIENT_IMAGE and CONTAINER_TEST_NAME all relate to the Docker build process and the naming convention of the built images.

References to .gitlab-ci.yml and variables.env are made for reading or sourcing environment variables used in building and pushing the Docker images.
Most jobs have dependencies defined by the needs: keyword.
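As an illustration, such a dependency might be declared as follows (a hypothetical excerpt; the real .gitlab-ci.yml may differ):

```yaml
# Hypothetical excerpt: docker-hub-test starts as soon as
# docker-hub-build finishes, instead of waiting for its whole stage.
docker-hub-test:
  needs:
    - docker-hub-build
```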
- docker-hub-build and docker-hub-build-arm need the getsquid_vars job, because they use the SQUID_VERSION variable it sets.
- docker-hub-test and docker-hub-test-arm depend on their respective build jobs, since they need the images built by those jobs.
- push-docker-hub and push-docker-hub-arm depend on getsquid_vars for the Squid version and on their respective test jobs, to ensure that only properly functioning Docker images are pushed.

Various artifacts are generated at different stages of the pipeline:
- variables.env: Created by the getsquid_vars job; contains the SQUID_VERSION variable.
- chatgpt_analysis_YYYYmmdd.md: Created by the chatgpt_analysis job; provides an in-depth explanation, in Markdown format, of the entire pipeline, as generated by the ChatGPT model.
- $CI_PROJECT_DIR and $CI_PROJECT_DIR/chatgpt_analysis*: Artifacts from the chatgpt_analysis job which provide an archive of the entire project structure and the generated ChatGPT analysis files. These artifacts are kept for one month.

The commit 837a374 README Auto update [skip ci] auto-updates the README with the latest Squid version. This update is made in the getsquid_vars job: after fetching the latest Squid version, README_template.md is updated with that version and the date, then the template is copied to README.md and committed to the git repository. The [skip ci] flag tells GitLab CI to skip running the pipeline for this specific commit.