Ship it!

In this post, I will try to shed some light on how we tackle our continuous integration and deployment workflow with GitHub Actions and OpenShift.

Sidney Widmer
dreipol


Everything was better in the old days — or so they say. Open FileZilla, drag & drop your folder full of PHP, HTML and CSS files to your server and you’re good to go. It sounds simple, but the more you think about it, the more complex it gets: blue/green deployments, asset minification and optimization, code quality checks, rollbacks and many more features are necessary for a deployment strategy of a certain size and quality.

The whole process looks roughly like this:

  1. git push to the main branch
  2. running build, test, notify and deploy jobs on GitHub Actions
  3. running an OpenShift deployment which triggers database migrations and handles assets

Since every deployment triggers a message in our internal Slack channel, a glance at said channel shows that the actions described above were triggered around 230 times in March alone. That works out to an average of 7 to 8 deployments per day across our many projects. Let’s break the overview down and see what really happens in each step.

Deployment flow visualized. Source: Own illustration.

1. Git Push

It all starts with a git push to one of the branches connected to a deployment; normally these are stage and main. We use different branching strategies depending on the project, but most of the time a developer creates a pull request from their feature branch, which is then merged after a review — a deployment is born.

2. GitHub Actions

GitHub Actions was released in 2018, and we instantly knew that it could greatly improve our deployment process. It took another year, though, for dreipol to fully commit (pun intended) to it and move away from CircleCI. In short, actions are defined in a YAML file directly in your project’s repository and allow you to trigger a series of commands after an event has occurred, in our case a push to the repository in question. The building blocks are Workflows, Events, Jobs, Steps and Actions.

We developed 12 custom actions, each of which is basically a small script run in its own Docker container. Together they form the steps in our three jobs: build, notify-slack and deploy-prod. All jobs run on the ubuntu-latest runner.
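
To make this a bit more tangible, here is a heavily trimmed-down sketch of what such a workflow file could look like. This is not our actual workflow: the job names and the reference to our internal Slack action match the setup described here, but triggers, refs and inputs are illustrative.

```yaml
# .github/workflows/deploy.yml (simplified sketch, not our production workflow)
name: ci-cd

on:
  push: # the event: the workflow runs for every push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # build the Docker image, run the test suite, push it to the registry (see below)

  notify-slack:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: dreipol/github-actions/slack@main # internal action, ref illustrative, inputs omitted

  deploy-prod:
    runs-on: ubuntu-latest
    needs: notify-slack
    steps:
      - run: echo "hand the image over to OpenShift" # placeholder, see the deploy-prod section below
```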

Actions have access to specific environment variables defined in the workflow. These are typically sourced from repository secrets, and the introduction of organization-wide secrets greatly reduced the complexity of managing these sensitive values. A typical example is the Slack API key, which allows our actions to post updates to a specific Slack channel.
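
As a small illustration, wiring such a secret into a step could look roughly like this. The secret name SLACK_API_KEY and the script path are hypothetical; the secrets context is the relevant part.

```yaml
# excerpt from a job definition (the secret name and script path are hypothetical)
steps:
  - name: Post an update to Slack
    run: ./scripts/notify-slack.sh
    env:
      SLACK_API_KEY: ${{ secrets.SLACK_API_KEY }} # provided as an organization-wide or repository secret
```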

build ->

  • Triggered for every branch
  • Check out the current branch
  • Log in to Google Cloud with the gcloud CLI and pull an existing image for the current branch (if any)
  • Build a new image from the current codebase, using the existing one as a cache where possible
  • The project’s Dockerfile includes everything necessary to run the project, such as installing frontend, backend and system dependencies and optimizing static assets
  • In the freshly built image, we trigger our test suite, which includes backend and frontend tests depending on the tech stack used in the project (thanks to service containers, triggering integration tests that rely on a database is trivial)
  • If all tests and checks (e.g. extracting licenses from third-party dependencies and validating their usage) are green, the new Docker image is pushed to our Google Cloud Registry
  • If anything fails, our own dreipol/github-actions/slack action is triggered and informs all developers about the error
  • Reacting to success or failure is again super simple thanks to GitHub’s job status check functions (see the sketch after this list)
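
To give an idea of how these steps could be wired together, here is a hedged sketch of such a build job. The registry path, image name, secret name and test command are placeholders; the caching via --cache-from and the failure() status check are the parts that mirror the description above.

```yaml
# sketch of the build job's steps (image names, paths and secrets are placeholders)
- uses: actions/checkout@v4

- name: Authenticate against Google Cloud
  run: |
    echo "$GCLOUD_SERVICE_KEY" > /tmp/key.json
    gcloud auth activate-service-account --key-file=/tmp/key.json
    gcloud auth configure-docker --quiet
  env:
    GCLOUD_SERVICE_KEY: ${{ secrets.GCLOUD_SERVICE_KEY }} # hypothetical secret

- name: Pull the existing image for the current branch (if any)
  run: docker pull eu.gcr.io/example-project/app:${GITHUB_REF##*/} || true

- name: Build a new image, reusing cached layers where possible
  run: |
    docker build \
      --cache-from eu.gcr.io/example-project/app:${GITHUB_REF##*/} \
      -t eu.gcr.io/example-project/app:${GITHUB_REF##*/} .

- name: Run the test suite inside the freshly built image
  run: docker run --rm eu.gcr.io/example-project/app:${GITHUB_REF##*/} ./run-tests.sh # placeholder test entrypoint

- name: Push the image to the registry
  run: docker push eu.gcr.io/example-project/app:${GITHUB_REF##*/}

- name: Report failures to Slack
  if: failure() # GitHub's job status check function
  uses: dreipol/github-actions/slack@main # internal action, inputs omitted
```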

notify-slack ->

  • If we detect a push to either stage or main, we’ll again use our dreipol/github-actions/slack action to notify all developers about a new incoming deployment (the condition is sketched below)
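
In workflow terms, that branch check is easily expressed as an if condition on the job; a minimal sketch, assuming the branches are called stage and main:

```yaml
# sketch: only notify for pushes to the deployment branches
notify-slack:
  runs-on: ubuntu-latest
  if: github.ref == 'refs/heads/stage' || github.ref == 'refs/heads/main'
  steps:
    - uses: dreipol/github-actions/slack@main # internal action, inputs omitted
```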

deploy-prod ->

  • Each environment has its own deployment job; typically there are deploy-prod and deploy-stage
  • Log in to Google Cloud, pull the latest image (from the build job) and push it to our OpenShift container registry hosted by aspectra (sketched below)
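
A hedged sketch of that hand-off, with registry hosts, image names and the secret made up for illustration:

```yaml
# sketch of the deploy-prod steps (registry hosts, image names and secrets are placeholders)
- name: Pull the image produced by the build job
  run: docker pull eu.gcr.io/example-project/app:main

- name: Log in to the OpenShift container registry hosted by aspectra
  run: docker login registry.openshift.example.ch -u deployer -p "$OPENSHIFT_TOKEN"
  env:
    OPENSHIFT_TOKEN: ${{ secrets.OPENSHIFT_TOKEN }} # hypothetical secret

- name: Retag and push, which lets OpenShift pick up the new image
  run: |
    docker tag eu.gcr.io/example-project/app:main registry.openshift.example.ch/example-project/app:latest
    docker push registry.openshift.example.ch/example-project/app:latest
```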

3. OpenShift

The last piece in our deployment process is OpenShift, which orchestrates and serves all of our containerized projects. Once a new image is detected (pushed by the deploy-prod job mentioned above), a new deployment is triggered. This means spinning up new pods with our updated image and rerouting incoming requests to our updated codebase. Before the actual switch is done, two important things happen:

  1. Database migrations are triggered
  2. Assets are uploaded to our managed MinIO (S3-compatible file storage) instance. Every request to one of our projects is first routed to a reverse proxy, which reroutes requests for static assets directly to the MinIO service

If any of those pre-hooks fail, or the new image can’t be run for other reasons, the deployment is automatically rolled back and the «old» version is served again. Since the deploy-prod job has access to the deployment via the OpenShift API, potential failures are reported in our Slack channel. A rough sketch of such a deployment configuration is shown below.
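
For readers who haven’t worked with OpenShift lifecycle hooks, a heavily abridged DeploymentConfig could look roughly like the following. All names and the migration command are placeholders, and our real templates contain a lot more (resources, probes, the asset upload), but it shows where the pre-hook and the image trigger live.

```yaml
# abridged OpenShift DeploymentConfig sketch (names and commands are placeholders)
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: app
spec:
  replicas: 2
  selector:
    app: app
  strategy:
    type: Rolling
    rollingParams:
      pre:
        failurePolicy: Abort # a failing hook aborts the rollout, so the old version keeps serving
        execNewPod:
          containerName: app
          command: ["python", "manage.py", "migrate"] # e.g. database migrations
  triggers:
    - type: ImageChange # a newly pushed image automatically triggers a deployment
      imageChangeParams:
        automatic: true
        containerNames: ["app"]
        from:
          kind: ImageStreamTag
          name: app:latest
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: app:latest
```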

Advantages

A big advantage of our current deployment setup is that it doesn’t care about the technologies used in our projects. As long as we push a git repository with a Dockerfile and a GitHub Actions workflow, everything will just work™. We’re currently running services with Python, Node and PHP, and they all use more or less the same actions and OpenShift templates.

Disadvantages

Every coin has two sides, and we’re no exception to this rule. There are a lot of moving parts involved, and debugging can sometimes be really challenging. If, for example, an asset is not served correctly, the cause could be anywhere: the build in the Dockerfile, a wrong image being served by OpenShift, a faulty config in our reverse proxy, or just a caching issue on the application layer.

. . .

Feel free to like this post, share it or follow me or dreipol on social media:

twitter.com/sidneywidmer
twitter.com/dreipol
www.dreipol.ch
