Designing a Go Continuous Delivery Pipeline

In this post, we will design a Go continuous delivery pipeline that will automatically test, build, and deploy Go applications to Kubernetes.

This post is the result of trying to put into place a process to enable best practices for developing Go applications for production deployment.

What are the best practices for developing applications you might ask?

I started by reading two books:

The first thing that stood out to me in both of these books is that they outlined exactly the problems I had been experiencing while developing applications.

Some of these problems included fragile monolithic applications that required several months of ramp-up time before a developer could make code changes.

Another problem was manual and error-prone deployments.

One big issue was that we would spend weeks getting the application to work in our QA environment, only to find that it would not work when we deployed the same application version to production.

By putting in place the best practices described in these books, we have eliminated most of these problems.

Deployments are now smooth and put much less stress on the team.

I highly recommend that you get these books and know them inside and out if you want your development experience to be less stressful.

Version Control

Central to our discussion on building a pipeline is version control.

Everything that we do with our application needs to be committed to a version control system like Git.

This will include our application source code, continuous deployment pipeline configuration, documentation and environment configuration.

This ensures that we can easily configure environments for testing, roll back our deployments faster, and more.

I will not go into too much detail in this post on the best practices for using version control, as that will most likely be the subject of my next post.

For now, just know that everything will be committed to version control.

Our Deployment Pipeline in a Nutshell

We need to design a pipeline that will perform some steps in order.

First, we have to test our application.

Testing

There are many different types of testing.

The first type of testing that we need is unit testing.  Unit tests exercise the code in isolation.  These tests should be very fast and should not require any interaction with other services like databases.
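For illustration, here is a minimal sketch of a unit test; the Score function and package name are made up for this example and are not from the actual project:

    package game

    import "testing"

    // Score is a stand-in for one of the application's own pure functions.
    func Score(kills, deaths int) int {
        return kills*10 - deaths*5
    }

    // TestScore runs in isolation: no network, no database, just fast checks
    // of the function's logic. The whole package runs with: go test ./...
    func TestScore(t *testing.T) {
        if got := Score(3, 1); got != 25 {
            t.Errorf("Score(3, 1) = %d, want 25", got)
        }
    }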

Next, we need to perform component testing.  This is also known as integration testing.

Component testing is where we test to ensure that the code interacts well with other components.

This is where we would test that the code works as expected with a database backend.
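As a sketch of what such a component test might look like, assuming the pipeline supplies a throwaway database through a hypothetical TEST_DATABASE_URL variable (the Postgres driver and names here are placeholders, not the project's actual code):

    package game

    import (
        "database/sql"
        "os"
        "testing"

        _ "github.com/lib/pq" // example Postgres driver; any driver works the same way
    )

    // TestDatabasePing is a component (integration) test: it talks to a real
    // backing service instead of mocking it out, so the pipeline must provide one.
    func TestDatabasePing(t *testing.T) {
        dsn := os.Getenv("TEST_DATABASE_URL")
        if dsn == "" {
            t.Skip("TEST_DATABASE_URL not set; skipping component test")
        }

        db, err := sql.Open("postgres", dsn)
        if err != nil {
            t.Fatalf("open database: %v", err)
        }
        defer db.Close()

        if err := db.Ping(); err != nil {
            t.Fatalf("database not reachable: %v", err)
        }
    }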

Finally, we can utilize deployment testing.

This testing ensures that a deployment of the application actually worked.
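For example, a deployment test can be as simple as hitting a health endpoint on the freshly deployed service; the DEPLOY_URL variable and /health path below are assumptions made for this sketch:

    package deploytest

    import (
        "net/http"
        "os"
        "testing"
        "time"
    )

    // TestDeploymentHealthy runs after the pipeline deploys the container and
    // verifies that the new version is actually serving traffic.
    func TestDeploymentHealthy(t *testing.T) {
        baseURL := os.Getenv("DEPLOY_URL")
        if baseURL == "" {
            t.Skip("DEPLOY_URL not set; skipping deployment test")
        }

        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get(baseURL + "/health")
        if err != nil {
            t.Fatalf("could not reach deployment: %v", err)
        }
        defer resp.Body.Close()

        if resp.StatusCode != http.StatusOK {
            t.Fatalf("expected 200 OK from /health, got %d", resp.StatusCode)
        }
    }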

The testing will take place in different parts of our pipeline.

Building

Building our application is naturally part of the pipeline.

But we need to consider how to build our application.

We will compile our Go application as always, but how do we take the resulting binary and deploy it easily in an automated fashion?

Docker containers are our answer.

During the build phase, we need to take our compiled binary and create a Docker container to deploy it.

The first article of this series covered how to do that using a small container footprint: Deploying Go applications using Docker containers

This will need to be part of our pipeline as well.
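As a small illustration of the compile step (the package and variable names are made up), the pipeline can also stamp the release version into the binary with -ldflags before the Docker image is built:

    package main

    import "fmt"

    // Version is overwritten at build time by the pipeline, for example:
    //   CGO_ENABLED=0 go build -ldflags "-X main.Version=1.2.3" -o app .
    // The default value makes local, un-versioned builds easy to spot.
    var Version = "dev"

    func main() {
        fmt.Println("starting application, version:", Version)
        // ... start the web API here ...
    }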

Finally, during our build phase, we need to generate any documentation for the application and put it in the right place.

Since we are talking about creating a pipeline for Go applications, we will be using godoc for our documentation.

This will parse our Go code and generate HTML documentation that we can host somewhere for easy reference.
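godoc only needs well-formed doc comments to do this. Here is a hypothetical example of the convention (the package and function are placeholders):

    // Package scores stores and retrieves player score data for the game API.
    package scores

    // HighScore returns the best score recorded for the given player, or zero
    // if the player has no recorded scores. godoc renders this comment as the
    // function's documentation in the generated HTML.
    func HighScore(player string) int {
        // ...
        return 0
    }

Running godoc with its -http flag (for example, godoc -http=:6060) then serves this documentation, or the generated HTML can be captured and hosted wherever is convenient.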

If you are developing a web API in Go (which in this series we will be), then you will need to document your API.

I will be documenting my APIs using GoSwagger, which builds Swagger API specification docs by parsing our code.

The documentation will be hosted inside the container that we build containing our application.

This makes it easy to write other applications that interact with our API.
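For reference, a GoSwagger route annotation looks roughly like the sketch below; the endpoint, tags, and response names are placeholders, and GoSwagger's swagger generate spec command then turns annotations like these into the spec that ships inside the container:

    // Package api Manage Game Data API.
    //
    // swagger:meta
    package api

    import "net/http"

    // swagger:route GET /scores/{player} scores getPlayerScore
    //
    // Returns the high score recorded for a single player.
    //
    // Responses:
    //   200: scoreResponse
    //   404: notFoundResponse
    func getPlayerScore(w http.ResponseWriter, r *http.Request) {
        // ... handler implementation ...
    }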

Deploying

Now that testing and building are complete, we need to deploy our application.

Our application is now in a Docker container, and we need to deploy it somewhere for deployment testing and acceptance testing.

In my environment, I set up a local Kubernetes cluster that acts as my QA environment for testing applications before they go to “production.”

The production environment is Google Cloud and will be accessible to the world.

Our pipeline will need to take the Docker container that we built and deploy it automatically to our Kubernetes cluster.

If the application is already deployed, then the pipeline needs to update the image so that the latest version (the version we just built) is deployed.
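There is more than one way to wire up this step (the pipeline could simply shell out to kubectl). As a sketch, here is how a small Go helper using the client-go library could point an existing Deployment at the newly built image; the namespace, deployment name, and image tag are placeholders:

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build client configuration from the kubeconfig the pipeline provides.
        config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Fetch the existing Deployment (placeholder namespace and name).
        deployments := clientset.AppsV1().Deployments("default")
        dep, err := deployments.Get(context.TODO(), "managegamedata", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // Swap in the image we just built and push the update; Kubernetes then
        // rolls out the new version for us.
        dep.Spec.Template.Spec.Containers[0].Image = "registry.example.com/managegamedata:1.2.3"
        if _, err := deployments.Update(context.TODO(), dep, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("deployment updated to the new image")
    }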

Pipeline Design First Draft

Now that we have all the pieces we will need for our pipeline we can sketch out what our pipeline will look like.

I use LucidChart for quickly drawing up diagrams.

Let’s look at the sketch I made for this pipeline:

Here we can see that we start out with the Unit Tests.

If any of the unit tests fail we get immediate feedback.

This is a very important part of our DevOps process.

We don’t have to wait until the application fails in our QA environment to notice a typo bug.

Since our unit tests are fast we get immediate feedback that something is wrong.

Once unit tests complete successfully, we start the component tests.

These can take a little longer to run since we have to stand up and tear down mock services for testing.
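One common way to handle that setup and teardown in Go is a package-level TestMain; the startTestServices helper below is hypothetical and stands in for whatever actually launches the mock services:

    package game

    import (
        "os"
        "testing"
    )

    // TestMain runs once for the whole package: it stands up the supporting
    // services before the component tests and tears them down afterwards.
    func TestMain(m *testing.M) {
        stop := startTestServices()
        code := m.Run() // run every test in the package
        stop()          // tear the services back down
        os.Exit(code)
    }

    // startTestServices is a placeholder for whatever actually launches the
    // mock services (a test database container, a fake API, and so on).
    // It returns a function that tears everything down again.
    func startTestServices() func() {
        // ... start containers or in-process fakes here ...
        return func() { /* ... stop them here ... */ }
    }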

After component tests pass we proceed to build our documentation.

Next we move on to the promote phase of the pipeline.

All of our work so far has been done in a ‘develop’ branch, which is used for developing changes to the code.

After all the preliminary testing is complete we can merge our changes into the master branch and tag it with an automatically generated version number.

This will be covered in the next post in this series where I talk about version control and our pipeline.

Once our application has been promoted to the master branch, we compile our code into a binary and build a Docker container.

Finally, we deploy our application to Kubernetes.

Armed with this pipeline we can ensure that our application will be robust and easy to maintain.

Conclusion

Now that we have our pipeline designed, we can go about coding it for Concourse.

For the adventurous, here is a peek at the pipeline as it exists today:

You can view the source code here: https://git.admintome.com/bill/managegamedata

If you have enjoyed this post, please subscribe to the AdminTome Blog Newsletter and comment below.

Go Programming Tutorial

I have created a free tutorial for learning Go available at AdminTome Online Training.  Check it out and grow your development knowledge now!

1 thought on “Designing a Go Continuous Delivery Pipeline”

  1. Pingback: Designing a Go Continuous Delivery Pipeline | AdminTome Blog – Golang News

Leave a Comment

you're currently offline