The ideal web software deployment process lets you quickly deploy features as they are ready and tested. I previously wrote about continuous deployment, and most of those thoughts still hold. I do think continuous deployment has challenges beyond automated testing; in this post I want to focus on the human feedback loop as opposed to the automated one.
We hit each of the above cases at Offgrid-Electric, and all of them point toward a staging environment. In the normal feature-development flow there is a point where you want to share your work, which means it should be deployed and available to some set of users. Let's explore how a single staging server doesn't solve everything.
The first thing people often do for QA and stakeholders is set up a staging server. When a feature is complete, you can merge it to staging and deploy for feedback. While this flow works fine for a small number of features and developers, it runs into issues as you scale the team.
At this point, teams sometimes set up multiple staging servers or use tools like xip.io to share developers' boxes. These solutions are bandaids that only solve small pieces of the problem. In the end, you want to be able to take any branch, dynamically create a deployed environment, and keep it around as needed. This goes beyond the simple continuous deployment of a `dev` branch to the staging server and `master` to production. How do you scale continuous deployment?
With the cloud and related tools, it isn't much harder to go from continuous deployment of staging and production to dynamic, branch-based staging servers. We do this with Docker and AWS tooling, but there are many ways to automate configuring an entire environment and deployment. I will cover some of the challenges a bit later, as there are a number of "gotchas".
A developer starts a feature branch from `master`. They work on the feature, say `feature/new_cool_page`, and check it into git. The team CI (Continuous Integration) server is already running the tests on each commit. At some point, the developer is ready to share the work and get external feedback from stakeholders, the PM team, and QA. They run a command such as `create_branch_staging`, which stands up a new server deployment using their branch and returns a URL based on the branch name: `new_cool_page.staging-domain.com`. From then on, future git pushes deploy to the feature branch if CI finds a matching staging environment, or you can continue to deploy to the feature staging server manually.
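A `create_branch_staging` command could be sketched as a small shell script. Everything here is a hypothetical stand-in: `deploy_stack` represents whatever provisioning step (CloudFormation, Terraform, etc.) actually builds the environment, and `staging-domain.com` is a placeholder zone:

```shell
#!/usr/bin/env sh
# Turn a git branch name into a subdomain, e.g.
# "feature/new_cool_page" -> "new_cool_page".
# Strips a leading "feature/" and replaces any remaining
# slashes with dashes so the name is DNS-friendly.
branch_to_subdomain() {
  echo "$1" | sed -e 's|^feature/||' -e 's|/|-|g'
}

# Hypothetical entry point: provision a stack for the current
# branch and print the URL the team can visit.
create_branch_staging() {
  branch="$(git rev-parse --abbrev-ref HEAD)"
  subdomain="$(branch_to_subdomain "$branch")"
  # deploy_stack is a placeholder for your real provisioning step.
  deploy_stack --name "staging-$subdomain" --branch "$branch"
  echo "https://$subdomain.staging-domain.com"
}
```

The branch-to-subdomain mapping is the only real logic; the rest is glue around whatever infrastructure automation you already have.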
Once you have a feature on a viewable staging branch, there can be a lot of feedback and discussion among the team. Often PRs with comments from devs will link to issues on the feature staging server. When everything is finally ready to go, you merge the branch to `master` and deploy to production. Destroy the feature staging environment with `destroy_branch_staging` and celebrate another victory!
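On the CI side, the "deploy only if a matching staging environment exists" check might look like the sketch below. `stack_exists` and `deploy_stack` are hypothetical wrappers around your infrastructure lookups (for example, `aws cloudformation describe-stacks`):

```shell
#!/usr/bin/env sh
# Post-test CI hook: redeploy a branch only when a feature
# staging environment for it already exists; otherwise do nothing.
deploy_if_staging_exists() {
  branch="$1"
  # Same branch -> subdomain convention as create_branch_staging.
  subdomain="$(echo "$branch" | sed -e 's|^feature/||' -e 's|/|-|g')"
  if stack_exists "staging-$subdomain"; then
    deploy_stack --name "staging-$subdomain" --branch "$branch"
    echo "deployed $subdomain"
  else
    echo "skipped"
  fi
}
```

Because the environment name is derived deterministically from the branch name, CI never needs a registry of live staging servers; the existence check against your cloud provider is the registry.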
We still manage a "shared" staging server as well, for any features that might need to sit and "bake" with additional usage for a while (think major library updates or infrastructure changes). In that case, the feature branch is just collapsed to our old `dev` branch and deployed to staging.
The ability to create multiple environments dynamically solves most of the issues with a shared staging server. It significantly increased our velocity, increased our confidence in smaller deploys, and made it simpler to prototype and throw away ideas. Beyond the developer process, it has other nice benefits as well.
While this sounds great, it does bring up some additional challenges compared to more traditional CI deployments to fixed staging and production environments. It isn't quite as simple as the prior case of deploying to a single environment that was often configured by hand and had its data smoothed over the ages. Think about all the dependencies that go into running your app and how they relate to one another, such as how you seed your staging environment data. Here are some of the challenges we faced.
- Data: feature staging environments can point at the `shared-staging` data sources. This is ideal for UI-only changes and requires far fewer resources.
- Sizing: we keep `shared-staging` very close to production, but we run far fewer web workers, with less CPU and memory, for most of our dev stacks. We can adjust if needed, and with Docker we run multiple deployments on the same servers.
- Logging and metrics: we tag `feature-staging` streams at the very least, often including the `BRANCH_NAME` for any metrics / alerts.
- Console access: `docker:console` will create a Docker task, connect it to your environment, and SSH you over. Also, to keep it simple, we rely on a lot of battle-tested AWS tooling.
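For the per-branch tagging, deriving a log-stream or metric name from a `BRANCH_NAME` environment variable set at deploy time is usually enough. The helper below is a sketch, and the AWS CLI call is illustrative only:

```shell
#!/usr/bin/env sh
# Build a per-branch log stream name from BRANCH_NAME (set by the
# deploy) so logs and alerts can be filtered by feature branch.
log_stream_for_branch() {
  # Use the first argument if given, else fall back to $BRANCH_NAME.
  echo "feature-staging-$(echo "${1:-$BRANCH_NAME}" | sed 's|/|-|g')"
}

# Illustrative usage with the AWS CLI:
# aws logs create-log-stream \
#   --log-group-name feature-staging \
#   --log-stream-name "$(log_stream_for_branch)"
```

With stream names namespaced this way, tearing down a feature environment also tells you exactly which telemetry to archive or delete.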
We worked with the very talented Trek10 to develop our production and staging infrastructure, as well as our deployment process. We leveraged a few technologies: some open source, some standard AWS tooling, and some custom scripts or services we or Trek10 developed.
The basic tech stack:
As for the detailed specifics, we will save those for another post; hopefully, it is something our partners at Trek10 will cover in much more detail soon.
In a perfect world, you would never have to think about this stuff: environments would just appear when and how you need them, with no concern for the cost and complexity of getting there. That isn't the case, but depending on your project and how early you start to automate away manual tasks, it is easier than ever to get pretty close to the ideal. Look at your project to see if it makes sense: how often do features get in each other's way? Does your organization have significant stakeholder feedback or human QA against features? If you haven't shipped anything and aren't worried about zero-downtime deployment, it isn't worth your time. I think the cost of this kind of staging and development process was too high a few years ago, but now it can be built out for many projects in a cost-effective manner, with enormous benefits. Is it the holy grail? No, but it is something I would have a hard time living without anymore.