When I was starting out on my software development path, I landed on a handy little trick.
At the root of the project, I would add a new file, deployment.md.
In this file, I would write out every step I took to deploy the code, right down to the SSH command:

```
ssh [email protected]
# Use password: secret
```
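To give a sense of what I mean, a minimal version of such a file might have looked like this. The specific commands and service names here are invented for illustration; the point is the rote, step-by-step nature of it:

```markdown
# deployment.md

1. SSH into the server (credentials above).
2. Pull the latest code: `git pull origin master`
3. Install dependencies: `npm install`
4. Restart the app: `sudo service myapp restart`
5. Open the site in a browser and confirm it loads.
```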
This system worked quite well. After all, I was a lone wolf and everything I deployed needed to work on only one server.
Of course, nowadays the systems we write are not as trivial:
- They require coordination across teams
- Security is a key concern
- Downtime is unacceptable
- Different services run on different servers
With these new conditions, I have learned to embrace a new concept: automated deployment pipelines.
In this entry, we shall look at the benefits you can expect from an automated deployment pipeline.
Feedback to all teams involved
In deployment, just as in development, you will come across errors, bugs as it were. You may think this is a problem; I don’t. Such is the nature of the world. The real problem is not knowing what is going wrong.
Automation helps because the bug happens every single time, not just when a sloppy developer pushes their code. This makes it far easier to diagnose, and thus permanently fix, the issue.
Furthermore, every team, formally or otherwise, has its resident DevOps expert. But what happens if that person gets sick and the system goes down?
A proper pipeline is a crystallization of this professional’s know-how; it can be used even in their absence. Even better, other devs can explore what they did and add it to their own knowledge.
Documentation is a whole lot of fun, isn’t it?
Without a deployment pipeline, there must be documentation guiding the rest of the team on:
- The steps to deploy
- How they will know they have been successful
- The various error states and how to recover
Even if you are able to do this successfully, the documentation quickly decays, and there is no obvious way of seeing it happen.
A CD pipeline is in tune with your code. The moment it breaks, the code can’t go into production. This means:
- The deployment pipeline will always be up to date
- The steps are self-documenting
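As a sketch of what this self-documentation looks like in practice, here is a minimal pipeline definition. The syntax is GitHub Actions, and the job names and commands are illustrative assumptions, not a prescription:

```yaml
# .github/workflows/deploy.yml
name: deploy
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test   # the pipeline halts here if tests fail

  deploy:
    needs: test                   # the code cannot ship unless tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh  # every deployment step lives in version control
```

Because this file sits in the repository alongside the code, any change that breaks deployment breaks the build immediately, which is exactly what keeps it up to date.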
Basically unit tests on steroids.
Free up time
To carry out a successful deployment, the developer needs a working knowledge of:
- Unix commands
- Proxies and load balancing
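To give a sense of what “working knowledge” means here, even a basic setup involves something like the following nginx reverse-proxy configuration. The server addresses and ports are invented for illustration:

```nginx
# /etc/nginx/conf.d/myapp.conf
upstream app_servers {
    server 10.0.0.11:3000;   # app instance 1
    server 10.0.0.12:3000;   # app instance 2
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_servers;        # load-balance across the instances
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```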
Beyond trivial websites, the work is not child’s play.
Yet after the first time, it is very repetitive. This creates a dilemma: it is bad for you, since you are paying expensive staff to do rote work, and bad for the professional, who gets to basically bang their head on the keyboard every day.
A deployment pipeline frees up the devs to work on high-value creative work. You will notice this in the form of higher engagement from them.
Easy to verify
Suppose the work was outsourced, and the consultants delivered a working system together with the source code.
What happens if for some reason it went down and you need to restart it?
Sure, you can always call them back in and trust that, somehow, the developer who worked on it is still employed with them and has retained a working knowledge of your system.
Quite the gamble my friend.
Alternatively, they could show you during the demo which button to push to bring the whole system right back up!
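With today’s containerized setups, that “button” can literally be a single command. Assuming, for the sake of illustration, that the consultants handed over a Docker Compose file describing the system:

```shell
# Bring the whole system back up, in the background
docker compose up -d

# Verify every service is running
docker compose ps
```

No calls, no gambles: the knowledge of how to start the system lives in the repository, not in someone’s head.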
In conclusion, automated deployment will make your life much easier. With the rising popularity of containerization and development of great orchestration tools like Kubernetes, you really have no excuse to still be doing manual deployments.
How do you deploy software in your own organization?