5 useful skills for the beginning of a DevOps Engineer’s journey
Over the past few years, businesses have been investing more and more in DevOps, a field that has long since outgrown its niche. Many young people without experience wonder what skills they need in order to call themselves DevOps Engineers. Although the field is very broad, requiring a kind of expertise in many areas simultaneously, there are five skills that I consider the absolute foundation for a DevOps Engineer.
Developing them actually provides a stable grounding for future professional advancement.
Understanding the DevOps process
To understand what DevOps is, it helps to strip the title DevOps Engineer down to simply DevOps. But DevOps is not a person. DevOps is the entire process upon which our projects are built, and a DevOps Engineer is responsible for implementing that process and continuously improving it.
The process is endless and consists of alternating stages of Plan, Code, Build, Test, Deploy, Operate, and Monitor.
Of course, the Code stage doesn’t necessarily mean that a DevOps Engineer needs to be a developer (those who are, we sometimes call “Full Stack DevOps”). But DevOps Engineers should pay attention to which tools are being used, recommend improvements, and create a place where all teams can integrate their code. In the same way, the Monitor stage doesn’t refer only to the application: we should monitor the whole process, project, and team, and look for areas we could improve to make life better for everyone.
This of course means that many of these areas are improved in collaboration with PMs, SMs, Architects, Team Leads, and even individual team members who are the main beneficiaries of the process.
For DevOps, the primary customer is the team, which is expected to benefit from the process. And a benefit to the team is a benefit to their customer as well.
Automation with scripts
A DevOps Engineer must be able to automate their work. Some activities need to be done once a day, others only once a year. Automation doesn’t mean that, as DevOps Engineers, we want to indulge our ‘laziness’. On the contrary, an important part of the whole process is minimising the situations where something can go wrong.
Imagine a situation where, once every six months, a project needs a specific certificate generated on 20 servers. The documentation was created four years earlier, and everything is described fairly precisely. Or so you think. So far, the job has been done by one person, manually, after working hours (to minimise the impact of changing certificates). On top of that, this person has to keep an eye on the certificate expiration dates themselves.
There are plenty of red flags already at this stage:
- What if they forget to keep track of the dates?
- What if the calendar notification doesn’t alert them?
- What if they get sick? Will someone else cover the documentation that hasn’t been kept up to date?
- Does anyone else even have access to all the servers?
Situations like this shouldn’t happen, but unfortunately, they do, mostly when there is a lack of automation skills.
With a small up-front investment to cover this case, a DevOps Engineer could prepare a script that checks the expiration dates of the certificates, generates new ones, and propagates them to all servers at once through an Infrastructure as Code tool. Then no one needs to worry about something going wrong along the way.
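The expiry-check half of such a script can be sketched in a few lines of shell using `openssl`. This is a minimal illustration only: the certificate path and the 30-day renewal threshold are hypothetical, and the actual renewal and propagation to the 20 servers (e.g. via an Infrastructure as Code tool) is left as a comment.

```shell
#!/usr/bin/env bash
# Sketch: check whether a certificate expires within a threshold.
# CERT path and THRESHOLD_DAYS are illustrative assumptions.
set -euo pipefail

CERT="${1:-/tmp/demo-cert.pem}"
THRESHOLD_DAYS=30

# For this demo, generate a self-signed certificate if none exists.
if [ ! -f "$CERT" ]; then
  openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/demo-key.pem -out "$CERT" \
    -days 90 -subj "/CN=demo.example.com" 2>/dev/null
fi

# openssl's -checkend takes seconds: exit 0 if the cert is still valid
# beyond that window, 1 if it expires within it.
if openssl x509 -checkend $((THRESHOLD_DAYS * 86400)) -noout -in "$CERT" >/dev/null; then
  echo "OK: certificate valid for more than ${THRESHOLD_DAYS} days"
else
  echo "RENEW: certificate expires within ${THRESHOLD_DAYS} days"
  # A real script would now regenerate the certificate and push it to
  # all servers via an Infrastructure as Code tool -- omitted here.
fi
```

Dropped into a scheduled job, a check like this removes both the human calendar-watching and the after-hours manual work described above.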
Continuous Integration / Continuous Delivery (CI/CD)
CI/CD is a practice that forms another foundation of the whole DevOps process. It is responsible for taking code from integration through to deployment in the final environment. Even so, it takes DevOps Engineers some time to understand these practices, even though everything is described on the web. Unfortunately, what seems clear to many is often highly simplified; in reality, it can be quite complicated.
Many factors affect the quality of these practices, and ultimately, the quality of the project as well.
Early detection of defects, by checking integration and code quality and running automated tests, not only saves developers’ time but also makes it far less likely that what is deployed to an environment will fail. And in the case of any issues, the development team will know about them as soon as possible.
CI/CD relies heavily on the ability to automate through scripting, as there can be no mistakes in the process the application goes through. The source code must also be properly integrated and built. Once a finished build lands in a test environment, we cannot allow it to be modified before being sent on to the next environment after testing has been completed.
The compiled code must always pass between environments without recompilation, and the environments themselves must be as close to each other as possible. If we change any of these factors, our CI/CD practice is incomplete, and we risk releasing a faulty product, which, after all, is exactly what CI/CD is meant to prevent.
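The build-once rule above can be sketched in plain shell: build a single artifact, record its checksum, promote the identical file from environment to environment, and verify it was never rebuilt along the way. The directory names and artifact here are invented stand-ins for real environments and a real build step.

```shell
#!/usr/bin/env bash
# Sketch of "build once, promote the same artifact" (paths hypothetical).
set -euo pipefail

WORK=$(mktemp -d)
mkdir -p "$WORK/build" "$WORK/test-env" "$WORK/prod-env"

# 1. Build exactly once (a real pipeline would compile here).
echo "application v1.0" > "$WORK/build/app.bin"
CHECKSUM=$(sha256sum "$WORK/build/app.bin" | cut -d' ' -f1)

# 2. Promote the identical artifact to the test environment.
cp "$WORK/build/app.bin" "$WORK/test-env/app.bin"

# 3. After tests pass, promote the very same file to production --
#    never rebuild between environments.
cp "$WORK/test-env/app.bin" "$WORK/prod-env/app.bin"

# 4. Verify the artifact was not modified anywhere along the way.
for env in test-env prod-env; do
  [ "$(sha256sum "$WORK/$env/app.bin" | cut -d' ' -f1)" = "$CHECKSUM" ] \
    && echo "$env: checksum matches"
done
```

Real CI/CD tools do this with artifact repositories and immutable image tags, but the principle is the same: the checksum that passed testing is the checksum that ships.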
Everything as Code
With a GUI, mouse, and keyboard, it is easy to overuse the convenience of quickly “clicking” things out: clicking “Build” in the IDE, dragging files to the server, changing the image version from a dropdown on the environment; even entire CI/CD pipelines can now be clicked together to your liking. However, do we know who changed what, and why? Sometimes yes; sometimes, unfortunately, no.
CI/CD pipelines and whole infrastructures expressed as code have several advantages.
First of all, we can keep them in the code repository, where we know what was modified, when, why, and by whom, as well as what the modification was about. In the case of Infrastructure as Code, we can additionally make changes to the application dependent on changes to the infrastructure. Going further, we can create entire environments from scratch that are 1:1 copies of other complex environments, simply by using the command line and previously prepared templates.
The investment in writing everything as code makes it easy to track changes and to replicate the work whenever you need to. Try clicking out an entire environment from scratch, or a new pipeline, when time is of the essence: code will always be faster than the mouse.
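As a toy illustration of recreating an environment from a previously prepared template, the sketch below reads a declarative spec file (the file name and its format are invented for this example) and converges the environment to match it. The spec itself is exactly the kind of artifact you would commit to the code repository, and re-running the script changes nothing, so it is safe to repeat.

```shell
#!/usr/bin/env bash
# Toy "environment as code": a declarative spec lists the directories an
# environment needs; the script creates whatever is missing. The spec
# format is invented purely for this sketch.
set -euo pipefail

ROOT=$(mktemp -d)
SPEC="$ROOT/env.spec"

# The spec would normally live in version control alongside the app.
cat > "$SPEC" <<'EOF'
app/config
app/logs
app/releases
EOF

# Converge: create anything missing, leave existing pieces untouched,
# so running this twice produces the identical environment (idempotent).
while read -r dir; do
  mkdir -p "$ROOT/env/$dir" && echo "ensured $dir"
done < "$SPEC"
```

Real Infrastructure as Code tools apply the same idea to servers, networks, and cloud services rather than directories: describe the desired state once, then let the tool converge any environment to it.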
Cloud makes you great
More and more of the world has become cloud-based, and the DevOps process seems perfect for such a world. Clouds are also constantly evolving, creating new services and updating older ones, so you will always need someone who stays up to date on the topic. This task often falls to DevOps Engineers, and companies are relentlessly looking for such people when recruiting. And let’s be honest: a DevOps Engineer, of all people, will know best which pieces will work well with the whole process.
Unfortunately, entering the world of clouds as a DevOps Engineer requires a lot of work and sometimes even money. The cloud giants do offer trial accounts, but not all services are included, which generates additional costs. Fortunately, the cloud leaders realise this and run free e-learning platforms, e.g., Microsoft Learn for Azure.
Golden advice from me: Master one cloud and the transition to another will be fairly easy.
The differences aren’t as great as most people make them out to be, and in any case, the most important thing is being able to navigate the documentation. With these basic skills, the cloud is wide open for you.
Words by Piotr Trautman, DevOps Engineer at Altimetrik Poland