It all starts during development, at the moment of solving a problem with code - when issues that look complex end up being one-liners.
I encountered an example of this recently, as I was implementing a change. A task that took me a few hours of work was in the end completed by adding - literally - 38 characters to the source code.
One Small Step for the Code, One Giant Leap for the Application
Don’t get fooled by the puny 38 characters. Even small changes may critically impact the way the entire application works. Take for instance communication interfaces.
When the main access point for our application is its web interface, a change made to the HTTP communication code can disrupt the handling of the protocol, e.g. by malforming requests or responses. A bug like this may easily bring down the entire application.
Too much confidence
After a few successful projects (or in some cases just tasks), it is easy to become overly convinced of one’s amazing skills. No one criticizes our code during Code Review. No critical errors show up in tests. Or even better - someone praises us during retro for delivering a complex application feature efficiently.
Moreover, small changes, especially those little “improvements”, are typically made in existing code, which brings a false sense of security. As a result, we sometimes change code following a gut feeling and don’t think twice about what might go wrong.
The code gets modified, compiled (for the lucky ones among us), or passed through a linter to check the code style and syntax - and more often than not, that’s it. A key habit is missing. The habit of performing regression tests.
H(a/u)ck it, fast!
We talk a lot about quick iteration and rapid value delivery, often glossing over quality because time is of the essence. Under time pressure, we start making changes instinctively - a condition here, a second one there - modifying logic even when it is time-tested and considered valid.
We often have components that are 100% working and compatible within their own context. Then we start reusing them elsewhere, in external codebases, creating new dependencies. Such coupling is not always easy to loosen later, but to stay fast, we often just work with what we have. As a result, we end up with a fragile solution, where a change introduced to a single component is often more than enough to cause errors in code that depends on it.
But who has the time to verify that the application starts up correctly, right? Or to check that it still works, even if only along the Happy Path?
A small change is not supposed to break anything. But it may break components in a different part of the system if they are not updated to match, e.g. to use a new interface. Such issues are near impossible to catch without running appropriate tests.
Step by step
How to go about verifying your changes, then? Here’s a simple checklist that I encourage you to follow before releasing code modifications - to Code Review or QA verification, not to live environments.
It’s based on six simple steps:
- Run the code through static code analysis (SCA) tools
- Run the available automated tests
- Compile the project
- Run the project
- Check the Happy Path of the features you modified or extended
- Verify the nearest integration endpoints that your change may have impacted
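The steps above can be sketched as a small shell script that runs each check in order and stops at the first failure. This is just a sketch - the commented-out example commands (eslint, npm) assume a hypothetical Node.js project; substitute whatever tools your stack actually uses:

```shell
#!/bin/sh
# Minimal pre-release checklist runner - a sketch, not a full CI setup.
# run_step prints the step name, runs the given command, and aborts
# the whole script at the first failing step.
run_step() {
  name=$1
  shift
  printf '>> %s\n' "$name"
  if "$@"; then
    printf 'OK  %s\n' "$name"
  else
    printf 'FAIL %s\n' "$name" >&2
    exit 1
  fi
}

# Example wiring - these commands are assumptions for a Node.js project:
#   run_step "static analysis (SCA)" npx eslint .
#   run_step "automated tests"       npm test
#   run_step "build"                 npm run build
#   run_step "start the app"         npm start
```

The point of stopping at the first failure is the same as in the checklist: there is no value in smoke-testing an application that does not even pass static analysis.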
As you’ll notice, every step mentioned in the list is a test of an application layer - coding standards, builds, execution, and functionality.
This is a starting point. There may be more elements in your application that have to be verified after each change. Feel free to make your own checklist (for the team or just for yourself), but make sure to go through it every time you release a change until it becomes a habit.
In order to avoid work cycles where time is mostly spent manually verifying changes, it’s a good idea to introduce automation.
Not every project needs to strictly follow Continuous Integration/Delivery/Deployment practices, but the tools that handle those processes can be readily used to handle our repetitive tasks.
In my projects, the first four steps listed above are handled as code integrity checks, triggered after every push to any branch in the repository.
Most recently, I have been doing this using GitLab CI, which allows me to verify all metrics of static code analysis, run automated tests and build the application in an isolated, repeatable environment based on Docker containers.
Only when all four checks pass do I get to deploy changes into a test environment and perform the remaining two steps (using Docker images and Docker Compose). Not only is this a quick way to focus my regression testing on the most critical aspects, but it also lets me add Smoke Tests (which can be automated using e.g. Cypress - a tool I also recommend).
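To make this concrete, such a pipeline can be sketched in a few lines of GitLab CI configuration. This is a minimal, hypothetical `.gitlab-ci.yml` - the job names, the Node.js image, and the commands are illustrative assumptions, not my exact setup:

```yaml
# Hypothetical .gitlab-ci.yml sketch - adapt names and commands to your stack.
stages:
  - verify
  - deploy

code-checks:                # steps 1-4: SCA, tests, build, start-up check
  stage: verify
  image: node:20            # isolated, repeatable Docker-based environment
  script:
    - npx eslint .          # static code analysis
    - npm test              # automated tests
    - npm run build         # compile/build the project
    - node dist/main.js --version   # quick check that the app starts at all

deploy-test-env:
  stage: deploy
  needs: [code-checks]      # deploy only after all integrity checks pass
  script:
    - docker compose up -d  # bring up the test environment for steps 5-6
```

With a layout like this, a push that fails any of the first four checks never reaches the test environment, so the manual Happy Path and integration checks only run on builds that are already known to be sound.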
With this approach, I avoid a lot of manual testing in a local environment, as most of the tests are automated. This solution is not ideal - more like good enough - but the key lesson here is that automating testing processes does not require a complex architecture or years of experience.
Furthermore, this will also force you to rethink automation in the context of other parts of the application, such as databases. Needless to say, it is a good idea to avoid having to run database update scripts manually every time the application is automatically deployed to the test environment.
Let’s focus on the three most important takeaways:
Writing software is a bit like solving math problems.
Back in my school days, I once solved all the tasks on my exams and got results that looked good in theory. I was really counting on getting at least a B+. When the grades came back, however, I was shocked to find a barely passing D-.
Why? Because I did not verify my solutions. I trusted my intuition too much.
Simply put, you need to verify every change you make. Start with SCA tools, tests, and compilation, then run the application, check the Happy Path, and verify the nearest integration points your change may have affected. Consider Smoke Tests, as they might not only save you time but also give you more peace of mind.
Naturally, you will not always catch every issue with just this process. Sometimes more comprehensive testing will be necessary. But still, even performing just the above-mentioned steps habitually will give you an edge over people who don’t do them.
I like to save time, especially on tasks that I am not very fond of (you know, I really like to write code). Endless manual testing? That’s an easy way to get frustrated. Even more so with complex testing procedures.
Whenever you find yourself repeating testing steps manually, consider automating them. Use tools that require little time and effort, and give you quick results.
And what to do with the time you manage to save? Use it for the tasks that you enjoy in your work, like writing code. The code for automated tests, not just implementation deliverables.
Well-prepared automated tests are exactly what is needed for long-term application development. Good tests are invaluable when it comes to making code changes. Whenever a code change is incorrect, you really want to find out about it right away!