When it comes to automation, the first thing that comes to mind is the user interface. But test automation spans several layers, such as the unit level, the service level, and the API level, among others. Automation also raises organizational questions, chief among them: who "owns" each level, and how much should be automated at each one?
To present a streamlined approach, we will focus on three levels based on the Cohn pyramid.
The main idea of the pyramid is to focus our automation efforts on preventing errors rather than merely finding them. This is a paradigm shift, and one that ultimately yields greater benefits.
Base Layer: Unit Tests
The best way to lay solid foundations is to build the easiest, quickest, and cheapest tests possible. Run these tests constantly at the lowest level: this creates a steady stream of feedback about the code as the project moves forward. The main advantage is that bugs are found sooner and can be fixed much more quickly; with UI tests, the same bugs surface later and are harder to trace back to their cause.
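As a concrete sketch of this base layer, here is a minimal unit test in pytest style. The `apply_discount` function is a hypothetical example invented for illustration, not something from the article; the point is that each test exercises one small unit in isolation and runs in milliseconds.

```python
# A hypothetical function under test: applies a percentage discount.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Unit tests: fast, isolated checks of a single behavior each.
def test_regular_discount():
    assert apply_discount(200.0, 10) == 180.0

def test_no_discount():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # expected: out-of-range discounts are rejected
    else:
        raise AssertionError("expected ValueError for percent > 100")
```

Because tests like these touch no database, network, or UI, developers can run the whole suite on every change and get the constant feedback described above.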
These tests should reflect the developers’ vision regarding the software and they should be the ones in charge of creating and maintaining them throughout the life cycle of the application.
Mid-layer: Integration Tests
Once the foundations are strong and the software has passed all unit tests correctly, we can focus on integration tests. The main goal of these tests is to verify that all elements of the software work together: with these tests, we can reduce the “gaps” left unchecked by the unit tests.
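To illustrate one of those "gaps," here is a sketch of an integration test in which a service runs against a real (in-memory) SQLite database instead of a mock. `UserStore` and its schema are hypothetical; the idea is that the test verifies the service and the SQL layer actually work together.

```python
import sqlite3

# A hypothetical service that persists users in SQLite.
class UserStore:
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users ("
            "id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
        )

    def add(self, name: str) -> int:
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def find(self, user_id: int):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None


# Integration test: exercises the service together with a real database,
# catching schema or SQL mistakes that mocked-out unit tests would miss.
def test_round_trip():
    store = UserStore(sqlite3.connect(":memory:"))
    user_id = store.add("Ada")
    assert store.find(user_id) == "Ada"     # data survives the SQL layer
    assert store.find(user_id + 1) is None  # missing ids return None
```

A unit test with a mocked database would pass even if the `INSERT` statement were wrong; this test would not.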
Who should be in charge of creating these tests? I think the responsibility is shared: developers know the services, APIs, and other elements they helped create and how they should behave, and testers have a more functional vision; therefore, the best option is to try and leverage this synergy between both roles to build more robust tests.
Top Layer: End-to-end Tests
In 2015, Mike Wacker published his essay, Just Say No to More End-to-End Tests, in which he shows with practical examples that UI (user interface) tests are expensive both to create and to maintain. He also makes clear how much time and how many resources are wasted tracing a UI failure back to the real bug.
Therefore, it is best to use UI tests only when we need to simulate a real user, and we should be very specific when it comes to defining the scope and the goals of these tests. The number of tests should be minimal compared to the number of unit tests. As a rule of thumb, the percentage distribution should be:
- Unit tests 70%
- Integration tests 20%
- UI tests 10%
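The rule of thumb above can be turned into a quick budgeting helper. This is a small illustrative sketch (the function name and the idea of a fixed "test budget" are my own, not from the article):

```python
# Allocate a test-suite budget across the pyramid's layers
# using the 70/20/10 rule of thumb.
def pyramid_split(total_tests: int) -> dict:
    unit = round(total_tests * 0.70)
    integration = round(total_tests * 0.20)
    ui = total_tests - unit - integration  # remainder keeps the total exact
    return {"unit": unit, "integration": integration, "ui": ui}
```

For a suite of 200 tests, this yields 140 unit tests, 40 integration tests, and 20 UI tests, making the inverted weight of the pyramid concrete.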
Testers should be the ones creating these tests, since thinking like an end user is part of their role.
In essence, the main idea of this pyramid is to establish a solid automated testing strategy: one that treats automation as a set of different test types, applied at different levels and with different approaches, to maximize gains in both resources and quality.