Traditionally, QA testing revolved around taking a new piece of functionality and implementing it in a predetermined environment, where functional and regression tests were carried out within an agreed timeframe. For a while now, we have been changing and adjusting this process, taking small but decisive steps toward new horizons.
DevOps Quality Control Best Practices
- Clear quality requirements
- Measurable metrics
- Cultural shift
- Think automation
- Continuous integration
Clear Quality Requirements
From the start, we must have well-defined quality requirements that the project must meet. Quality assurance should focus on providing the best user experience and not the perfect piece of software.
Measurable Metrics
Projects must define metrics to measure their quality. These metrics must detect flaws in the software from the beginning of the development cycle.
Cultural Shift
We shouldn’t only talk about quality: individual and collective objectives should focus on it, reflecting a cultural shift in the organization and encouraging a culture of quality.
Think Automation
Everything should lead us to think about automation tools: automate every test that can be automated, prioritizing the critical sections of the software.
Continuous Integration
The more we test our code automatically, the more likely we are to minimize risk, reduce costs, and speed up delivery times. Continuous integration has to be part of the foundations of our culture.
Our Approach to DevOps QA at Intraway
The first step is to unify basic testing criteria at the story, task, and bug levels.
Definition of Standards
Stories
We added a “testable” field to determine which stories are going to be tested, and defined the required steps for that label:
- Create test cases.
- Create at least one test cycle.
- Execute the test cycle(s).
- Automate critical cases.
Test cycles bring about two important factors:
- They show the scope of testing in the sprint at the user-story level: if, for example, we had a single cycle for all user stories and one of them could not be closed, that execution cycle would remain open, as work in progress, for the entire sprint.
- They make it possible to determine the efficiency of testing quickly: in a scenario with 10 stories and 10 cycles, it becomes immediately evident if we could only run 3 of those cycles during the sprint.
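As a rough sketch, the story-level standard can be modeled as data; the field and type names below are hypothetical, not our actual tracker schema:

```typescript
// Hypothetical model of the story-level standard; the field names are
// illustrative, not our actual tracker schema.
interface TestCycle {
  name: string;
  executed: boolean;
}

interface Story {
  key: string;
  testable: boolean;    // the "testable" field described above
  testCases: string[];  // at least one test case when testable
  cycles: TestCycle[];  // at least one cycle when testable
}

// One cycle per story keeps sprint scope visible: an unfinished story blocks
// only its own cycle. Efficiency is then just the share of executed cycles.
function testingEfficiency(stories: Story[]): number {
  const cycles = stories.flatMap(s => s.cycles);
  if (cycles.length === 0) return 0;
  return (100 * cycles.filter(c => c.executed).length) / cycles.length;
}
```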
Bugs
There are a few critical conventions when it comes to bugs.
Bug Source “Testing”
All bugs detected at the testing stage, regardless of who reports them, should be assigned the bug source “testing”.
Link Bugs to Stories
We will use two relationship types, sketched in code below:
- “Story is failed by Bug” when a bug prevents the story from being resolved.
- “Bug is caused by Story” when the bug was caused by this story, but the bug does not prevent it from being resolved.
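A minimal sketch of these two link types and the rule they imply; the names are illustrative, not our tracker’s actual API:

```typescript
// The two bug-story link types as data; names are an illustrative sketch,
// not our tracker's actual API.
enum LinkType {
  StoryIsFailedByBug = 'Story is failed by Bug', // bug blocks the story's resolution
  BugIsCausedByStory = 'Bug is caused by Story', // bug originated in the story, non-blocking
}

interface BugLink {
  bugKey: string;
  storyKey: string;
  type: LinkType;
}

// A story can only be resolved while no open bug "fails" it.
function canResolveStory(storyKey: string, openLinks: BugLink[]): boolean {
  return !openLinks.some(
    l => l.storyKey === storyKey && l.type === LinkType.StoryIsFailedByBug,
  );
}
```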
Testable Label
If a bug is tagged as “testable”, it will be treated as a story; therefore, we will follow the same steps (see the sketch after this list):
- Create test cases.
- Create at least one test cycle.
- Execute the test cycle(s).
- Automate reproduction of bug.
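For the last step, a pinned regression test might look like the following Jasmine-style sketch; the bug id and the function under test are made up:

```typescript
// Hypothetical pinned regression test: it reproduces a fixed bug so the
// defect cannot silently return. BUG-1234 and applyDiscount are made up.
function applyDiscount(total: number, percent: number): number {
  return total - (total * percent) / 100;
}

describe('BUG-1234: discount was applied twice', () => {
  it('applies the discount exactly once', () => {
    expect(applyDiscount(200, 10)).toBe(180);
  });
});
```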
Automation Standards
Tasks
At first, we had no real numbers on how much time we spent creating tests at the integration level and during the ATP. That is why we created an “automation” task type, which makes the effort involved in generating tests visible and, in turn, helps us reduce the amount of time spent creating and managing tests.
Tools
In order to define our tools, we tried to find a framework that would allow our code to be reusable, maintainable, and stable. So we chose to focus on two types of tests, UI (user interface) and integration.
- UI tests:
We use a framework called Protractor, which has the benefit of being developed for Angular and AngularJS applications. Protractor adds a layer on top of the WebDriver API with its own functionality for Angular. Its use of promises and its versatility in complementing frameworks such as Jasmine and Mocha provide us with full syntax, scaffolding, and reporting tools.
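A minimal sketch of such a Protractor + Jasmine test; the URL, selectors, and credentials are placeholders:

```typescript
import { browser, element, by } from 'protractor';

// Sketch of a UI test; the URL, selectors, and credentials are placeholders.
describe('login page', () => {
  it('logs the user in and lands on the dashboard', async () => {
    await browser.get('https://app.example.com/login');
    await element(by.css('input[name="username"]')).sendKeys('demo');
    await element(by.css('input[name="password"]')).sendKeys('secret');
    await element(by.buttonText('Sign in')).click();
    expect(await browser.getCurrentUrl()).toContain('/dashboard');
  });
});
```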
- Integration tests:
In this stage, which we believe is critical in our applications, tests tend to be faster and much more reliable than UI tests.
SoapUI gives us the versatility to test both SOAP and REST services, with a user-friendly interface that allows us to create suites, tests, assertions with regular expressions, mocks to simulate behavior, and more.
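SoapUI suites are defined in its own project files rather than in code, but the kind of check involved can be illustrated as follows; the endpoint and response shape are hypothetical:

```typescript
// SoapUI suites are defined in its own project files; this is only the same
// kind of check expressed in code. Endpoint and response shape are hypothetical.
describe('subscriber API', () => {
  it('returns a well-formed subscriber id', async () => {
    const res = await fetch('https://api.example.com/subscribers/42');
    expect(res.status).toBe(200);
    const body = await res.json();
    // A regular-expression assertion, as we would configure it in SoapUI
    expect(String(body.subscriberId)).toMatch(/^SUB-\d{6}$/);
  });
});
```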
Testing Automation
In order to implement DevOps QA quickly, the first idea that might spring to mind is to “automate” no matter how; just to automate test after test, generally UI tests, without a clear strategy in sight. That path leads to imminent failure.
To avoid failure and to generate more and better feedback on how our applications behave, Cohn’s Pyramid is a great alternative.
Pyramid
In brief, the pyramid helped us understand that the best way to generate that feedback is to automate our applications in layers. We took that strategy and implemented it in our continuous integration tool, generating 3 well-defined stages, shown below.
As a rule of thumb, the distribution should follow the pyramid’s shape: a broad base of unit tests, a smaller layer of integration and smoke tests, and only a few UI/ATP tests at the top.
Unit Tests
The best way to lay solid foundations is to build the easiest, quickest, and cheapest tests possible. Performing these tests constantly at the lower level creates a constant stream of feedback about the code as you move forward with the project. They are executed in step one of the “Applications” workflow of Continuous Integration.
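A minimal Jasmine-style sketch of such a test; the function under test is a made-up example:

```typescript
// A minimal unit test; priceWithTax is a made-up example function.
function priceWithTax(net: number, rate = 0.21): number {
  return Math.round(net * (1 + rate) * 100) / 100;
}

describe('priceWithTax', () => {
  it('applies the default rate', () => {
    expect(priceWithTax(100)).toBe(121);
  });

  it('accepts an explicit rate', () => {
    expect(priceWithTax(100, 0.105)).toBe(110.5);
  });
});
```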
Smoke Tests
Here, we do not try to test the entire system exhaustively; instead, we analyze the most critical functionalities: verifying that connections are established correctly, that the application launches, that the login works, that the main button of the application works, and so on.
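A smoke suite can be as small as a couple of reachability checks; the URLs below are placeholders:

```typescript
// A smoke suite can be a handful of reachability checks; URLs are placeholders.
describe('smoke', () => {
  it('application is up', async () => {
    const res = await fetch('https://app.example.com/health');
    expect(res.ok).toBe(true);
  });

  it('login endpoint responds', async () => {
    const res = await fetch('https://app.example.com/api/login', { method: 'POST' });
    // Reachable even if the empty request is rejected
    expect([200, 400, 401]).toContain(res.status);
  });
});
```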
Integration Tests and ATP Tests
Integration
The primary goal of these tests is to verify that all elements of the software work together. With these tests, we can reduce the “gaps” left unchecked by the unit tests.
ATP tests
It is best to use UI (user interface) tests only when we need to simulate a real user, and we should be very specific when it comes to defining the scope and the goals of these tests.
For further information, refer to Automation Levels and the Ideal Test Pyramid.
Monitors
In line with the pyramid above, we have monitors for the three levels: unit tests, smoke tests, and ATP tests. The last two types are generated automatically through a plugin of our own design.
Penetration Testing & Vulnerability Assessment
After analyzing most of the automated tools on the market, we chose ZAP to run penetration and vulnerability tests against our front ends, which analyze:
- Buffer Overflow
- Code Injection
- Command Injection
- Client Browser Cache
- Cross Site Scripting (Reflected and Persistent)
- CRLF Injection
- Directory Browsing
- External Redirect
- Format String Error
- Parameter Tampering
- Path Traversal
- Remote File Include
- Server Side Include
- SQL Injection
Penetration tests should be planned to run before each release. We are working on an initiative to automate pen tests along with ATP tests.
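As a concrete example, ZAP’s scan scripts can be run from Docker, e.g. `docker run -t owasp/zap2docker-stable zap-baseline.py -t https://our-frontend.example.com` for a passive baseline scan (the target URL is a placeholder); the active scan variant is what exercises the injection categories listed above.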
Examples
Metrics
We keep track of a few KPIs that summarize the performance of a development team. Every KPI is measured per sprint, and we take into account both manual and automated testing.
The main KPIs that we measure are the following:
Manual
% of story coverage
- Description: We measure the number of stories that have test cycles generated and ready to be executed.
- Objective: Give visibility into the scope of testing in the sprint at the user-story level and determine the team’s efficiency quickly.
% of early bug detection
- Description: We calculate the number of bugs found in the testing stage, compared with the other stages.
- Objective: We want to reduce the cost of our bugs by finding them mainly at an early stage.
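As an illustration, both manual KPIs reduce to simple per-sprint ratios; the shapes below are hypothetical, not our tracker’s schema:

```typescript
// Both manual KPIs are simple per-sprint ratios; the Story and Bug shapes
// below are hypothetical, not our tracker's schema.
interface SprintStory { testable: boolean; hasTestCycle: boolean; }
interface SprintBug { source: 'testing' | 'development' | 'production'; }

function storyCoverage(stories: SprintStory[]): number {
  const testable = stories.filter(s => s.testable);
  if (testable.length === 0) return 0;
  return (100 * testable.filter(s => s.hasTestCycle).length) / testable.length;
}

function earlyBugDetection(bugs: SprintBug[]): number {
  if (bugs.length === 0) return 0;
  return (100 * bugs.filter(b => b.source === 'testing').length) / bugs.length;
}
```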
Automation
% of UI tests
- Description: We measure the number of new automated tests generated, both UI and ATP, across the three layers of the pyramid.
- Objective: Build a safety net that reduces errors, improves security, and frees up our people to focus on higher-value work.
% of hours dedicated
- Description: We calculate the number of hours we dedicate to automated testing and how long it takes us to automate in each layer.
- Objective: Measure team productivity with respect to automated tests.