What tests should we apply to our software, and what for?
“Tests are not optional.” What seems a truism to many remains one of the pending issues in the world of software application development today.
Yes, incredibly, there are still many colleagues who are not aware that programming without tests is not only like doing trapeze acrobatics without a safety net, but also a source of errors, bad practices and anxiety.
And for that reason I want to review the basic fundamentals of the tests we should apply to our developments, each where it is needed.
Why test?
Testing is the way to make sure that our program does what we want it to do, and does it without errors.
Software complexity surpassed the human capacity for comprehension and memorization decades ago, which necessarily means that failures and errors are inevitable if we try to catch them with our human abilities alone.
Who hasn’t left their code in a drawer for half a year and, on coming back to touch it, had the feeling that someone else wrote it? We do not recognize our own creation. And that is before we talk about working in a team, or receiving the “gift” of supporting or evolving inherited code.
For this reason, tests are essential: they allow us to guarantee that applications fulfill the functionality expected of them and the expectations of quality (not only code quality); they help find errors or defects not yet discovered; they reduce the cost of development and the cost of ownership for users; and they build customer confidence by avoiding annoying regression errors.
Not to mention the growing feeling of security the closer we get to a deployment: the more code we have, the more tests (in the form of a tight mesh) assure us that everything works correctly.
DevOps and the inheritance of automation
The arrival of Agile methodologies, starting in the 1990s, shook up how tests had been organized and executed in Waterfall processes.
In Waterfall, generally speaking, tests were mainly manual; meticulously defined in voluminous test-plan documents; and performed only once the coding of the software was finished.
Extreme Programming, on the other hand, put strong emphasis on automation and on the prevention-oriented testing concepts of the late 80s, marking in this way the future Agile philosophy. For this reason, we currently use test frameworks that allow the majority of tests to be run automatically in all areas of the application.
Even more so when you want to adopt Continuous Integration, an essential part of DevOps, where constructing the Build itself and validating it through all kinds of automatic tests is an inherent part of the process.
This becomes even more critical at high levels of maturity, where we apply automated or even continuous deployment.
The importance tests have gained has been such that the very way of coding software has also undergone profound changes. The birth of TDD (test-driven development), with its way of subjecting code to tests first, means that making software testable is an essential requirement of quality code.
And even if we do not use this advanced development technique (which is not easy), the goal of being able to automatically test our code has reinforced practices as important to object-oriented programming as SOLID.
Automated vs. manual
We have a first big division in the world of testing between automated and manual tests. As the name indicates, the former depend on a testing tool, which implies, in almost all cases, a specific language or subset of a language. That is, if I write my tests in NUnit it will be very difficult to port them to MSTest.
Manual testing requires human interaction. The tester takes on the role of the user to be validated and performs all the operations defined in a test plan, or “tickles” the system to reach places no user has reached before…
As you can see, both types of tests are complementary and important to guarantee quality software. Automation is fast and you can test many subtle variations in the data; you can also easily repeat the tests as the software evolves; and because it is executed by the system, fatigue and errors that sometimes accompany repetitive tasks are avoided.
On the other hand, although manual tests generally take longer to execute (since they are performed by a person), they often require much less configuration time. They are a good option for tests that only need to be run occasionally, or in cases where the cost/time of setting up the automation exceeds the benefits.
A universe of test types
True to the inherent complexity of our industry, tests also come in an endless myriad of types, versions, evolutions and classes. But let’s focus on the most important and essential ones, according to each case and context.
Unit tests: Unit tests are automated tests that verify functionality at the component, class, method or property level.
The main objective of the unit tests is to take the smallest piece of verifiable software in the application, isolate it from the rest of the code and determine if it behaves exactly as we expect. Each unit is tested separately before integrating them into the components to test the interfaces between the units.
Unit tests should be written before (or very shortly after) writing a method, with the developers who create the class or method being the ones who design the test.
In this way we keep the focus on what the code should do, and testing becomes a powerful tool for applying KISS and JIT: concentrating on what must be done instead of how, and avoiding the introduction of complexity without value.
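As a minimal sketch of the idea (the function, class and test names below are illustrative, not from any real project), a unit test takes one small piece of code, isolates it from everything else, and checks that it behaves exactly as expected:

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    # Each test exercises the unit in isolation: no database, no network,
    # no other components involved.
    def test_regular_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest` in the directory containing the file. Note how writing the test first forces a decision about behavior (what happens with an invalid percentage?) before writing the implementation.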
Integration tests: From a test perspective, the individual units are integrated together to form larger components. In its simplest form, two units that have already been tested are combined into an integrated component and the interface between them is tested.
Integration tests – or component tests – identify problems that occur when units are combined. New errors that arise are probably related to the interface between the units rather than within the units themselves, which simplifies the task of finding and correcting defects.
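Continuing the sketch (again with invented names), an integration test wires two already unit-tested pieces together and checks the behavior that emerges from their interface, rather than either unit alone:

```python
import unittest


class InMemoryUserRepository:
    """Unit 1: storage. Assumed already covered by its own unit tests."""

    def __init__(self):
        self._users = {}

    def save(self, user_id: int, name: str) -> None:
        self._users[user_id] = name

    def find(self, user_id: int):
        return self._users.get(user_id)


class GreetingService:
    """Unit 2: business logic, depending on the repository's interface."""

    def __init__(self, repository):
        self.repository = repository

    def greet(self, user_id: int) -> str:
        name = self.repository.find(user_id)
        return f"Hello, {name}!" if name else "Hello, stranger!"


class GreetingIntegrationTest(unittest.TestCase):
    # The test combines the two real units and exercises the interface
    # between them: does the service handle what the repository returns?
    def test_greets_saved_user(self):
        repo = InMemoryUserRepository()
        repo.save(1, "Ada")
        self.assertEqual(GreetingService(repo).greet(1), "Hello, Ada!")

    def test_greets_unknown_user(self):
        repo = InMemoryUserRepository()
        self.assertEqual(GreetingService(repo).greet(99), "Hello, stranger!")
```

A failure here points at the seam between the two units (for example, the service not handling `None` from the repository), exactly the class of defect the article describes.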
Regression tests: Whenever changes are made to a project, existing code may stop working correctly, or previously undiscovered errors may surface. This type of error is called a regression.
To detect these defects, the entire project must undergo regression testing: a new full test of the modified program, instead of testing only the modified units, to ensure that no errors have been introduced with the modifications.
As can be deduced, this type of testing must be automated, because it can comprise dozens or thousands of unit tests, integration tests or more.
A less expensive version is to build tests that replicate the actions that caused the regression, verifying that it has been corrected because the errors no longer occur; plus the unit tests that ensure the code that fixed the regression works correctly.
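That cheaper approach can be sketched like this (the bug and function are hypothetical): the test pins down the exact input that caused the original defect, so if the bug ever reappears, this test fails first:

```python
def parse_quantity(text: str) -> int:
    """Parse a quantity field coming from user input.

    Regression fix (hypothetical bug): an empty field used to raise
    ValueError and crash the order form; it must now be treated as zero.
    """
    text = text.strip()
    return int(text) if text else 0


# The regression test replicates the failing scenario from the defect
# report, plus a unit test confirming normal behavior still works.
assert parse_quantity("") == 0      # the input that triggered the defect
assert parse_quantity("42") == 42   # the ordinary case is unaffected
```

Each fixed defect leaves behind a permanent tripwire, so the regression suite grows alongside the project instead of being rebuilt from scratch.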
Functionality tests: Automated or manual tests that exercise the functionality of the application or module from the point of view of the end user, with their different roles, to validate that the software does what it should and, above all, what has been specified.
In their automatic version, these are tests automated to “save testing time”. From the manual test cases, cases are automated so they can be repeated on every execution. These are usually the most important cases (the happy flow) of the “vital” modules or business processes of the application; that is to say, the processes that always have to work and that under no circumstances can fail. The objective of automatic functional tests is to check that there are no regressions.
In the manual case, they are executed by a tester acting as a user, but following a series of steps established in the test plan, designed during requirements analysis to ensure that the software does what it should (positive cases), that it does not fail (negative cases) and that it is what was requested.
The tester will perform the actions indicated in each step of the test case, verifying that the expected result is fulfilled. If the result is different, a defect will be reported in detail: description, data used, screenshots, etc., to facilitate the solution.
The biggest problem functional tests face when automated is their fragility. Each test exercises thousands of lines of code, hundreds of integrations across all tiers, and a changing user interface. The cost of defining and maintaining such a test suite is often not sustainable.
Taking applications to their limits
We have already tested and deployed our application. Now comes the operations part, and we must also automatically test the capabilities and weaknesses of the software and the platform it runs on (infrastructure and dependencies), taking it to the limit to check its availability, stability and resilience.
Stress tests: Small-scale tests, such as a single user running a web application or a database with only a handful of records, may not reveal problems that occur when the application is used in “real” conditions.
The stress test pushes the functional limits of a system. It is done by subjecting the system to extreme conditions, such as maximum data volumes or a large number of simultaneous users.
They are also used to take the system to collapse or degradation, to check its continued operation above its limits and, once the load is released, to assess its resilience by how it returns to its optimal operating state.
And today the Cloud’s capabilities are increasingly used to simulate large numbers of users, distribute requests around the world, and obtain the processing, memory and storage resources needed for operations of this caliber.
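A toy sketch of the core idea (the `handle_request` function stands in for a real endpoint, and the numbers are illustrative): fire many simultaneous requests at a component and count how many succeed, fail or time out:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def handle_request(payload: int) -> int:
    """Stand-in for the system under test (e.g. an HTTP endpoint)."""
    time.sleep(0.001)  # simulate a small amount of work
    return payload * 2


def stress(concurrent_users: int, requests_per_user: int):
    """Submit many simultaneous requests and tally successes/failures."""
    ok, failed = 0, 0
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [
            pool.submit(handle_request, i)
            for i in range(concurrent_users * requests_per_user)
        ]
        for future in futures:
            try:
                future.result(timeout=5)  # anything slower counts as a failure
                ok += 1
            except Exception:
                failed += 1
    return ok, failed


if __name__ == "__main__":
    ok, failed = stress(concurrent_users=50, requests_per_user=20)
    print(f"{ok} requests succeeded, {failed} failed")
```

Real stress tools (JMeter, Gatling, k6, cloud-based load generators) do essentially this at a much larger scale, ramping the load up until the system degrades and then releasing it to observe recovery.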
Performance tests: Determine the responsiveness, performance, reliability and/or scalability of a system under a given workload.
In web applications, performance tests are often closely related to stress tests, measuring latency and responsiveness under heavy load.
In other applications (desktop and mobile applications, for example), performance tests measure the speed and utilization of resources, such as disk space and memory.
If we store the results of the performance tests over a period of time, we can know the application’s state of health, obtain trends and operating forecasts, and optimize each deployment according to the performance needed in each case.
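A minimal sketch of a performance measurement (the workload and the latency budget are invented for illustration): time an operation over many runs, summarize it in percentiles, and assert against a budget rather than an exact value:

```python
import statistics
import time


def measure(func, runs: int = 100) -> dict:
    """Measure wall-clock latency of func over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        samples.append(time.perf_counter() - start)
    return {
        "p50": statistics.median(samples),
        "p95": sorted(samples)[int(runs * 0.95) - 1],
        "max": max(samples),
    }


def operation_under_test():
    # Stand-in workload; in practice this would call the real code path.
    sum(i * i for i in range(10_000))


if __name__ == "__main__":
    stats = measure(operation_under_test)
    # A performance test asserts against a budget, not an exact number.
    assert stats["p95"] < 0.1, f"p95 latency too high: {stats['p95']:.4f}s"
    print({name: f"{value * 1000:.2f} ms" for name, value in stats.items()})
```

Storing these percentile summaries per build is what makes the trend analysis mentioned above possible: a slow drift in p95 across deployments is visible long before users complain.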
Security tests: Validate the security services of an application and identify possible failures and weaknesses.
Many projects use a black-box approach to security testing, which allows experts without knowledge of the software’s internals to probe the application for holes, flaws, exploits and weaknesses.
And this is just the beginning
Functional, Usability, Exploratory, Acceptance, Infrastructure, etc. The universe of testing is immense, and it is one of the branches of so-called “computing” that requires its own specific specialization.
Even more so with the arrival of Infrastructure as Code, which automates and improves processes such as Desired State Configuration, adding business-logic capabilities to building, maintaining and testing at the platform level.
Never forget: “Tests are not optional.”