The Tangled Terms of Testing


When it comes to testing, most developers admit they probably don’t do as much as they should. Developers, testers, and even end users get blocked for various reasons, but one of the first is becoming overwhelmed by the sheer number of terms and approaches.

Buzzwords abound, including phrases like: black box, white box, unit, incremental integration, integration, functional, system, end-to-end, sanity, regression, acceptance, load, stress, performance, usability, install/uninstall, security, compatibility, comparison, alpha and beta.

Misapplied testing terms can even lead developers to do the opposite of what they need to. Getting a handle on what exactly “unit testing” means can help break through the quagmire and move on to being more productive. The two subjects covered here are “levels” of testing and automated, build-time testing.

Levels of Testing

First, it helps to group a few related terms and lay out their basics. If you group testing by levels, it can be thought of as a progression from unit testing through integration testing, system testing, and then finally to acceptance testing. The process starts with unit tests, which cover individual parts (functions, classes, etc.), and ends with acceptance tests to verify the software is doing what was promised. Ideally, developers should be writing tests and performing unit testing while they are writing the initial code.

On the other hand, portions of system testing and most acceptance testing would normally fall to QA departments. Of course, with smaller projects, end users or developers sometimes play the role of QA. However, it is important for them to keep in mind that they are fulfilling a different role at the time, and to act accordingly.

[Figure: the testing chain, progressing from unit tests through integration and system tests to acceptance tests]

The different levels/types are usually not completely separate as there is some ambiguity, and trying to pin down the fluid reality to rigid definitions can easily waste time and give poor results. With that said, there are some general descriptions of these levels that can help get a feel for what a test may or may not count as. More importantly, keeping the different levels in mind can help to write better, more efficient tests.

Unit tests are generally the starting point and should be done by developers writing any code. They should be small, focused, and require minimal setup. They also shouldn’t perturb the system they are being run on. If one has to execute a test as root or an administrator, then it is probably not a unit test.
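To make those traits concrete, here is a minimal sketch in Python’s `unittest`. The `slugify` helper and its test names are purely illustrative, not from any particular project:

```python
import unittest

# Hypothetical function under test: a small, pure helper with no
# external dependencies, so it is easy to exercise in isolation.
def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Small, focused, no setup, no special privileges: unit-test traits.
    def test_basic_title(self):
        self.assertEqual(slugify("Tangled Terms"), "tangled-terms")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  Unit   Testing "), "unit-testing")
```

Note that such a test needs no installation step and no elevated privileges, and it leaves the machine it runs on untouched.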

Integration tests combine different parts together; however, they normally don’t require an entire product to be assembled. They may or may not affect the system they are run on, and thus start to blur the line between what might fall to build-time tests or developer-written tests.
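As one illustration of that blurred line, the following sketch combines two hypothetical units (a settings writer and reader, both invented for this example) and touches the filesystem, which a strict unit test would avoid:

```python
import json
import tempfile
import unittest
from pathlib import Path

# Two hypothetical units: one persists settings, the other reads them back.
def save_settings(path, settings):
    Path(path).write_text(json.dumps(settings))

def load_settings(path):
    return json.loads(Path(path).read_text())

class TestSettingsRoundTrip(unittest.TestCase):
    # An integration test: it exercises save and load together and writes
    # to the filesystem, though only inside a throwaway temp directory.
    def test_round_trip(self):
        with tempfile.TemporaryDirectory() as tmp:
            path = Path(tmp) / "settings.json"
            save_settings(path, {"theme": "dark", "autosave": True})
            self.assertEqual(load_settings(path),
                             {"theme": "dark", "autosave": True})
```

Because it only writes inside a temporary directory, this test is still safe to run on every build, even though it is no longer a pure unit test.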

System tests are usually a little easier to spot. The main thing that distinguishes a system test is that it tests an entire product as a whole and normally requires installation of the software that is being tested.

Acceptance tests check for end-to-end functionality and user experiences. The distinction between a system test and an acceptance test might be hard to spot, or might not really exist at all.

However, in a more formalized development environment, acceptance tests are tied to meeting exit criteria or declaring releases. Frequently, engineering groups will do their own acceptance testing before declaring sufficient quality to pass the software on to QA, who will then begin their own set of acceptance tests.

Automated Unit Testing

Automated unit testing during builds is probably the single most important practice for faster development and better quality, time and again. The general idea is to have a set of tests that run automatically each time the software is built. Thus, if a developer changes something in one place that causes problems elsewhere, the automated build tests will catch the breakage when the changes are first checked in, or, better still, before they even have a chance to be committed. The cost to fix a problem, in time, effort, and money, is much lower when it is caught early, and grows the longer it goes unresolved.
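One way to sketch such a gate, using Python’s `unittest` with an illustrative in-line suite standing in for a real project’s tests, is a small function the build script calls as its final step; a nonzero return fails the build:

```python
import unittest

# A hypothetical fast check standing in for a project's unit-test suite.
class QuickChecks(unittest.TestCase):
    def test_arithmetic_still_works(self):
        self.assertEqual(2 + 2, 4)

def run_build_gate():
    """Run the fast suite and return an exit code for the build script.

    0 means all tests passed and the build may continue; 1 fails the build.
    """
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(QuickChecks)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return 0 if result.wasSuccessful() else 1
```

A build script or CI job would end with `raise SystemExit(run_build_gate())`, so any failing test stops the build before the change can spread.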

This kind of testing normally sits at the unit test level, but it can extend somewhat into the realm of integration tests. The key is that such automated unit tests should be quick to build and execute. Every developer has these tests run on each build, as do any build machines, such as those running a continuous integration system. If certain tests start to take too long to build or run, I often suggest moving them to a secondary build that covers extended testing. This falls somewhere between basic build-time unit testing and full acceptance testing, and could perhaps be run once a day rather than on every build. From my own experience, a rule that works well is to ensure such extended testing runs at least once a week.


I was surprised the first time I heard of technical directors and software architects instructing developers not to write unit tests because they “should be written by QA engineers, not developers.” However, such cases are usually the result of misunderstandings, such as not realizing that unit tests are not synonymous with acceptance tests. Clarifying terminology ends up saving a lot of time while simultaneously boosting software quality. Understanding what unit tests are and how they should be used is something all developers and testers should know.

Now that I’ve defined these terms in order to establish a common understanding, the next installment in this blog series will go into details about structuring and writing better unit tests.

Author: Jon Cruz

Jon is an Inkscape core developer and board member, participates as a mentor in Google's Summer of Code, and contributes to Wayland.
