On Writing Beautiful Tests

My first project at Crate.io was to implement CSV support for COPY FROM statements. This enhancement shipped in CrateDB 3.0.

During that work, I wrote my first tests for CrateDB, and I started to think about the way that I approach testing.

Before becoming a software developer, I worked as a process engineer at a manufacturing plant.

Manufacturing plants have the potential to be dangerous places to work. Poisonous or corrosive chemicals, heavy industrial equipment, and confined spaces are just a few of the potential hazards that a process engineer must control.

If something goes wrong, the consequences can be catastrophic. There are numerous examples of how inadequate plant safety has cost lives.

Process engineers must continuously think about how processes and systems can fail and then implement the appropriate controls. This is necessary to maintain a safe and consistently functioning environment.

It strikes me that software engineers have a similar sort of duty.

Usually, software failure is not catastrophic or life-threatening in the same way it can be in a manufacturing plant.

However, failures do have the potential to be both of these things. And in any case, as engineers, we should care about any failure.

All of this is a roundabout way of saying that when I moved from process engineering to software engineering, I took my skeptical engineering mindset with me.

Untested or hard-to-test systems make me uncomfortable.

Well-tested systems make me happy.

Why Testing, Though?

Proper testing techniques help you think through the behavior, and consequently, the failure conditions of the system you are designing. This process, in turn, enables you to avoid, eliminate, or mitigate failure.

Furthermore, properly tested code saves time in the long run.

Although you may not have thought of every edge case that could be encountered, giving edge cases some consideration means that your code will be more robust and you are less likely to have to revisit it when bugs appear.

Bugs will happen, no matter how well tested your code is. And debugging code that you (or someone else) wrote is often costly and frustrating. A good chunk of time is typically required just to (re-)familiarize yourself with the code and build a mental model of how it works so that you can start to fix the problem. This is something that well-written tests can also help with.

Tests also provide some reassurance when you want to refactor code. If you have a comprehensive test suite, it should alert you to any changes in behavior that your refactoring might've inadvertently introduced.

Testing Methods

There are many, many ways to test code. However, three of the most important methods that I tend to think about are:

  • Manual tests

    Testing that mimics the way an end-user would use the software, based upon established acceptance criteria, user stories, or something similar.

  • Integration tests

    Tests that validate that system components work together correctly.

    This sort of testing usually requires more work due to the complexity of testing multiple components at once. Because of this, integration tests are often written to test as much as possible with a small number of key scenarios.

  • Unit tests

    Tests that operate on individual components of the code.

    In my opinion, unit tests should be thorough and take into account every conceivable edge case.

Don't Write Your Tests Last!

I grimace a little bit when I hear someone say “I'm almost finished with the implementation. I just need to write the tests.”

Internally, I am screaming “You have done things in the wrong order!”

If an engineer says this, I tend to assume one of the following:

  • This person doesn’t care about the tests. They are going through the motions of testing the software because they feel like they have to.

  • They are probably going to write the bare minimum number of tests, and they aren’t likely to consider edge cases.

  • This person is an inexperienced engineer because the robustness of their software is an afterthought.

Why do I take such a hard line on not testing code after it has been written?
Because if you’re writing tests after you have written the code, you are probably going to end up testing the code you have written, instead of testing the desired behavior of the system. Which is relatively pointless.

In my opinion, tests should model the desired behavior of a system, and the code should be written around the tests.

Doing it this way around helps to ensure that form follows function. Moreover, it helps to keep your architectural footprint small because you tend to write just enough code to make the tests pass.

I like to think about this as working on a problem outside-in, instead of inside-out.

Outside-In Approach

Working outside-in means starting with clearly-defined acceptance criteria. Using these, write a strategic set of integration tests, move on to unit tests that specify what the code should do, and then finally, write the code.

The primary benefit of starting from the outside is that it helps you focus on what it is you're trying to achieve, which can help to reduce the likelihood that you get lost in the details of the implementation.

Personally, I tend to have difficulty with maintaining focus and need to find ways of working which help me manage this. Working from the outside-in is systematic and helps me think clearly about what I'm working on.

Two additional development methods can help with this: behavior-driven development and test-driven development.

Behavior-Driven Development

Behavior-driven development (BDD) is a means for developers, testers, product owners, and other stakeholders to agree upon the user-value of a feature or story.

BDD uses scenarios (or user journeys) to map out how a user may interact with a feature and the outcomes of that interaction.

Ideally, you should write scenarios in clear, simple, non-technical language to eliminate sources of ambiguity and to maintain focus on the user, rather than implementation details. Gherkin syntax is a common way to achieve this.

The structure of a scenario typically flows like this:

Given [these prior conditions]
When [this action is invoked]
Then [this outcome is expected]

Let's take the CrateDB feature I was working on as an example. A scenario for importing CSV data using a COPY FROM command might look like this:

Given I have created a table, my_table
And I would like to import a CSV file, file:///example.csv, with the contents:
    Code,Country
    IRL,Ireland
When I execute:
    COPY my_table FROM file:///example.csv
Then my_table has two columns ‘Code’ and ‘Country’
And those columns have a row ‘IRL’ and ‘Ireland’, respectively

This scenario is written in plain language, which means it should be understood both by technical and non-technical people.

In addition to this, I find that describing scenarios this way makes it relatively straightforward to explore the noteworthy ways a user might interact with the system, which, in turn, helps me to find edge cases.

Moreover, the result of this process is a collection of scenarios that clearly define the acceptance criteria and design assumptions for future reference.

Test-Driven Development

BDD has its origins in test-driven development (TDD), but that doesn't mean you have to choose one over the other. They are complementary approaches.

If you do BDD first, you can use the resulting acceptance criteria to write your lower level integration tests, and subsequently, your unit tests.

Typically, if you are a strict TDD adherent, you will start with the integration tests and create a number of scenarios that test the intended behavior of the system.

Next, you will progress to the unit tests. You will write the simplest test case you can think of, run that test to demonstrate that it fails, and then subsequently write the implementation which will make it pass.

You will then move on to the next test, and repeat this process to incrementally build up the code so that it completely satisfies the acceptance criteria.

At the end of this process, when all of the unit tests and implementation code have been written, it is expected that the integration tests will pass (although some tweaking of their implementation may be required).
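
To make that cycle concrete, here is a minimal sketch of a first red/green iteration in Java with JUnit. The CsvHeaderParser class and its parse method are hypothetical, invented for illustration rather than taken from the CrateDB code base: write the test first, watch it fail, then add just enough implementation to make it pass.

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class CsvHeaderParserTest {

    // Red: the simplest test case we can think of, written before any implementation exists.
    @Test
    public void parse_givenTwoColumnHeader_thenReturnsColumnNames() {
        List<String> columns = CsvHeaderParser.parse("Code,Country");

        assertEquals(Arrays.asList("Code", "Country"), columns);
    }
}

// Green: just enough implementation to make the test above pass.
class CsvHeaderParser {

    static List<String> parse(String headerLine) {
        return Arrays.asList(headerLine.split(","));
    }
}

A next iteration might add a test for, say, an empty header line, and the implementation would only grow as far as that new test demands.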

When working as a process engineer, I would sometimes consult the industry standard hierarchy of hazard controls. This hierarchy ranks hazard controls from the most to the least effective, like so:

  1. Eliminate the hazard
  2. Replace the hazard with something less hazardous
  3. Isolate the hazard
  4. Change the way people work
  5. Protect the individual against the hazard (usually through personal protective equipment)

If you write software tests first, you can hopefully jump straight to the most effective method for managing the likelihood of failure: eliminate the hazard. That is, you can design the problem out of your system before it finds a way in.

Communicative Code

In my opinion, it is not sufficient to write tests. You should also aim to write code that is easy to understand.

Computer code is, unsurprisingly, a set of instructions that tell a computer what to do. However, in another more important sense, it is a way of communicating with other human beings about how a problem was solved at one point in time. Even if that other human being is just a future version of yourself.

Over the total lifecycle of a piece of software, it is common to spend more time reading code than writing it. For that reason, I think it's important to make the code as easy to understand for a human being as possible. This reduces the amount of time needed to properly wrap your head around the code, which arguably leads to fewer bugs and happier programmers.

Because of this, I try to write tests which are communicative and act as a form of self-documenting code. And my primary tools for this are well-chosen test method names and test functionality abstraction.

Test Method Names

My preferred way to write test method names is to include three elements:

  1. The name of the method they are testing
  2. The setup
  3. The expected outcome

Take the following test method:

public void processToStream_givenFileIsEmpty_thenSkipsFile()

We can break the name down like so:

  1. processToStream: We are testing the processToStream method.
  2. givenFileIsEmpty: The method is given an empty file.
  3. thenSkipsFile: The method should skip the file.

When you name test methods like this, you can quickly scan the test methods to get an idea of what is being tested, without having to read the method definitions.

Test Functionality Abstraction

When I want to understand the behavior of some code quickly, I usually jump straight to the tests.

Tests act as a form of living documentation, since the code should not be able to change without the tests being updated along with it. On the other hand, plain language documentation and even (maybe especially!) code comments can quickly become outdated, and in any case, their practical usefulness can vary quite a bit.

One of the things that helps to make tests easy to understand is when the functionality is abstracted away behind well-named methods.

For example, let's look at the full definition of the method we used before:

@Test
public void processToStream_givenFileIsEmpty_thenSkipsFile() throws IOException {
    givenFileIsEmpty();

    whenProcessToStreamIsCalled();

    thenDoesNotWriteToOutputStream();
}

If the reader wants to know more about any one of these steps, she can look up the corresponding definition.

Let's look at them ourselves.

The first method mocks a Reader object that returns null when a line is read:

private void givenFileIsEmpty() throws IOException {
    when(sourceReader.readLine()).thenReturn(null);
}

The second method invokes the method being tested:

private void whenProcessToStreamIsCalled() throws IOException {
    subjectUnderTest.processToStream();
}

The third method verifies the expected outcome, which in our case is an output stream that has not been written to:

private void thenDoesNotWriteToOutputStream() {
    verify(outputStream, times(0)).write(NEW_LINE);
}

I have intentionally chosen a simple example here—one which perhaps does not require this level of abstraction—in the hopes that it is easy to understand. This approach offers more benefits when the tests are more complicated. You can check out the code I pulled this from if you want to see what that looks like.

Conclusion

Well-thought-out testing is essential, and in my opinion, one of the hallmarks of an experienced engineer.

Good engineering requires you to:

  • Consider edge cases and how your code may be vulnerable to failure.

  • Write tests so that they are understandable and valuable to future readers and maintainers of the code.

  • Structure your tests so that there’s a strategic and definitive set of pass-and-fail criteria at every significant level of the system.

Testing and thinking about modes of failure as you design code will help you to write more robust software, which is, in turn, easier to maintain. It also makes your code easier to refactor and easier to modify.

If those tests are written well, they adequately document the code and make it easier to jump into the code and get familiar. This is especially important for code that is developed collaboratively, whether that's an internal team or something like an open source project that depends on lowering barriers to code contribution.

In many respects, your test suite is more valuable than your implementation, because it specifies in detail what the implementation should do. One way of thinking about it is that if, for some reason, your implementation were lost, you should be able to recreate it faithfully from the tests.

P.S. Before I wrap this up, I would like to give a shout-out to someone who helped me tremendously when I was first starting out as a software engineer. Thank you, David Inkpen, for instilling in me an appreciation of software testing and for sharing techniques that have helped me write beautiful (yes—beautiful!) tests.