Friday, June 21, 2013

5 common problems of (our) test code

So hopefully everyone's writing (unit-)tests for their code these days. Good! If you are one of us, you probably know that there are good tests as well as bad ones. At the end of the day, test code is just code like any other, and if you don't pay attention you can write some really bad test code that will make your life miserable in the long run.

I have been writing tests for most of my code for years now and have done thousands of code reviews for my teammates. I'm also one of the guys who run Coderetreat events in the Czech Republic, so I have probably seen it all. The good stuff as well as the bad stuff. :)

So without further ado, here is the list:

The test is not being executed at all

This problem is common when you write the test after you are done writing your production code. It happens when you place the test file in the wrong directory, forget a test annotation required by your test framework, or there is some other reason why your test is not visible to the test runner. It is pretty hard to spot because everything seems fine and all the tests are passing.
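
For example, if you use Test::Class, only methods marked with the :Tests attribute get picked up by the runner. A rough sketch (the add and subtract functions are made up just for illustration):

# Gets executed: Test::Class sees the :Tests attribute.
sub addition_works : Tests {
    is(add(2, 2), 4);
}

# Never gets executed: the :Tests attribute is missing, so the runner
# silently skips this method and the whole suite still passes.
sub subtraction_works {
    is(subtract(5, 2), 3);
}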

This comes with all the benefits of dead code hanging around. It will eventually get out of sync with the production code, and the person who eventually encounters it will probably get really confused. It's even worse when combined with other smells.

Solution: Make sure that you see your test fail! Just break your production code and check that your test suite notices when you run it. This way you will never commit a test which is not being executed. If your development environment differs from others (e.g. the CI build server environment), make sure that your test will actually get executed everywhere it needs to run.

Too generic test names

Sometimes, when programmers are being lazy, they will not take the time to name the tests appropriately.

Example:

sub test_foo : Tests {
    is(foo(2, 2), 4);
}

This name just says that it's hopefully going to test the foo function somehow. There is no way for the reader to tell whether the actual test makes sense, or even what the purpose of the foo function is. Does it sum the numbers? Multiply them? Or does it simply return the number 4 for any given input? Good test names make it clear what's being tested, and they should also reveal the purpose of the production code (which should use good naming too).

Is it hard to come up with a good name for a given test? Your production code may be the problem. Maybe it does too much, or you don't really have an idea what it should actually do. Listen to your tests while you are writing them and fix the problem right away.

Improved example:

# better
sub test_foo__returns_sum_of_given_numbers : Tests {
    is(foo(2, 2), 4);
}

# Test::Spec
describe "foo" => sub {
    it "returns sum of given numbers" => sub {
        is(foo(2, 2), 4);
    };

    it "returns 0 for empty input" => sub { ... }
};

Some test frameworks make writing good test names easier than others; RSpec for Ruby and Test::Spec for Perl are two examples.

Solution: Just ask yourself whether it would be possible to throw away the existing production code and rewrite it from scratch just by looking at the tests.

Unhelpful failure messages

Oh man, how I love it when I break some test and get an error message which tells me that there might or might not be a problem. Somewhere.

Solution: When you are writing a test, make it fail and check the error message. Is it obvious from the message what went wrong, and does it point you in the right direction? No? Chances are you can improve your test code so that it fails in a more helpful way. It might force you to write a different test or even alter your production code. But that's OK, and it's definitely worth it.
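
For example, with Test::More the very same check can fail in two very different ways (reusing the foo function from the naming example above):

# Fails with just "not ok 1" and a line number. Good luck.
ok(foo(2, 2) == 4);

# Fails with the got/expected values and a description of the intent.
is(foo(2, 2), 4, 'foo returns sum of given numbers');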

Setup data that doesn't affect test result

Most tests require some setup data. Chances are that in order to create this setup data you need even more data which is unrelated to the given test. Let's look at this example:

sub order_is_not_valid_when_customers_email_is_invalid : Tests {
    my $customer = Customer->new(
       name => 'John Doe',
       email => 'invalid-email.com'
    );

    my $order = Order->new(
       customer => $customer
    );

    ok !$order->is_valid;
}

We are testing Order, but it requires a Customer. In order to create the Customer we have to supply a name, even though the production code doesn't care about the name at all.

Solution: Move the unrelated stuff out of the test. Use builders and variables with good names defined outside of the test. This will not only make your test more readable but also less fragile, as it follows the DRY principle.

sub order_is_not_valid_when_customers_email_is_invalid : Tests {
    my $order = Order->new(
       customer => build_customer(email => $any_invalid_email)
    );

    ok !$order->is_valid;
}
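
The build_customer helper and $any_invalid_email are not part of any framework; here is one possible sketch of how they could live in your shared test support code:

our $any_invalid_email = 'invalid-email.com';

sub build_customer {
    my (%overrides) = @_;
    # Sensible defaults live here, so individual tests only have to
    # mention the attributes they actually care about.
    return Customer->new(
       name  => 'Any Name',
       email => 'any.name@example.com',
       %overrides,
    );
}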

Magic values

Test code should follow the same principles as clean production code, so it should not contain magic values.

Solution: Put your test data into variables with good names so that they reveal intent clearly. E.g. use $any_invalid_email instead of '%#$@%$@#%@#$'.
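
To stay with the order example (reusing the made-up build_customer helper sketched above), the difference is easy to see side by side:

# Magic value: the reader has to guess why this particular string matters.
build_customer(email => '%#$@%$@#%@#$');

# Named value: the intent is obvious at a glance.
build_customer(email => $any_invalid_email);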

Conclusion

In order to get the most out of your test suite, you should keep it in good shape. Tests should be there to make future changes to your software easier. If they make your life harder while you are fixing bugs or introducing new features, chances are that your test suite has a problem and you should do something about it. Or it may be that you didn't listen to your tests carefully enough while you were writing them and your production code sucks now.

In this post I wanted to share common problems of our test code, but there is a lot more to writing useful test suites. If you want to learn more about this stuff, I recommend the books "Test-Driven Development: By Example", "Growing Object-Oriented Software, Guided by Tests" or "The Art of Unit Testing".

If you speak Czech, then you might be interested in articles about TDD by Daniel Kolman or Aleš Roubíček. You might also want to attend future CodeRetreat.cz events where we practice (not only) TDD.
