TITLE: Getting Started With Unit Testing
AUTHOR: Eugene Wallingford
DATE: July 17, 2017 2:35 PM
DESC:
-----
BODY:
Someone wrote on the Software Carpentry mailing list:
I'm using Python's unittest framework and everything
is already set up. The specific problem I need help
with is what tests to write...
That is, indeed, a problem. I have the tool. What now?
Rather than snark off-line, as I did, Titus Brown wrote
a helpful answer
with generic advice on how to get started writing tests for code
that is already written, aimed at scientists relatively new to
software development. It boils down to four suggestions:
- Write "smoke" tests that determine whether the program works
as intended.
- Write a series of basic tests for edge cases.
- As you add new code to the program, write tests for it at the
same time.
- "Whenever you discover a bug, write a test against that bug
before fixing it."
Brown says that the smoke tests "should be as dumb and robust as
possible". They deliver a lot of value for very little effort.
I would add that they also get you in the rhythm of writing tests
without a huge amount of thought necessary. That rhythm is most
helpful as you start to tackle the tougher cases.
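To make that concrete, a first smoke test and a couple of
edge-case tests in unittest might look something like this
sketch. The average() function is invented for the example; it
stands in for whatever the program under test already does.

    import unittest

    def average(values):
        """Return the arithmetic mean of a sequence of numbers."""
        return sum(values) / len(values)

    class TestAverage(unittest.TestCase):
        def test_smoke(self):
            # Dumb and robust: call the function on ordinary input
            # and check that it returns the obvious answer.
            self.assertEqual(average([2, 4, 6]), 4)

        def test_single_value(self):
            # Edge case: a one-element list.
            self.assertEqual(average([5]), 5)

        def test_empty_list(self):
            # Edge case: an empty list should fail loudly.
            with self.assertRaises(ZeroDivisionError):
                average([])

    if __name__ == '__main__':
        unittest.main()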
Brown calls his last bullet "stupidity driven testing", which is
a self-deprecating way to describe a locality phenomenon that many
of us have observed in our programs: code in which we've found one
error is likely to contain others. Adding tests to this part of
the program helps the test suite evolve to protect a potentially
weak part of the codebase.
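In practice, that looks like capturing the failing input as a
test before touching the code. Here is a sketch; the module, the
function, and the bug are all invented for illustration:

    import unittest
    from stats import average   # hypothetical module where the bug was found

    class TestBugRegressions(unittest.TestCase):
        def test_average_accepts_generators(self):
            # Imagined bug: average() raised TypeError on a generator,
            # because len() does not work on one. Writing the test first
            # pins down the desired behavior; then we fix the code and
            # watch the test pass.
            self.assertEqual(average(x for x in [1, 2, 3]), 2)

    if __name__ == '__main__':
        unittest.main()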
He also recommends a simple practice for the third bullet that I
have found helpful for both bullets three and four: When you
write these tests, try to cover some of the existing, working
functionality, too. Whenever I add a new test to the growing
test base, I try to add one or two more tests not called for by
the new code or the bug I'm fixing. I realize that this may
distract a bit from the task at hand, but it's a low-cost way to
grow the test suite without setting aside dedicated time. I try
to keep these add-on tests reasonably close to the part of the
program I'm adding or fixing. This allows me to benefit from the
thinking I'm already doing about that part of the program.
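Concretely, the tests that accompany a new median() function
might come with a neighbor or two, along these lines (again, the
stats module and its functions are made up for the example):

    import unittest
    from stats import average, median   # hypothetical module under test

    class TestStats(unittest.TestCase):
        def test_median_of_odd_length_list(self):
            # The test the new code calls for.
            self.assertEqual(median([3, 1, 2]), 2)

        def test_median_of_even_length_list(self):
            # An add-on test, close to the code I'm already thinking about.
            self.assertEqual(median([1, 2, 3, 4]), 2.5)

        def test_average_of_negative_values(self):
            # Another add-on, covering existing, working functionality.
            self.assertEqual(average([-2, -4]), -3)

    if __name__ == '__main__':
        unittest.main()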
At some point, it's good to take a look at code coverage to see
if there are parts of the code that don't yet have tests written
for them. Those parts of the program can be the focus of
dedicated test-writing sessions as time permits.
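One way to do this with unittest is the third-party coverage.py
package, which can drive the test suite and report which lines
never ran. A minimal sketch, assuming the tests live in a tests/
directory:

    import unittest
    import coverage   # third-party: pip install coverage

    cov = coverage.Coverage()
    cov.start()

    # Discover and run the whole test suite under coverage measurement.
    suite = unittest.defaultTestLoader.discover('tests')
    unittest.TextTestRunner().run(suite)

    cov.stop()
    cov.save()
    cov.report()   # lists each file with the percentage of lines executed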
Brown also gives a piece of advice that seasoned developers should
already know: make the tests part of continuous integration. They
should run every time we update the application. This keeps us
honest and our code clean.
All in all, this is pretty good advice. That's not surprising.
I usually learn something from Brown's writing.
-----