Testing, Continuously

John Jacobsen

This is the fourth post in a series about Clojure workflows.

In my last post, I laid out a workflow based on TDD but with the addition of “literate programming” (writing prose interleaved with code) and experimentation at the REPL. Here I dive a bit deeper into my test setup.

Even before I started developing in Clojure full time, I discovered that a configuration providing near-instant test feedback made me more efficient as a developer. I expect to be able to run all relevant tests every time I hit the “Save” button on any file in my project, and to see the results within a few seconds (preferably fewer than three or four). Some systems that provide this are:

  1. Ruby-based Guard, commonly used in the Rails community;
  2. Conttest, a language-agnostic Python-based test runner I wrote;
  3. in the Clojure sphere:
    1. Midje, using the :autotest option;
    2. Expectations, using the autoexpect plugin for Leiningen;
    3. Speclj, using the -a option;
    4. clojure.test, with quickie (and possibly other) plugins.

I used to use Expectations; then, for a long time, I liked Midje for its rich DSL and the ability to develop functionality bottom-up or top-down. Now I pretty much just use Speclj, because its autotest reloader is the most reliable and trouble-free.
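As a concrete sketch (the project and namespace names here are hypothetical, not from this post), a minimal Speclj spec looks like this:

```clojure
;; spec/myapp/core_spec.clj -- hypothetical example spec
(ns myapp.core-spec
  (:require [speclj.core :refer :all]
            [myapp.core :refer [add]]))

(describe "add"
  (it "sums two numbers"
    (should= 5 (add 2 3))))

(run-specs)
```

With the speclj plugin in project.clj, `lein spec -a` starts the autorunner, which watches the source and spec directories and reruns the affected specs on every save.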

(Earlier versions of this post had details for setting up Midje, now omitted — see the Speclj docs to get started.)

Running your tests hundreds of times per day not only reduces debugging time (you can generally figure out exactly what you broke much more easily when the deltas to the code since the last successful test run are small), but also builds knowledge of which parts of the code run slowly. If the tests start taking longer than a few seconds to run, I like to give some thought to what could be improved: either I am focusing my testing effort at too high a level (e.g., hitting the database, starting/stopping subprocesses, etc.), or I have made some questionable assumptions about performance somewhere.
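One pattern that helps with the first problem (purely a sketch; these names are made up for illustration) is pulling pure logic out from behind the I/O boundary, so the fast specs never touch the database:

```clojure
;; src/myapp/pricing.clj -- pure pricing logic: no I/O, so specs stay fast.
(ns myapp.pricing)

(defn discounted-total
  "Sum `prices` and apply `discount-rate` (a ratio keeps the math exact)."
  [prices discount-rate]
  (* (reduce + prices) (- 1 discount-rate)))

;; spec/myapp/pricing_spec.clj
(ns myapp.pricing-spec
  (:require [speclj.core :refer :all]
            [myapp.pricing :refer [discounted-total]]))

(describe "discounted-total"
  (it "applies the discount to the summed prices"
    (should= 90 (discounted-total [40 60] 1/10))))

(run-specs)
```

The database round trip itself can then be covered by a handful of slower integration tests rather than by every save-triggered run.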

Sometimes, however, despite best efforts, the tests still take longer than a few seconds to run. At that point I start annotating the slow tests with metadata indicating that they should be skipped during autotesting; I still run the full test suite before committing (usually) and before pushing to master (always). In this way I continue to get as much feedback in real time as possible, which helps me develop quality code efficiently. The exact mechanism for skipping slow tests depends on the test library you’re using; I haven’t had to do it with Speclj yet.
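With clojure.test and Leiningen, for example, one common mechanism (sketched here with arbitrary selector and test names) is var metadata plus test selectors:

```clojure
;; test/myapp/integration_test.clj
(ns myapp.integration-test
  (:require [clojure.test :refer [deftest is]]))

;; ^:slow is ordinary metadata on the test var; the keyword is arbitrary.
(deftest ^:slow round-trips-through-the-database
  (is true)) ; placeholder for a slow, I/O-heavy assertion
```

Then, in project.clj, test selectors decide which tag set each invocation runs:

```clojure
;; project.clj (excerpt)
:test-selectors {:default (complement :slow) ; `lein test` skips slow tests
                 :slow    :slow}             ; `lein test :slow` runs only them
```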

In the next post, we’ll switch gears and talk about literate programming with Marginalia.
