DRMacIver's Notebook

Easy parallel test-case reduction

As part of my current work on Hypothesis I’m redesigning the reducer to be very fine-grained. Rather than having coarse reduction passes that may each run many operations, each shrink pass is now explicitly written as a function that generates a set of small reduction steps.

Each of these steps can be run on any test case, and is designed to run only a very small number of operations: typically the steps are adaptive, running \(O(1)\) test invocations if they make no progress and \(O(\log(n))\) if they do.

For example, instead of a greedy pass that tries deleting each of \(N\) elements in turn, the pass generates \(N\) steps, where step \(i\) tries deleting the element at position \(i\) (actually a contiguous region around that element, which is where the adaptiveness comes in).

The idea is that, logically, the original coarse-grained shrink pass is equivalent to generating all of the steps and then running them in turn.
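To make this concrete, here is a rough sketch of what such a deletion pass might look like. This is not Hypothesis’s actual implementation: `deletion_steps`, `adaptive_delete` and the `is_interesting` predicate are hypothetical names, the test case is assumed to be a plain sequence, and the deleted region grows forward from position \(i\) rather than symmetrically around it, to keep the sketch short.

```python
from functools import partial


def deletion_steps(n_elements):
    """One step per index: step i tries to delete a contiguous region
    starting at position i of whatever the current best test case is."""
    for i in range(n_elements):
        yield partial(adaptive_delete, i)


def adaptive_delete(i, test_case, is_interesting):
    """Delete elements starting at position i, doubling the amount deleted
    for as long as we keep succeeding. Costs O(1) invocations if the first
    attempt fails and O(log n) if it succeeds."""
    if i >= len(test_case):
        return test_case  # the test case shrank underneath us; nothing to do
    best = test_case
    k = 1
    while i + k <= len(best):
        attempt = best[:i] + best[i + k:]
        if not is_interesting(attempt):
            break
        best = attempt
        k *= 2  # adaptively widen the region we delete
    return best
```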

The reason for doing it this way is twofold:

  1. It makes it easy to run the operations from all reduction passes in a random order, which is often useful.
  2. It lets you interleave the operations of two different reduction passes (see the sketch after this list). This is particularly useful because reduction passes often get “stuck”, and having other passes running at the same time as them means that you make progress anyway. That reduces the amount of work the stuck pass has to do, so even if you run it to completion it takes less time than it would have if you’d done it all at once.
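As a quick sketch of point 2 (and point 1): once every pass is just a source of steps, interleaving them is little more than a shuffle. Again the names here, and the step signatures carried over from the sketch above, are hypothetical rather than Hypothesis’s real API.

```python
import random


def run_interleaved(passes, test_case, is_interesting, rng=random):
    """Gather the steps from every pass, shuffle them together, and run
    them in that order, always starting each step from the current best."""
    steps = [step for make_steps in passes for step in make_steps(len(test_case))]
    rng.shuffle(steps)
    best = test_case
    for step in steps:
        best = step(best, is_interesting)
    return best
```

Running the steps of a single pass in their original order recovers the sequential behaviour; the shuffle is what lets a stuck pass’s steps be diluted by steps from passes that are still making progress.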

However, the thing I noticed earlier is that it also makes it very easy to run the operations in parallel!

The way this works is very simple (a code sketch follows the list):

  1. Maintain a single globally best known result.
  2. In some arbitrary order run each of the shrink steps with as much parallelism as you like.
  3. Each step takes the globally best known result at the beginning of its run, runs some test invocations, and possibly finds a better test case.
  4. If a step does find a better test case, it attempts to atomically update the globally best test case as follows:
    1. If the global best hasn’t changed since the start of the step’s run, just update it, no problem.
    2. If it has changed since the start of the run, but the value it currently holds is worse than the one the step found, still just update it.
    3. If it has changed since the start of the run and the new best value is better than the one the step found, restart the current shrink step (which has just shown that it is likely worth running) on the new global best.
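Here is a minimal sketch of that update loop. It assumes a shared lock rather than a true lock-free compare-and-swap, a `sort_key` giving a total order in which smaller means more reduced, and steps with the signature from the sketches above; `ParallelReducer` and friends are hypothetical names, not Hypothesis’s API.

```python
import threading
from concurrent.futures import ThreadPoolExecutor


class ParallelReducer:
    def __init__(self, initial, is_interesting, sort_key):
        self.best = initial
        self.is_interesting = is_interesting
        self.sort_key = sort_key  # smaller key == more reduced
        self.lock = threading.Lock()

    def run_step(self, step):
        while True:
            with self.lock:
                start = self.best
            candidate = step(start, self.is_interesting)
            if self.sort_key(candidate) >= self.sort_key(start):
                return  # no progress; nothing to publish
            with self.lock:
                if self.best is start:
                    # 1. Global best unchanged: just install our result.
                    self.best = candidate
                    return
                if self.sort_key(candidate) < self.sort_key(self.best):
                    # 2. It changed, but ours is still better: install ours.
                    self.best = candidate
                    return
                # 3. Someone found something better while we were running:
                #    fall through and rerun this step on the new global best.

    def run(self, steps, max_workers=8):
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            list(pool.map(self.run_step, steps))
        return self.best
```

In CPython this only buys real parallelism if the interestingness test spends its time outside the GIL (e.g. waiting on a subprocess); otherwise a process pool would be the natural substitute.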

In the case where the test case is already fully reduced, this is embarrassingly parallel. In the case where everything makes progress, this is quite heavily contended, but also everything is making progress, which tends to be the fast path for a test-case reducer. It’s probably worth scaling back the concurrency in this case.

My suspicion is that in most cases this will run quite close to the embarrassingly parallel version - prior art on parallelising test-case reducers (e.g. C-Reduce, halfempty) bases its concurrency on the assumption that most attempted reductions will fail to make progress.

One thing that is worth noting is that this parallel algorithm can cause some “dropped” work, in the following sense: A and B run in parallel and both succeed, but A completes before B, and B’s update overwrites A’s work. If A were run again, starting from the new best, it would make further progress. Fortunately this is fine, because in test-case reduction we iterate to a fixed point, so anything we drop will get an opportunity to run the next time: this kind of dropped work only occurs when some step has succeeded, and whenever we succeed at making progress we know we’ll get another go.
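To spell out why the dropped work comes back, here is a sketch of the outer fixed-point loop, in terms of the hypothetical `ParallelReducer` and pass helpers above: every round reruns all of the steps, so a step whose improvement was overwritten simply gets to run again in the next round.

```python
def reduce_to_fixed_point(initial, is_interesting, sort_key, passes):
    """Run full rounds of all steps until a round makes no progress."""
    best = initial
    while True:
        reducer = ParallelReducer(best, is_interesting, sort_key)
        steps = [s for make_steps in passes for s in make_steps(len(best))]
        result = reducer.run(steps)
        if sort_key(result) >= sort_key(best):
            return best  # no progress this round: we are at a fixed point
        best = result
```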