DRMacIver's Notebook
Some Theory of Test-Case Reduction
There’s a bunch of test-case reduction theory that mostly lives in my head and in currently-abandoned paper drafts. I’m doing a week of PhD-related writing, so I thought it would be good to write up a dump of these ideas here.
A test-case reducer is designed to take an interesting test case and make it “simpler” in some sense while retaining the property of interest.
Formally, a test-case reducer on some set of test-cases \(T\) is a function \(r : \{(U, x) : x \in U \subseteq T\} \to T\) such that \(r(U, x) \in U\). The set \(\{(U, x) : x \in U \subseteq T\}\) is called the set of reduction problems on \(T\).
The first argument to the reducer is the interestingness test (the set of interesting test cases) and the second is some interesting test case that we want to reduce.
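To make this concrete, here is a minimal sketch in Python. None of the names come from any particular tool: test cases are strings, the set \(U\) is represented by a membership predicate, and `delete_lines` is a toy reducer whose input is assumed to already be interesting.

```python
from typing import Callable

# Illustrative types: a test case is a string, the interestingness
# test U is represented by a membership predicate, and a reducer maps
# a reduction problem (U, x) to an interesting test case.
TestCase = str
Interesting = Callable[[TestCase], bool]
Reducer = Callable[[Interesting, TestCase], TestCase]


def delete_lines(is_interesting: Interesting, x: TestCase) -> TestCase:
    """Toy reducer: greedily delete lines, keeping each deletion only
    if the result is still interesting. Because it only ever moves to
    interesting test cases, r(U, x) is in U whenever x is."""
    lines = x.splitlines(keepends=True)
    i = 0
    while i < len(lines):
        candidate = lines[:i] + lines[i + 1:]
        if is_interesting("".join(candidate)):
            lines = candidate
        else:
            i += 1
    return "".join(lines)
```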
A partial order \(\preceq\) is a reduction order for \(r\) (or \(r\) is a reducer for \(\preceq\)) if \(r(U, x) \preceq x\). The reduction order captures what we mean by saying that a reducer makes the test case “simpler”. A test case is simpler if it’s smaller in the reduction order.
(It’s hopefully fairly uncontroversial that the “simpler” relationship should be a partial order - this is basically to rule out cycles where sometimes \(x\) reduces to \(y\) and sometimes \(y\) reduces to \(x\) depending on the interestingness test. It’s easy to construct reducers that do have such cycles, so not every reducer has a reduction order, but generally I think those reducers are badly designed.)
As I mentioned in Basic Reducer Design, I generally prefer the reduction order to be a total order. There’s no reason it has to be, but there’s also no reason for it not to be - any partial order can be extended to a total order, and if a function is a reducer for some partial order then it’s a reducer for any extension of it.
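A common concrete choice of total reduction order is shortlex: shorter test cases are simpler, with ties broken lexicographically. A minimal sketch (the names here are mine, not from any library):

```python
def shortlex_key(x: str) -> tuple[int, str]:
    # Shortlex: compare by length first, then lexicographically.
    # This is a total order, so any two test cases are comparable.
    return (len(x), x)

# e.g. picking the simplest of several interesting candidates:
# simplest = min(candidates, key=shortlex_key)
```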
One reason to prefer a total order is that if we want a test-case reducer to be normalizing (in the sense Groce et al. propose in One test to rule them all, that is, its output does not depend on the initial test case) then \(\preceq\) must be a total order, because for any \(x, y\) we must have \(r(\{x, y\}, \cdot) = x\) or \(r(\{x, y\}, \cdot) = y\), so either \(x \preceq y\) or \(y \preceq x\).
One nice thing about reduction orders is that they guarantee that reducers compose nicely without undoing each other’s work. Given reducers \(r_1, r_2\) we can define a composite reducer straightforwardly as \((r_1 \cdot r_2)(U, x) = r_1(U, r_2(U, x))\) - we apply \(r_2\) then we apply \(r_1\). If \(\preceq\) is a reduction order for both of \(r_1\) and \(r_2\) then it also is for \(r_1 \cdot r_2\).
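Continuing the earlier sketch, composition is a one-liner:

```python
def compose(r1: Reducer, r2: Reducer) -> Reducer:
    """(r1 . r2)(U, x) = r1(U, r2(U, x)): run r2 first, then r1."""
    def composed(is_interesting: Interesting, x: TestCase) -> TestCase:
        return r1(is_interesting, r2(is_interesting, x))
    return composed
```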
Without that common reduction order, you might end up in an annoying situation where \(r_2(U, x) = y\) and \(r_1(U, y) = x\). Note that you can have this even if \(r_1\) and \(r_2\) each have a reduction order but it’s not the same one.
One thing to consider is when a reducer is “done”. We say a reduction problem \((U, x)\) is fully reduced for \(r\) if \(r(U, x) = x\), and a reducer is idempotent if \((U, r(U, x))\) is fully reduced - i.e. running the reducer twice doesn’t do any more than running the reducer once.
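In terms of the sketch, these definitions translate directly (the helper names are hypothetical):

```python
def is_fully_reduced(r: Reducer, is_interesting: Interesting, x: TestCase) -> bool:
    # (U, x) is fully reduced for r iff r leaves x unchanged.
    return r(is_interesting, x) == x


def is_idempotent_on(r: Reducer, is_interesting: Interesting, x: TestCase) -> bool:
    # r is idempotent on this problem if a single run already
    # reaches a fixed point.
    return is_fully_reduced(r, is_interesting, r(is_interesting, x))
```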
Another nice feature of reduction orders is that they guarantee that you can always iterate a reducer to an idempotent fixed point (assuming the reduction order is reasonable: the technical condition we require here is well-foundedness, but in basically every case of interest the order has the much stronger property that \(\{y : y \preceq x\}\) is finite for any \(x\)). Given a reducer \(r\) you can define the reducer \(r^+\) where \(r^+(U, x)\) is defined by iterating the sequence \(x_0 = x\), \(x_{n + 1} = r(U, x_n)\) until it reaches a fixed point. Without a reduction order this sequence can cycle, so the iteration may never terminate.
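A sketch of the construction, again in terms of the hypothetical `Reducer` type above; the loop terminates precisely because a well-founded reduction order rules out cycles and infinite descent:

```python
def fixate(r: Reducer) -> Reducer:
    """The fixation r+ of r: iterate r until a fixed point is reached."""
    def fixed(is_interesting: Interesting, x: TestCase) -> TestCase:
        while True:
            y = r(is_interesting, x)
            if y == x:
                # (U, x) is now fully reduced for r.
                return x
            # With a well-founded reduction order, y is strictly
            # simpler than x, so this loop cannot run forever.
            x = y
    return fixed
```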
You can also think of \(r^+\) as the fixed point of \(r\) under composition, i.e. \(r^+ = r \cdot r^+\), so we call \(r^+\) the fixation of \(r\).
By the way, these two constructs of reducer composition and fixation are not particularly abstract but are actually how reducers are built in practice. You can think of your typical reducer as \((r_1 \cdot \ldots \cdot r_n)^+\) where the \(r_i\) are individual simpler reducers that are typically called reduction passes.
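Putting the two constructions together gives this typical shape (a sketch, using `compose`-style ordering and the `fixate` defined above; the passes themselves are stand-ins for real reduction passes like line or token deletion):

```python
def make_reducer(passes: list[Reducer]) -> Reducer:
    # Run each pass once, in order, then iterate the whole pipeline
    # to fixation - rather than fixating each pass individually.
    def pipeline(is_interesting: Interesting, x: TestCase) -> TestCase:
        for p in passes:
            x = p(is_interesting, x)
        return x
    return fixate(pipeline)
```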
Generally speaking, for a reducer we’re most interested in the behaviour of \(r^+\) - what happens when you run the reducer to fixation - but because of this pass structure it’s often useful to consider reducers that are not necessarily run to fixation. Anecdotally (and for solid heuristic reasons) it’s generally better not to run each pass to fixation before running the next one: after a pass has run once, the test case is typically “pretty close” to a fixed point of that pass, so immediately running it again is likely to be wasted effort, while once other passes have run, the test case may have been perturbed enough that rerunning it does useful work.
Reasoning in terms of the fully reduced problems of a reducer also gives us another useful notion, which is that of reducer strength. We can say \(r_1\) is at least as strong as \(r_2\) if every reduction problem that is fully reduced for \(r_1\) is also fully reduced for \(r_2\) (and stronger if also there are some reduction problems that are fully reduced for \(r_2\) that are not for \(r_1\)).
This is a slightly weaker notion of strength than some we might consider. It’s possible for \(r_1\) to be stronger than \(r_2\) but to sometimes have \(r_2(U, x) \prec r_1(U, x)\), because \(r_1\) took a path that causes it to get stuck at a worse point.
Under this notion of strength, \(r_1 \cdot r_2\) is at least as strong as each of \(r_1\) and \(r_2\), and \(r^+\) is exactly as strong as \(r\).