DRMacIver's Notebook
A round-tripping problem with expected utility theory
For reasons, I’ve been thinking about subjective expected utility theory and the Von Neumann-Morgenstern utility theorem (“the VNM theorem” for short) recently. I have some significant philosophical objections to subjective expected utility and I’m going to explore a few of them in a series of notebook posts. This post is about an interesting thing I noticed while trying to articulate those objections. I don’t necessarily expect it to persuade anyone of anything.
The problem is this: The VNM theorem loses information. You cannot actually reconstruct the original decision procedure from the output of the VNM theorem, because the VNM theorem’s output isn’t quite what it’s usually framed as.
The VNM theorem is as follows: Suppose you have a finite set of outcomes \(\mathcal{O} = \{O_1, \ldots, O_n\}\). A lottery \(L\) is a random variable taking values in \(\mathcal{O}\).
An agent has preferences over lotteries, expressed by a total preorder \(\preceq\) together with its induced indifference relation \(\sim\). That is, \(L \preceq L'\) means that \(L'\) is at least as good as \(L\), and \(L \sim L'\) - which holds when both \(L \preceq L'\) and \(L' \preceq L\) - means the agent is indifferent between them.
The theorem states that, under certain reasonableness conditions (the VNM axioms), this preorder must be expressible in terms of a utility function \(u: \mathcal{O} \to \mathbb{R}\), such that \(L \preceq L'\) if and only if \(E(u(L)) \leq E(u(L'))\). That is, we prefer a lottery if it has a higher expected utility, and are indifferent between two lotteries with the same expected utility.
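To make the representation concrete, here is a minimal sketch in Python. The outcome set and utility values are invented purely for illustration; nothing about them comes from the theorem itself:

```python
# A lottery is a list of (probability, outcome) pairs with the
# probabilities summing to 1. The VNM theorem says a preference
# order satisfying its axioms is represented by some utility
# function; this particular one is made up for the example.
utility = {"worst": 0.0, "middling": 0.7, "best": 1.0}

def expected_utility(lottery):
    """E(u(L)) for a lottery L under the utility function above."""
    return sum(p * utility[outcome] for p, outcome in lottery)

def weakly_prefers(l1, l2):
    """The represented preference: L ⪯ L' iff E(u(L)) <= E(u(L'))."""
    return expected_utility(l1) <= expected_utility(l2)

# A 50/50 gamble between best and worst has expected utility 0.5,
# so a guaranteed middling outcome is weakly preferred to it:
gamble = [(0.5, "worst"), (0.5, "best")]
sure_thing = [(1.0, "middling")]
assert weakly_prefers(gamble, sure_thing)
```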
It’s instructive to sketch the proof of this theorem:
Because we have a total preorder over finitely many outcomes, there is a best outcome and a worst outcome. Call them \(B\) and \(W\) respectively. If \(B \sim W\) then we’re necessarily indifferent between all lotteries, so assume \(W \prec B\). Now, consider the family of lotteries \(M_p\) such that \(M_p\) is \(B\) with probability \(p\) and \(W\) with probability \(1 - p\). So \(M_1 = B\) and \(M_0 = W\).
Let \(L\) be some lottery. We must have \(M_0 = W \preceq L \preceq B = M_1\). Therefore (by the hypotheses of the theorem) we can find some \(p\) in the middle such that \(M_p \sim L\). Some computation shows that this number \(p\) behaves like an expected value.
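To gesture at that computation (this is the standard calibration argument, and it leans on the independence axiom): write \(\ell_i\) for the probability that \(L\) assigns to \(O_i\), and let \(u_i\) be the value with \(O_i \sim M_{u_i}\). Substituting each outcome of \(L\) for its equivalent mixture and then collapsing the resulting compound lottery gives

\[L \sim \sum_i \ell_i M_{u_i} \sim M_{\sum_i \ell_i u_i},\]

so the calibration value of \(L\) is \(\sum_i \ell_i u_i\), which is exactly the expected value of \(u\) under \(L\).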
Thus the VNM theorem gives us the expected utility of an arbitrary lottery. Right?
Except, it doesn’t. It guarantees that this expected utility exists, but it does not actually give us a way to compute it. The hypothesis of the VNM theorem is that there is some point at which the switchover to indifference happens, but we are not given an oracle which computes that probability, only a guarantee of its existence.
Instead, what we have to do is turn our oracle for preferences into a sort of mathematical root-finding algorithm that produces increasingly fine grained approximations to \(p\). We construct a sequence of approximations \(q_k \leq p \leq r_k\) by setting \(q_0 = 0\), \(r_0 = 1\). Then, to calculate \(q_{k+1}, r_{k+1}\), we set \(s_k = \frac{q_k + r_k}{2}\) and ask whether \(M_{s_k} \preceq L\). If yes, \(p\) lies in the upper half of the interval, so we set \(q_{k+1} = s_k, r_{k+1} = r_k\); if no, we set \(q_{k+1} = q_k, r_{k+1} = s_k\).
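As a sketch in code (the oracle and its name are mine, standing in for the agent’s \(\preceq\), not anything supplied by the theorem):

```python
def approximate_utility(weakly_prefers_mixture, steps):
    """Bisect for the calibration probability p with M_p ~ L.

    weakly_prefers_mixture(s) is a hypothetical oracle answering
    whether M_s ⪯ L. Returns bounds q <= p <= r after `steps`
    rounds of halving the interval."""
    q, r = 0.0, 1.0
    for _ in range(steps):
        s = (q + r) / 2
        if weakly_prefers_mixture(s):
            q = s  # M_s ⪯ L, so p is in the upper half [s, r]
        else:
            r = s  # L ≺ M_s, so p is in the lower half [q, s]
    return q, r

# For a simulated agent, "M_s ⪯ L" is equivalent to "s <= p" for the
# true calibration probability p, here 1/3:
q, r = approximate_utility(lambda s: s <= 1 / 3, steps=20)
assert q < 1 / 3 < r  # the bounds tighten but never pin p down exactly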
There is no guarantee that this process will ever terminate - the values of \(p\) for which it does are precisely the dyadic rationals, numbers of the form an integer divided by a power of two - so for example if \(p = \frac{1}{3}\) it will go on forever, producing increasingly good approximations. You can design other divide and conquer schemes - there’s nothing special about this particular one - but they will all have the problem that at some utility values they never terminate.
(Honesty compels me to admit that this problem goes away if you insist that probabilities and utilities have to be rational numbers, because you can design a divide and conquer scheme that simply enumerates all of the rationals between the current bounds, as sketched below. For technical reasons that I will not go into here I find this a fundamentally unsatisfying conclusion, but the short version is that the real numbers are a better model of the problems of physical measurement than the rational numbers are.)
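For what it’s worth, here is a minimal sketch of that scheme, simplified from divide and conquer down to plain enumeration; `indifferent` is a hypothetical oracle for \(M_s \sim L\), which costs two queries to \(\preceq\):

```python
from fractions import Fraction
from itertools import count

def rational_utility(indifferent):
    """Enumerate the rationals in [0, 1] by increasing denominator,
    returning the first s with M_s ~ L. Terminates exactly when the
    true calibration probability is rational; otherwise runs forever."""
    for d in count(1):
        for n in range(d + 1):
            if indifferent(Fraction(n, d)):
                return Fraction(n, d)

# With a rational true utility this does terminate, unlike bisection:
assert rational_utility(lambda s: s == Fraction(1, 3)) == Fraction(1, 3)
```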
Which is fine, as far as it goes - in practical terms you will very rarely need to know your utility to the last decimal place - but the interesting thing to note is this: Given access to a procedure that provides these increasingly fine grained approximations to the expected utility, you cannot actually recreate the original decision procedure with a process that is guaranteed to terminate.
The problem is this: Suppose we have some procedure that computes a relation \(\preceq'\) from such converging approximate utilities. We want \(\preceq'\) to be identical to \(\preceq\), so let’s assume we’ve succeeded and see if this leads to a contradiction.
Let \(p\) be some number where the process doesn’t terminate and ask whether \(M_p \preceq' M_p\) (using the same \(M_p\) as above). If \(\preceq'\) agrees with \(\preceq\) we will get the correct answer that \(M_p \sim' M_p\).
Now, look at how many steps of the approximation you had to compute in order to get to that point, say \(k\). This gives us an interval \(q_k \leq p \leq r_k\).
Now, at this point we decided that \(M_p \sim' M_p\) based on no information other than that \(p \in [q_k, r_k]\). Therefore for any other \(s \in [q_k, r_k]\) with \(s \neq p\) we would also decide that \(M_p \sim' M_s\) (because nothing we have computed can distinguish \(p\) from \(s\)). But in our original relation we must have \(M_p \prec M_s\) or \(M_s \prec M_p\), because their utilities are \(p\) and \(s\) respectively, and these are distinct. Therefore we have failed to recreate \(\preceq\) from the output of the VNM theorem.
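In code, the failure mode looks like this: any attempt to decide preferences from the converging bounds alone (reusing the `approximate_utility` sketch from above) must loop forever on indifferent lotteries:

```python
def reconstructed_weakly_prefers(oracle1, oracle2):
    """Attempt to decide L1 ⪯' L2 from converging utility bounds alone.

    Refines both intervals until they separate. When the two lotteries
    have exactly equal utilities the intervals always overlap, so this
    loop never terminates - indifference is undecidable this way."""
    steps = 1
    while True:
        q1, r1 = approximate_utility(oracle1, steps)
        q2, r2 = approximate_utility(oracle2, steps)
        if r1 < q2:  # u(L1) is certainly below u(L2)
            return True
        if r2 < q1:  # u(L2) is certainly below u(L1)
            return False
        steps += 1  # bounds still overlap: refine and try again
```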
Why is this interesting?
Well, mostly it’s interesting because this looks exactly like my normal objection to the VNM theorem, which I usually justify on practical grounds: You run into Fredkin’s paradox (a term I’ve only recently learned - I’ve always thought of this as the Buridan’s ass problem), where the amount of work you have to do to decide goes to infinity as you approach points of indifference. From a practical viewpoint, where you have all sorts of measurement and approximation problems, this is (to me) obviously a problem with the theory, but it is generally handwaved away by the setup: The VNM theorem basically starts by saying “OK so first assume you’re omniscient…”, which is what allows it to assume that you have this decision procedure in the first place.
The interesting thing about this result is that the VNM theorem runs into this problem even on its own terms, without introducing any additional assumptions of physical reasonableness (unless you count “cannot run a literally infinite amount of computation” as one). The VNM theorem starts from an omniscient decision procedure and constructs a utility function, but you cannot run that process backwards from the utility function to such a decision procedure, because of precisely the sorts of limitations that meant you couldn’t really have constructed that procedure in the first place.