Three key problems with Von Neumann-Morgenstern Utility Theory
People semi-routinely argue with me about Von Neumann-Morgenstern Utility Theory. This rarely goes well - it doesn't change any minds and I don't enjoy it - so I am writing up what I think the core of my objections is, once and for all, so that I can stop having this argument.
I don’t really expect it to convince you, but also I’m not going to argue with you on this subject until you’ve read it and can tell me a good reason why you disagree.
I am not going to argue that VNM theory is never useful, nor that utilities in general are not. I am going to argue that VNM theory is an overly idealised model of agent behaviour which fails in crucial ways if you try to apply it to a general-purpose agent.
A brief recap
VNM theory works as follows: you have a finite set of outcomes \(O_1, \ldots, O_n\); you are given lotteries, each a precisely known probability mixture over these outcomes; and you can make binary choices between lotteries, where given any two lotteries you either strictly prefer one of them or are indifferent between the two.
Under some allegedly reasonable assumptions, for any of these outcomes \(O_i\), you should be indifferent between it and some lottery \(p B + (1 - p) W\), where \(B\) is the (possibly joint) best outcome of the \(O_i\) and \(W\) is the (possibly joint) worst outcome. This \(p\) can then be considered the utility of \(O_i\) and under a bunch more (mostly actually reasonable) assumptions you should be making your choice to maximise the expected utility of lotteries.
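To make the recap concrete, here is a minimal Python sketch of the resulting decision rule. The outcome names and utility numbers are invented purely for illustration:

```python
# A lottery maps each outcome name to its probability (summing to 1).
Lottery = dict[str, float]

# Hypothetical utilities, normalised as in the VNM construction so that
# the worst outcome has utility 0 and the best has utility 1.
UTILITIES = {"worst": 0.0, "meh": 0.4, "good": 0.7, "best": 1.0}

def expected_utility(lottery: Lottery) -> float:
    """Expected utility of a lottery under the fixed utility table."""
    return sum(p * UTILITIES[outcome] for outcome, p in lottery.items())

def choose(l1: Lottery, l2: Lottery) -> Lottery:
    """A VNM agent picks whichever lottery has the higher expected
    utility, breaking exact ties arbitrarily."""
    return l1 if expected_utility(l1) >= expected_utility(l2) else l2

# The defining indifference: an outcome with utility 0.4 has the same
# expected utility as the lottery 0.4 * best + 0.6 * worst.
assert expected_utility({"meh": 1.0}) == expected_utility({"best": 0.4, "worst": 0.6})
```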
Fredkin’s paradox, the doom of VNM theory
I’ve written about two key issues with this setup before:
- Physical agents can’t implement discontinuous functions.
- You may have to do an infinite amount of computation to get that utility.
Both of these are variations on a key problem with VNM theory: it ignores the cost of decision making. This shows up as Fredkin's paradox - the difficulty of deciding between two very similar options is much greater than when one is clearly better than the other.
The first part is about physical difficulties with measurement - you can only know the probabilities up to some finite precision. VNM theory handwaves this away by saying that the probabilities are perfectly known, but this doesn't help, because it just moves the problem into the computational realm and requires you to be able to solve the halting problem. e.g. choose between \(L_1 = p B + (1-p) W\) and \(L_2 = q B + (1-q) W\), where the digits of \(p\) are \(0\) until machine \(M_1\) halts and \(1\) afterwards, and \(q\) is defined the same way for machine \(M_2\).
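Here is a rough sketch of why finite computation only ever pins such a \(p\) down to finite precision. The `machine_halted_by` predicate stands in for simulating the machine for a given number of steps; everything here is illustrative:

```python
def bounds_on_p(machine_halted_by, n: int) -> tuple[float, float]:
    """Bounds on p after simulating the machine for n steps.

    p has decimal digit 0 in every place until the machine halts, and
    digit 1 in every place afterwards.  If the machine halts by step k
    we know p exactly; if it hasn't halted after n steps, all we know
    is 0 <= p < 10**-n.  No finite amount of simulation distinguishes
    "p = 0" (never halts) from "p is tiny but positive" (halts later).
    """
    for k in range(1, n + 1):
        if machine_halted_by(k):
            # Digits are 1 from position k onwards: sum of 10**-i for i >= k.
            p = 10 ** -(k - 1) / 9
            return (p, p)
    return (0.0, 10.0 ** -n)
```

Any decision procedure built on top of this can only ever consume the finite-precision bounds, which is exactly the convergent-sequence problem described next.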
The second demonstrates that what you get out of the VNM theorem is not a utility function. It is an algorithm that produces a sequence converging to a utility function, and near the decision boundary you cannot recreate even the original decision procedure from that sequence without being able to take the limit - which requires running an infinite computation, again amounting to solving the halting problem.
Both of these have the same core problem: In order to be able to identify the decision boundary that is essential for the VNM theory to work, you have to be omniscient. If you accept the flawed premise of knowing the probabilities with perfect knowledge, you can get away with only logical omniscience, but in order to know the probabilities with perfect knowledge you also have to be actually omniscient and not constrained by mere mortal abilities like having to observe the world to acquire knowledge of it.
Is this a nitpicky objection? Kind of, but I think it's important, for two reasons:
- It does fairly directly demonstrate why the central thing required for VNM theory to work cannot possibly work.
- It does so in a way that shows that the model of VNM theory fails precisely because it fails to capture the behaviour of actually reasonable agents.
The correct thing to do with Fredkin's paradox is to not care about it. An actual agent faced with anything like the VNM setup will choose non-deterministically in the region of indifference, with much of the nondeterminism coming from imprecision in their measurements. When your two alternatives are very close together, your preference will be more or less a coin toss; when they are very far apart it will be close to (or actually) deterministic; but the continuity argument shows that you must pass through intermediate values - e.g. sometimes you'll pick \(A\) over \(B\) about 60% of the time.
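As a sketch of what that looks like, here is one possible smoothed choice rule; the logistic shape and the noise scale are my assumptions, chosen only to show the qualitative behaviour:

```python
import math
import random

def noisy_choice(u_a: float, u_b: float, noise: float = 0.05) -> str:
    """Choose between A and B with probability depending smoothly on the
    utility gap: near-coin-toss when nearly tied, near-deterministic
    when one option is clearly better."""
    p_a = 1.0 / (1.0 + math.exp(-(u_a - u_b) / noise))
    return "A" if random.random() < p_a else "B"

# A gap of 0.02 at this noise level picks A roughly 60% of the time;
# a gap of 0.5 picks A essentially always.
```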
Bounded and known preferences
The VNM setup uses the finite number of outcomes to ensure that there is a best and a worst outcome, and calculates your utility in terms of those. Additionally it assumes that you know exactly how bad literally the worst thing that can possibly happen is, and how good literally the best thing that can possibly happen is.
This means that VNM theory can only be useful in very well understood, bounded domains. Getting to the point where you can even start running the computation for the utility of something requires \(O(n)\) work to find the best and worst outcomes, you need to be familiar enough with all the options to have real preferences over them, and there just can't be that many options.
For example, VNM theory is very poorly suited to dealing with anything to do with money, which has unbounded utility. This is pretty impressive going for a normative utility theory. Good job failing to handle literally the most utility-like problem.
You could argue that there is a maximum amount of money that matters, and for many bounded-domain cases that's true and reasonable. But if that maximum amount is "literally the entire world economy" then, even if you treat this as the best possible outcome, you have really very little conception of how good it is, and cannot plausibly acquire one.
Even if you could form a reasonable opinion, it's worth bearing in mind that you are now multiplying very large numbers by very small numbers. This makes VNM utility extremely impractical, because you've basically reduced choosing what to have for lunch to the problem of Pascal's Wager.
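To put some illustrative numbers on it: if the best outcome is the whole world economy (call it \(10^{14}\) dollars) and a nice lunch is worth \(10\) dollars, then on the normalised VNM scale lunch has utility on the order of \(10^{-13}\), and any stray \(10^{-12}\) chance of the best outcome in either lottery swamps the entire lunch decision.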
Updating preferences as a result of decisions
In making its reasonableness assumptions, VNM theory equates two things that are distinct in any real agent: instantaneous preferences and actual decisions over time. For example, it uses the possibility of being Dutch booked as an argument against intransitive preferences.
But Dutch booking is not a thing you can do to preferences, it's a thing you can do to decisions, and you can only use Dutch books to extract infinite money from intransitive decisions if the agent is forced to make the same decision every time, i.e. can't learn from experience.
(Dutch booking is more complicated with non-deterministic decisions, but a broadly similar argument applies, where instead of requiring transitivity you require certain bounds on sums over cycles. We're not going to worry about that here.)
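For concreteness, here is a minimal sketch of the money pump behind the Dutch book argument. The `prefers` oracle and the fee are invented, and the pump only works because the agent's preferences never update:

```python
def money_pump(prefers, holding: str, fee: float, rounds: int) -> float:
    """Charge a small fee each time the agent is happy to trade 'up'.
    With fixed cyclic preferences the agent keeps paying forever; an
    agent that updated its preferences would simply stop trading."""
    extracted = 0.0
    for _ in range(rounds):
        for candidate in "ABC":
            if candidate != holding and prefers(candidate, holding):
                holding = candidate   # the agent trades up...
                extracted += fee      # ...and pays for the privilege
                break
    return extracted

# Fixed cyclic preferences: A beats B, B beats C, C beats A.
cycle = {("A", "B"), ("B", "C"), ("C", "A")}
print(money_pump(lambda x, y: (x, y) in cycle, "A", fee=0.01, rounds=1000))
# 10.0 - and it grows without bound as rounds does.
```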
One easy way to learn from experience is to stop accepting offers from people who are trying to Dutch book you. But even without that, many patterns of intransitive decision making exhibited during learning are not susceptible to Dutch booking, because they converge over time to transitive ones. e.g. an agent playing a multi-armed bandit problem (this again involves imperfect knowledge of probabilities, which VNM theory assumes away, so isn't strictly applicable, but it's a good example) will regularly swap its preferences between arms as it plays, exhibiting many cycles.
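A sketch of that bandit behaviour (the arm means, noise model, and epsilon are all invented for illustration):

```python
import random

def count_preference_reversals(true_means, pulls=2000, eps=0.1, seed=0):
    """Run an epsilon-greedy agent on a Gaussian bandit and count how
    often its 'preferred' arm (highest estimated mean) changes.  The
    reversals look like intransitive churn, but they shrink over time
    as the estimates converge, so the agent can't be pumped forever."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    reversals, last_best = 0, None
    for _ in range(pulls):
        if rng.random() < eps:
            arm = rng.randrange(len(true_means))   # explore
        else:
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        best = max(range(len(true_means)), key=lambda a: estimates[a])
        if last_best is not None and best != last_best:
            reversals += 1
        last_best = best
    return reversals

print(count_preference_reversals([0.5, 0.6]))  # typically dozens of reversals
```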
One reason these differ is assumed away by VNM theory: that you know the probabilities. In actual decision making, you learn the probabilities through making choices which refine your world model. e.g. a multi-armed bandit algorithm, even though it has a perfectly well defined utility function, routinely reverses its decisions as it learns more about the distribution of results of pulling a particular lever.
Even with known probabilities, real agents will change their minds, because often the only way to know how to value something is to try it. If you offer me a durian, I'm probably going to say yes. If you offer me a durian a second time, I'm probably going to say no. This isn't inconsistency, it's just learning from experience.
Another way decisions can change is because the choices you make have actual consequences. For example, consider a betting agent. Given a choice between two monetary lotteries, one of which satisfies the Kelly criterion and one of which doesn't, they should pick the one that does - in general, the one that maximises the expected logarithm of their bankroll. But after the lottery is run their bankroll will be different, and so their future preferences will change.
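A small sketch of that last point, using the standard Kelly formula (the win probability and odds are invented):

```python
def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Standard Kelly stake as a fraction of bankroll: f* = (bp - q) / b
    for a bet paying net_odds-to-1 with win probability p_win."""
    return (net_odds * p_win - (1.0 - p_win)) / net_odds

bankroll = 100.0
f = kelly_fraction(p_win=0.55, net_odds=1.0)  # f* = 0.10
print(f * bankroll)  # stake 10.0 today...

# ...but once the bet resolves the bankroll changes, so tomorrow's
# stake on the *same* lottery differs, even though the underlying
# preference (maximise E[log bankroll]) never changed.
bankroll += f * bankroll  # suppose we win at even odds
print(kelly_fraction(0.55, 1.0) * bankroll)  # 11.0, not 10.0
```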
Now, I already know the objection: VNM theory is about states of the world, not individual decisions. But this objection falls flat for two key reasons:
- It justifies the reasonableness of its assumptions through patterns of decision making, not the behaviour of preferences, because preferences that have no consequences for decisions are unconstrained - anything goes.
- The number of possible states of the world is very large, which runs straight into the earlier problem: VNM theory is poorly suited to large and unbounded domains, so it handles this badly too.
Where does VNM theory work?
Basically it doesn't. VNM theory is bad even in situations where utility theory is good. It is a poor normative argument that people have taken far more seriously than it deserves, with most of its foundational assumptions failing to hold for real agents.
That doesn’t mean utility theory is never useful. In general, people will converge over time to some sort of approximately utility maximising behaviour for some utility function in small domains that can be well understood and that have strong incentives for maximisation. Even there you will tend to run into the Fredkin’s paradox problem, with significantly more nondeterminism at the decision margins than VNM theory would suggest.
It does mean, however, that in large, unbounded, high-uncertainty environments, with agents who learn their preferences over time and actually affect the state of the world, VNM theory cannot possibly predict their behaviour, and its claim to normative force over their behaviour doesn't hold up.