DRMacIver's Notebook

Dust Motes and Electric Plugs

In the last post I talked about incommensurability, how in general you can’t compare things and say that one is better than another.

This is absolutely true, but the reason (that context matters) is important: Without context, you can’t say one thing is better than another. With context, you’re probably going to have to choose. That choice doesn’t have to be stable, it doesn’t even have to be deterministic, but you’re still going to have to choose, because not choosing is often the same as choosing by default to go with the status quo. That may be fine sometimes, but often the status quo isn’t so great.

The Trolley Problem is very annoying, but one of the reasons it is very annoying is that it deliberately strips the decision making of context. Implicit in the trolley problem is that you can’t ask questions. Variations are used to tease out your moral intuitions (the fat man version, the surgeon version, etc.), but within the frame of a particular trolley problem you are not allowed to defer your decision making to context.

Nevertheless, in trolley-like situations you are going to have to choose. You have access to the context, so you don’t have to choose contextless, but you do have to choose.

Why the moral machine is a monster is a paper I really like which talks about the application of the trolley problem to self-driving cars and why you shouldn’t apply it: Software isn’t ethics, it’s policy, and if self-driving cars implement a version of the trolley problem, you need to consider not just the effect of the individual ethical choices, but also the way people shape their behaviour knowing that the cars implement those choices.

In general this is what policy is: Decisions that are made in the absence of a specific context, to be implemented in specific cases using context only to the degree that it is legible to the policy makers.

The safety in complex systems literature is a good source of evidence about how people behave in the presence of policy: they often violate it. Some of this is normalization of deviance, and some of this is because policy can never be sufficiently powerful and flexible to adapt to local context.

Nevertheless, sometimes you have to set policy.

This is particularly true with coordination problems. When it comes to walking you can rely on people deciding which side of the path to walk on, but with cars you actually want to set some policy as to which side of the road to drive on.

This is a harmless example because it doesn’t actually matter which side you pick, but there are plenty of less harmless ones where you’re forced to make hard trade offs.

Another annoying thought experiment is torture vs dust specks, which asks whether it is better to torture someone horribly for 50 years or to give each of an inconceivably large number of people (more than the number of atoms in the universe by some inconceivably large number of orders of magnitude) a speck of dust in their eye. Yudkowsky thinks the answer is obvious, and so do I: The answer is that the question is bad, and he should feel bad for posing it.

That being said, I reluctantly agree with what I presume is his position, that the torture is probably the preferable option, and we can see this with a (somewhat) less ridiculous variant: Would you rather one random person on earth have their eye painfully removed, or every single person on earth have a slightly annoying dust speck in their eye?

(Note: This is a question of axiology, not of ethics. I’m, for the moment, not asking you to choose policy or make a decision, I’m just asking which of these situations is better).

I think the answer is obvious: Removing a random eye is better.

Why?

There are currently about 7.5 billion people on the earth. Suppose each of those people were offered the following deal: touch their eye to remove a minor inconvenience, in exchange for a one in 7.5 billion chance of losing that eye. Would they take it?

Of course they would. People touch their eyes all the time; COVID-19 is teaching us just how often we do it. This has potentially bad consequences, such as disease transmission, and eyes can definitely get infected badly enough as a result of being touched that you lose them. It’s not very likely, but it doesn’t have to be very likely for this to matter: all we have to argue is that people can’t credibly believe the chance of something at least as bad as losing an eye happening as a result is less than one in 7.5 billion, and in reality the chance is significantly greater than that.
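To make the comparison concrete, here is a back-of-envelope sketch in Python. The per-touch risk figure is purely an assumed, illustrative number (I don’t know the real one); the point is only that it doesn’t take much for it to exceed the one in 7.5 billion threshold.

```python
# Back-of-envelope sketch: is the risk from touching your eye plausibly
# greater than one in 7.5 billion? The per-touch risk below is an assumed,
# illustrative number, not a measured one.

population = 7.5e9
threshold = 1 / population            # the risk level the argument needs to beat

assumed_per_touch_risk = 1e-8         # hypothetical chance, per touch, of harm
                                      # at least as bad as losing the eye

print(f"threshold risk:         {threshold:.1e}")
print(f"assumed per-touch risk: {assumed_per_touch_risk:.1e}")
print(f"ratio:                  {assumed_per_touch_risk / threshold:.0f}x the threshold")
```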

What does this have to do with policy?

Well, consider electrical plug design. A friend of a friend once pointed out that there’s an interesting trade off implicitly being made in the design of the UK sockets. UK sockets are among the safest in the world from the point of view of preventing electrocution - it’s really very hard to electrocute yourself on a UK socket (though a sufficiently dedicated small child with exactly the right mix of ingenuity and stupidity can do it, as I learned the hard way). On the other hand, a UK plug can lie on its back in a way that makes it essentially a caltrop, and some people will step on that and hurt their foot (possibly falling over and injuring themselves badly! This too is a fatality risk, especially for older people). So, here we are presented with a policy trade off between injured feet and electrocutions. You can’t play a “Would you rather?” with these exactly, but people confronted with policy do absolutely have to trade these two off against each other. You can potentially improve both, but it will likely come at the cost of something else - possibly time and money as you roll out a new standard.

Another example of this is speed limits. Why is the speed limit not 20mph everywhere in a city? Why are cars allowed in cities at all? Either banning cars or reducing the speed limit to 20mph everywhere would have a significant impact on the mortality rate, but it would also be incredibly inconvenient for a lot of people, so current policy is weighing a large amount of inconvenience against a certain amount of death.

We’re also seeing this in COVID-19 planning. Closing schools is an example where policy has chosen to accept a truly vast amount of inconvenience in order to avoid a relatively small amount of death (possibly partly because the error bars on “relatively small” are really quite large). I’m not saying whether I think they’re right or wrong to do so (this isn’t a thing it’s useful for me to have an opinion on), but I sure am glad I don’t have to make the choice.

A long time ago, Paul Crowley convinced me that the optimal decision here is always going to take the form of a linear trade off between the two. Say, we might choose that 1000 inconvenienced parents is worth 1 death. The argument only works if the set of available policies is convex, but it usually is. I don’t think we’ll necessarily be able to make good choices about those trade off parameters, and indeed I think trying to make good choices is usually impossibly hard work, but it’s probably often worth thinking about lower and upper bounds, especially ones that are revealed by your preferences in other ways.
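Here is a minimal sketch of what such a linear trade off looks like as a decision rule. The policies and numbers are invented purely for illustration; the point is the shape of the rule, not the figures.

```python
# Illustrative only: invented policies and invented numbers.
# With a linear exchange rate (here, 1000 units of inconvenience treated as
# equivalent to 1 death), every policy collapses to a single score and the
# preferred policy is simply the one with the lowest score.

DEATH_COST_IN_INCONVENIENCE_UNITS = 1000  # the chosen trade off parameter

policies = {
    "do nothing":       {"expected_deaths": 10.0, "inconvenience": 0},
    "partial measures": {"expected_deaths": 4.0,  "inconvenience": 3_000},
    "drastic measures": {"expected_deaths": 1.0,  "inconvenience": 20_000},
}

def score(outcome):
    """Total cost of an outcome, measured in units of inconvenience."""
    return (outcome["inconvenience"]
            + outcome["expected_deaths"] * DEATH_COST_IN_INCONVENIENCE_UNITS)

for name, outcome in policies.items():
    print(f"{name:18s} score = {score(outcome):8.0f}")

best = min(policies, key=lambda name: score(policies[name]))
print("chosen policy:", best)
```

The convexity condition in the argument is what guarantees that any optimal policy is the minimiser of some linear score like this; without it, the policy you would actually prefer might not correspond to any linear exchange rate.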

For example, there are, say, about 3 million cars in London (car ownership within London is a bit lower than that, but people also drive in from outside; this isn’t a precise measure). There are also about 100 road fatalities per year in London. Since we haven’t banned cars, that works out to about 4 days of inconvenience per death avoided, so at the bare minimum we’ve decided that moderately inconveniencing 3 million people for 4 days is worse than a single death. It’s not obvious that COVID-19 policy is consistent with that trade off, but equally there are arguments that it shouldn’t be (policy during a crisis needn’t reflect policy outside that crisis).
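The arithmetic behind that, spelled out with the same rough figures (these are the approximate numbers from above, not precise ones):

```python
# Rough back-of-envelope using the approximate figures above. Banning cars
# would trade 365 days of inconvenience for roughly 3 million drivers against
# roughly 100 avoided deaths per year; dividing through gives the implied
# "days of inconvenience per death avoided".

drivers = 3_000_000        # approximate number of people driving in London
deaths_per_year = 100      # approximate road fatalities per year in London
days_per_year = 365

days_per_death_avoided = days_per_year / deaths_per_year  # a little under 4
print(f"{days_per_death_avoided:.1f} days of inconvenience per death avoided")

# Revealed preference: since we don't ban cars, we are implicitly treating
# "moderately inconveniencing ~3 million people for ~4 days" as worse than one death.
print(f"{drivers:,} people x ~{days_per_death_avoided:.0f} days per death avoided")
```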

I am not a utilitarian. I think it’s a bad ethical framework, and people shouldn’t use it for individual ethical decisions, but I am increasingly coming around to the viewpoint that it might be a good ethical framework for designing policy. You need to be careful about this, because trying to impose simple rules on complex systems often destroys them, but if you’re going to set policy this is probably the sort of thinking that you need to engage in.

One of the difficulties with this is that it implies that our individual ethical decision making processes should be very different from those of people setting policy. The result will often be that policy makers do things we regard as outrageous, like attaching a monetary value to a human life, which they need to do in order to make policy like this.

This seems bad, but I increasingly feel that it seems bad for the same reason that national debt looks horribly fiscally irresponsible if you look at it like individual debt: We’re trying to analogise between two things that are actually much more different than they look at first glance.