DRMacIver's Notebook

Testing Positive For vs Having

From Sorting Things Out: Classification and its Consequences by Bowker and Star, page 176:

…it was decided that those who tested positive did not have [tuberculosis], they were “considered to have tuberculous infection but not the disease” (Diagnosis 1955, 15). Only those who could bring other evidence of disease to the table would be considered worthy of the classification of pulmonary or nonpulmonary tuberculosis.

This is an interesting distinction. It reminds me of the distinction between HIV (the virus) and AIDS (the condition caused by the virus).

More generally, and somewhat tangentially, I think we are seeing people become increasingly aware of the distinction between testing positive and having a disease in another way: You might test positive and just not have the disease. Equally, you might test negative and still have the disease.

I’ve written before in There’s no single error rate about how you need to think about error quite carefully, and that for any classification scheme there is an error rate per possible answer. In this case you can incorrectly say that someone does not have the disease, and you can incorrectly say that they do.
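
As a minimal sketch of what “an error rate per possible answer” means for a binary test, here it is in Python. The counts are entirely made up for illustration:

```python
# Hypothetical confusion counts for a binary test (purely illustrative).
counts = {
    ("infected", "positive"): 90,
    ("infected", "negative"): 10,
    ("healthy", "positive"): 5,
    ("healthy", "negative"): 95,
}

def error_rate(actual):
    """P(test gives the wrong answer | the person's actual state)."""
    correct_answer = "positive" if actual == "infected" else "negative"
    wrong = sum(n for (a, result), n in counts.items()
                if a == actual and result != correct_answer)
    total = sum(n for (a, _), n in counts.items() if a == actual)
    return wrong / total

print(error_rate("infected"))  # false negative rate: 10 / 100 = 0.1
print(error_rate("healthy"))   # false positive rate: 5 / 100 = 0.05
```

The point is that these two numbers are independent of each other: a test can be very good at one and poor at the other.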

This is a good article about the error rates in testing and what they mean for diagnosis.

I know at least two people who have tested negative for COVID-19 but still seem pretty likely to have had it, due to symptoms and circumstances. That’s not to say that you should ignore a negative result - it should certainly make it seem less likely that you have it - but it’s far from indisputable proof.

I think I’m pretty likely to have had it. I’m currently saying 90% likely (this is a made-up number, but I’m pretty good - albeit far from perfect - at making up numbers to gauge my level of confidence).

So what would it mean if I got tested with an antibody test for whether I’ve had it?

Although I’m generally quite thumbs down on Bayesianism, this is an example where Bayes’ rule (well, conditional probability anyway) is obviously the correct way to reason about testing. Let’s make up some more numbers and assume that the probability of a negative result given that I am infected is 10%, and the probability of a negative result given that I’m not infected is 95% (i.e. 5% of noninfected people will falsely get a positive result). Then I assign a 9% chance to being infected and getting a negative result (90% chance I’m infected multiplied by 10% chance that if I’m infected I get a negative result), and a 9.5% chance to not being infected and getting a negative result (10% chance I’m not infected multiplied by 95% chance that if I’m not infected I’ll get a negative result), for a total of 18.5% chance of a negative result.

Note that, because of my high prior confidence that I’ve been infected, it’s almost as likely that I get a negative result but still have it as that I get a negative result and don’t have it. Specifically, if I get a negative result the chance of my having it is \(100 \times \frac{9}{9 + 9.5} \approx 48.6\%\). That is, despite the negative result, I’m only barely more likely to not have it than to have it.
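
Here’s that arithmetic as a small Python sketch, using the same made-up numbers from above (none of these rates come from a real test):

```python
prior_infected = 0.90        # my made-up 90% prior that I've had it
p_neg_given_infected = 0.10  # probability of a negative result if infected
p_neg_given_not = 0.95       # probability of a negative result if not infected

# Joint probability of each state occurring together with a negative result.
p_infected_and_neg = prior_infected * p_neg_given_infected       # 0.09
p_not_infected_and_neg = (1 - prior_infected) * p_neg_given_not  # 0.095

p_negative = p_infected_and_neg + p_not_infected_and_neg  # 0.185

# Conditional probability of having had it, given the negative result.
print(p_infected_and_neg / p_negative)  # ≈ 0.486
```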

Does this mean that the tests are useless?

Well, maybe. Certainly a negative result doesn’t tell me much, but there’s also not much I’d do with one anyway: A negative result just means that I have to keep doing what I’m already doing, which is maintaining a sensible level of caution about catching the virus. Whether that’s because I actually haven’t had the virus, or because I’ve had it and the test wrongly reports that I haven’t, it doesn’t really matter for my decision making - I’ve got no new information that would allow me to change my behaviour.

A positive test result is potentially more useful. Suppose similar numbers: 90% of people with the virus test positive, and 5% of people without the virus test positive. After a positive result I’d be about 99% certain I’d had it, and thus that I was pretty likely immune.
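
The same sketch, rerun for a positive result with those assumed rates:

```python
prior_infected = 0.90        # same made-up prior as before
p_pos_given_infected = 0.90  # 90% of people with the virus test positive
p_pos_given_not = 0.05       # 5% of people without the virus test positive

p_positive = (prior_infected * p_pos_given_infected
              + (1 - prior_infected) * p_pos_given_not)  # 0.815

posterior = prior_infected * p_pos_given_infected / p_positive
print(posterior)  # ≈ 0.994, i.e. about 99% certain I'd had it
```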

I still wouldn’t be certain that I had had the virus, or even that if I’d had the virus I would be immune, but behaviours that were previously unacceptable risks would now become acceptable risks.

This wouldn’t change my behaviour so much that I’d just go out and start licking people, but it would change some of my risk profiles (I’m currently being significantly more careful than the government guidelines suggest, with one or two exceptions), and so I can use that to adjust my decisions. I’d probably still keep away from supermarkets, but I’d probably be a lot more willing to hang out with friends outside.

Another use case is that even if tests are quite inaccurate at an individual level, they might be useful at a population level: If you know the false positive and false negative rates pretty well, and you implement mass testing, you can be pretty confident of the percentage of the population who have had the virus, and plan accordingly, even if you’ve got significant question marks about any individual person.
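
A sketch of how that population-level correction might look, assuming you trust the error rates. This is the standard inversion of the observed positive rate (sometimes called the Rogan-Gladen estimator), and the 14% observed rate below is made up for illustration:

```python
def estimate_prevalence(observed_positive_rate, sensitivity, false_positive_rate):
    """Invert observed = p * sensitivity + (1 - p) * fpr to recover p."""
    return ((observed_positive_rate - false_positive_rate)
            / (sensitivity - false_positive_rate))

# With the same made-up error rates as above: if 14% of mass tests come
# back positive, the implied fraction of the population who have had the
# virus is noticeably lower than 14%.
print(estimate_prevalence(0.14, sensitivity=0.90, false_positive_rate=0.05))
# ≈ 0.106, i.e. roughly 10.6%
```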

In the other direction, the government was criticised (somewhat correctly) early on for the lack of widespread testing. Widespread testing is good and important, but there was at least some reasonable logic behind not testing everyone with symptoms: If you have acute symptoms, you need to go to the hospital regardless of whether you have it, and if you don’t have acute symptoms, you need to stay home regardless of whether you have it, so the test did not (at that point) add much useful information. At best it lightly informs treatment and the decision of whether you should be admitted to hospital. Often effective action requires triaging what information is worth acquiring.

(This is not to say that I think that the UK government response to COVID-19 has been anything less than shambolic, but some individual actions are not as obviously unreasonable as they seem, especially given other constraints caused by the rest of the response.)

Ultimately I think this highlights two important things about knowledge:

  1. There is absolutely no certainty, only degrees of confidence (some of which may be so high that they might as well be certainty, but often you cannot wait until you are that certain).
  2. The acceptable margins of uncertainty significantly depend on what you’re going to do with the knowledge, and you’ll often have to just be comfortable with the fact that you might be acting on wrong information.