DRMacIver's Notebook
Model Monocropping
What is a model?
There’s a good passage about models in the beginning of Cailin O’Connor’s “The Origins of Unfairness”:
Because the book addresses many related phenomena these discussions necessarily vary in levels of carefulness. Some of the models presented are tied to the relevant phenomena quite tightly. Others are suggestive of the phenomena, but the details are not filled in as meticulously. This means that the explanatory role of the models will differ from case to case.
In some cases, the models discussed can be thought of as providing “how-possibly” information. If something can evolve in an evolutionary model under basic conditions, we come to believe that these conditions are enough to possibly support the evolution of that behavior in the real world. In other cases, I take the models to have deeper explanatory power, giving us insight into how some patterns of behavior may have potentially emerged. The difference here is not in the models but in the epistemic role they play. In the “how-potentially” cases, the models are intended to increase our confidence in the potential of a process to really have occurred. In particular, many of the examples of how-potentially modelling in the book will involve what Weisberg describes as minimalist idealization, where the model pares away causally irrelevant factors to reveal candidates for the underlying causal variables responsible for a phenomenon. In still other cases, I will argue that the models discussed play an important epistemic role by outlining the minimal conditions for certain social patterns - especially inequitable ones - to arise, regardless of how those patterns were actually generated in the real world. This kind of “how-minimally” modelling is especially useful in thinking about intervention. For instance, suppose we intervene on real groups via implicit bias training. If inequity emerges under minimal conditions that do not include biases, we should not expect this intervention to fully solve our problem.
Importantly, sometimes the same model will play multiple epistemic roles. For example, I provide models of the emergence of gender roles that I think potentially illuminate how these patterns emerged in the real world. At the same time, they demonstrate how such roles can possibly emerge from minimal preconditions. Altogether, the explanatory picture that emerges echoes Downes (2011), who emphasizes the wide set of explanatory roles that models can play.
Downes (2011) is Scientific Models by Stephen M. Downes, who says:
The role of models in science has been a focus of philosophical discussion for at least a century. In what follows, I provide some examples of scientific models, introduce some of the relevant philosophical discussion about models and then focus in on issues arising from a specific view about what models do: all scientific models are representations of parts of the world. I side with an emerging consensus that this unified view is not tenable. Scientific models play a number of different epistemological roles in scientific inquiry and as a result, philosophical inquiry about models should be pursued in a number of different directions.
I am reminded of a quote from Gelman and Shalizi’s Philosophy and the practice of Bayesian statistics:
Social-scientific data analysis is especially salient for our purposes because there is general agreement that, in this domain, all models in use are wrong – not merely falsifiable, but actually false. With enough data – and often only a fairly moderate amount – any analyst could reject any model now in use to any desired level of confidence.
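To see what that looks like in miniature, here is a minimal sketch (assuming numpy and scipy; the example is mine rather than theirs). Reality is a slightly heavy-tailed t-distribution, the model being checked is a plain normal, and a standard normality test rejects that model ever more decisively as the sample grows, even though the normal remains a perfectly serviceable approximation for most purposes.

```python
# Reality: a t-distribution with 10 degrees of freedom (slightly heavy-tailed).
# Model under check: a plain normal distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

for n in [100, 1_000, 10_000, 100_000]:
    data = rng.standard_t(df=10, size=n)
    _, p_value = stats.normaltest(data)  # D'Agostino-Pearson test of normality
    print(f"n={n:>7}  p-value for normality: {p_value:.2g}")
```

The p-values typically drift towards zero as n grows: given enough data, the “wrong” model is rejected at any confidence level you care to name, whether or not it was serving you well.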
A model is in some sense a toy world. It captures some salient feature of the real world in a way that you can reason about more concretely. These toy worlds idealise the real world, typically to some very large degree, but in doing so they have a powerful advantage over the real world: They can be manipulated and tested, and reasoned about from the omniscient view. The models contain uncertainty, but the uncertainty is generally purely probabilistic, not Knightian.
This sort of abstraction of the real world is how mathematics works in general - creating abstract objects that are not themselves physically real, but that we can set up correspondences with and use to understand reality better through analogy.
This can illuminate our understanding of the world in a purely descriptive way - the model as hermeneutic resource, a tool of interpretation, one of the stories we tell each other - but of course one of the key advantages of models is that you can use them for decision support.
As I’ve argued before, the way to think about decision making is not to try to imagine what an optimal idealised decision maker would do, but to think instead about doing a decent job under bounded resources. In this sense, models are clearly often going to be useful despite being “wrong”, if they help you make better decisions than you would at random.
On this view you can think of models as members of a domus - abstract objects that we have domesticated to our service. When the models tend to result in bad decisions, they get culled, and new ones form in their place.
It will rarely be the case that these models get selected for truth exactly, because truth is complex and not always that useful. Instead they will be selected on how inconvenient their errors are: a model that is more accurate on average, but less accurate in precisely the cases where an error would be critical, will tend to be selected against.
Statistics, in this view, is a tool for model generation (especially parameter tuning) and selection (model checking lets you cull models that produce worse results).
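As a minimal sketch of that loop (assuming only numpy; the forecasting scenario and the cost numbers are invented purely for illustration): two toy models of demand are each tuned on training data, then checked against held-out data under two different losses. The model that is more accurate on average is not the one that survives once the loss reflects which errors are actually inconvenient.

```python
import numpy as np

rng = np.random.default_rng(1)
demand = rng.lognormal(mean=3.0, sigma=0.6, size=4_000)  # a stand-in for reality
train, test = demand[:2_000], demand[2_000:]

# Model generation / parameter tuning: each model fits one number to the data.
model_a = np.median(train)         # the best constant for average accuracy
model_b = np.quantile(train, 0.9)  # deliberately errs on the high side

def mean_abs_error(pred, actual):
    return np.mean(np.abs(pred - actual))

def decision_loss(pred, actual, under_cost=10.0, over_cost=1.0):
    # Underestimates (stockouts) are assumed ten times as costly as overestimates.
    under = np.maximum(actual - pred, 0)
    over = np.maximum(pred - actual, 0)
    return np.mean(under_cost * under + over_cost * over)

# Model checking / selection: compare both models on held-out data.
for name, model in [("A (median)", model_a), ("B (90th percentile)", model_b)]:
    print(name,
          "MAE:", round(mean_abs_error(model, test), 2),
          "decision loss:", round(decision_loss(model, test), 2))
```

Model A typically wins on mean absolute error; model B typically wins on the decision loss. Which one gets culled depends entirely on whose costs the loss function encodes.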
Of course, the passive voice there hides who is doing the selection. In a domus, evolution is no longer just a blind force, but is guided by intentional choices made by the people involved.
As the saying goes, all models are wrong, but some are useful. But useful to whom?
We have a tendency in agriculture towards monocropping: finding the “best” crop and just growing more of that. The results are less interesting and more fragile, and tend to ignore certain externalities in their endless optimisation (this is why store-bought tomatoes are boring). This sucks if that monocrop is less suitable for you - a society that bases its whole diet on wheat sucks if you’re gluten intolerant.
We also tend to judge what counts as “best” in ways heavily weighted towards those with power, which neglects the needs of those with less power, who are typically minorities. Our models will therefore tend to be tuned to minimise the errors that affect those with power.
This is one of the reasons it’s important to seek out niche nerddoms - they will tend to provide you with models from outside the monocrop.
Often these models will be worse in some sense. They will work less well if you’re in the majority, and often they will be less carefully thought through or optimised than mainstream models. This is because they are optimised for (and by) far fewer people than the mainstream models are.
Nevertheless, their inclusion in a healthy model ecosystem is vital, because they capture aspects of reality that we would otherwise ignore - they come from people for whom those aspects are of central importance. Further, through synthesis they often allow us to come up with hybrid models that work better for everybody.
There was an interesting discussion on Letter.wiki that John Nerst linked to recently, which contained the following gem:
New Atheism, to my mind, never grappled honestly with this fact about human nature, often preferring a speculative “mind virus” theory, which disparaged religion as a kind of cognitive parasite and suggested that we could, if only we inoculated the vulnerable, rid the human species of the nasty pestilence of religious belief.
I don’t think this was dishonesty as much as a form of projection. There are people for whom an established and popular idea never makes sense and they believe that they worked this out rather than just had a psychology for which it didn’t work. It is then easy for them to assume that they can bring other people into this mindset. I see this with queer theorists. They are often people who lie outside of norms themselves when it comes to gender presentation and sexuality and they then rationalise & moralise this and try to get other people to do it but other people don’t want to because they don’t lie outside the norm.
I in fact think both the religion-as-meme and queer theory models are very useful ones that are worthwhile to have in a healthy, functioning memetic ecosystem.
(I also think a lot of people who are mostly “in the norm” could benefit from at least a bit of queer theory, but it’s true that most people are probably right not to bother)
I do agree that both tend to get overapplied by their advocates, but I feel the world is richer for them.
The reason they get overapplied is that we’ve all bought too heavily into the idea of the model monocrop. We don’t just want our models to be useful, we want them to be true, and that must mean that other people’s models are false, and so we must argue our model into dominance.
But this isn’t healthy. The solution is not to replace the monocrop with a better one; it’s to end monocropping altogether. If all knowledge is connected, then one consequence is that the problems we want to solve are too complicated to model perfectly with anything short of reality itself, which would defeat the purpose of having models at all.
Instead we need something closer to model permaculture, holding a number of (potentially highly inconsistent) models, which we fluidly switch between as and when it seems needful.