DRMacIver's Notebook
The obligation to be who you are
This post is extremely in the weeds. If you don’t care about moral philosophy, it’s possibly not for you, though I do think the claim I am making has important practical implications beyond the theoretical argument. Also I ran out of steam towards the end.
Anyway, the following footnote to yesterday’s post about responsibility sparked some discussion about moral relativism.
I’d like to stake out a particular position, but first I’m going to need some idiosyncratic terminology. (Possibly many of these have existing terms in the philosophical literature, or the terms I’m using mean something subtly different in the literature, but I’m a programmer so I haven’t done the reading and have to rewrite things in my own terms to understand them.)
First, let’s sketch out what I mean by a moral framework. I’m not going to define this precisely, but I mean something like a coherent body of theory and practice that lets you do things like make judgements about “good” and “bad”, and in particular to make “should” claims, especially about behaviour. Moral frameworks let you make judgements about specific questions, and also have a body of principles and practices for making arguments about moral claims.
The boundaries of specific moral frameworks are fuzzy, and it can be hard to point at two things you might think of as moral frameworks and say that they are definitively the same, but it’s often easy to point out that they’re definitely not the same by finding subjects on which they disagree. e.g. someone operating in a moral framework that embraces vegetarianism will disagree with someone operating in a non-vegetarian moral framework about whether eating meat is bad.
Moral realism is the position that there is one correct moral framework (up to isomorphism) and all others are good to the degree they approximate that framework. (Note that this doesn’t require making a claim that a particular moral framework is the objectively correct one, or even that the objectively correct one is in principle fully discoverable.)
Moral relativism is the position that there are at least two moral frameworks that disagree on crucial details and that you cannot decide between in a non-arbitrary way. (Note that, at least as I use it, moral relativism is compatible with the claim that some moral frameworks are objectively better than others. It doesn’t require that you can’t decide between any two moral frameworks in a non-arbitrary way, only that there are genuine degrees of freedom in how you adopt a moral framework.)
I’d like to then subdivide these further:
Abstract realism/relativism is the set of claims of realism and relativism respectively as regards whether a moral framework is theoretically defensible. i.e. it’s a claim about which one is in some sense “objectively correct”.
Practical realism/relativism is about how you orient to other people operating in moral frameworks other than your own, and whether you treat their moral frameworks as valid for them. This is, I think, more a matter of degree than the theoretical version (or at least, it is certainly a matter of degree, and if the theoretical one is too then that’s not obvious to me).
I think it’s clear that everyone operates on some minimal degree of practical relativism, in that people disagree about ethics without needing to constantly argue about it, and it’s quite important in a multicultural society that people be allowed to do this. (Even if you think multiculturalism is bad, you still have to deal with the fact that other cultures exist and you have to interact with them.) I will grant that someone who is a total cultural isolationist may not be operating on any degree of practical relativism, but that would be one of those moral frameworks that I do think is objectively worse than others.
But I’d like to argue for a particular moral claim that demonstrates the compatibility of strong forms of abstract realism with practical relativism, based on the following: given two people, let’s call them Alex and Charlie, who agree on a shared moral framework (about which I’ll make certain implicit basic assumptions of not being too alien from my own) as the “universally correct” one, it is possible (and indeed likely, although I won’t argue for this too strongly) for Alex and Charlie to want to adopt more specific moral frameworks for their behaviour that make stronger claims than their shared framework, and each of which make moral claims that are incompatible with the other’s.
In this view, even if Alex and Charlie are both abstract moral realists and think that their shared moral framework is objectively correct, they become practical moral relativists of a particularly strong form: Each has their own “moral truths” that do not apply to the other, and they consider both binding for themselves and non-binding for the other, without thinking the other is wrong for holding their own moral framework.
First, let’s consider two categories of example where Alex and Charlie make different moral judgements about their own behaviour that are not examples of relativism, but will point us in the right direction:
- Should I kiss this attractive person who is enthusiastically hitting on me (and who satisfies all relevant criteria such as being e.g. above the age of consent, unrelated to me, uncommitted to anyone else, etc)?
- Should I build a bridge that I’ve been asked to build as part of my job?
The first hinges on a particular person-centric question: Do I have any obligations not to do that? e.g. am I in a committed monogamous relationship with someone else?
This isn’t moral relativism, in that the actual moral principle being adopted is something like “if you don’t have any obligations committing you not to do so, go for it buddy”. What differs is not the moral framework but the individual’s specific obligations.
Note that this already gets you quite far towards relativism. If e.g. you profess to be a devout Christian, this creates a significant set of obligations to behave as a devout Christian, even if those moral commitments are not ones that you would have in general. (For a sufficiently Christianity-incompatible objective morality you might of course have an obligation not to be a devout Christian. No comment here on whether that’s the case, just highlighting the conditional.)
The second depends a lot on whether you’re good at building bridges or not. If you are, and it’s your job (and you’re not in some outlandish situation where the existence of the bridge itself would be immoral), then yeah, you probably should. If on the other hand you have no bridge-building skills, you should signal strongly that this is not your thing, and no, you will not build the bridge, because it would fall down.
Again, similarly to the above, there is no actual conflict here. I think you can quite straightforwardly make a moral principle of “don’t do things you don’t have the skills to do, especially if people’s safety is on the line”, which both Alex and Charlie agree on and which just happens to cash out differently depending on their respective skills.
Both of these point to a sort of “scoped moral framework” of a type that I think should be perfectly uncontroversial even for the most adamant of moral realists (though they might object to labelling it a “moral framework”): The moral framework you get when you take your broader moral framework and specialise it down to specific characteristics of the person in question. The body of judgements etc of what’s good and bad for someone who is a monogamous artist, or a poly engineer, or…
These scoped moral frameworks are, from a logically omniscient point of view, just a subset of the broader moral framework. You just delete the bits that don’t apply to you, and your behaviour will be perfectly in line with the broader moral framework because all the bits that could apply to you are the same as the broader framework’s judgements.
But we’re not logically omniscient. We’re finite beings. And as a result, when you cut down your moral framework, more of it comes into focus and you get to see how more of its parts interact. If you regularly have interactions of a particular type, even if you derive those interactions from first principles in your broader moral framework, you’ll get better at that particular kind of interaction and subsequently others like it, and this will clarify some of your obligations…
Here’s a trivial example: When you leave the pub, do you take your empty glasses back to the bar?
If you’ve worked bar, or you know other people who have, you are probably aware that this makes their lives much easier, especially if the pub is busy. Having the relevant life experience makes you aware of this, but the facts of the matter are derivable from perfectly general principles; you just haven’t noticed. (Or possibly you have a genuine disagreement about your obligations in this space! But even without a genuine disagreement you can have a different awareness of obligations.)
Here’s another similar example that went by recently:
I think this is perfectly obvious if you have the relevant life experiences, and does not require those relevant life experiences to act on, but is easy to miss.
Everyone is going to have their own collection of these little moral duties (if you don’t like the word “duty” here, me neither; feel free to pick some other word. I don’t have a good word that means “like a duty but not fully obligatory, just good to do”, and while I like the Islamic labels I don’t know how to idiomatically apply them here) that they adhere to. Even when these are all derivable in principle from their shared understanding of what leads to a moral duty, you’ll only have the ones you happen to have learned, and the specific set will be idiosyncratic to you.
More, it’s not even totally clear that these should all “port upwards” to be general norms, because of our finiteness. I think it’s pretty plausible that the collective set of these little moral duties is too large for any one person to keep in their head and reliably implement without exhausting themselves, and that we’re better served by a degree of heterogeneity where different people keep track of different ones. I think it would be far too happy a coincidence for the exact distribution that occurs to be the morally optimal one, but I think it’s at least plausible that it is morally optimal for there to be some distribution like it.
This creates a stance, I think an intrinsically quite relativistic one, of “I acknowledge that this duty would be good to take on, and that it is good that you have done so, but I’m not going to choose to adopt it”. Perhaps this is a cheeseburger ethics thing, but it feels different to me.
This also ties into the skill issue. Many of these duties exist because you notice things, and one of the key things expertise does is change your perception of the world. Possessing a skill, even when you’re not actively using it, can cause all sorts of things to come into focus that you’d otherwise miss.
Note for readers: It was at this point I started running out of steam in writing this and wanting to wrap it up, so the rest will be more of a sketch than I intended it to be.
The other thing that I think drives these sorts of individual moral duties is something like character. e.g. our professions don’t exist in isolation. If we’re an engineer, we are implicitly committing to being the sort of person who can do engineering well, and this will tend to bleed into other areas and we should probably let it.
For example, another thing that came up in discussion of the responsibilities post is that, as (responsible) software developers, when problems occur with systems we see, we want to fix the system and not just the problem, and this isn’t widely shared. It seems good to retain this habit, both because it’s generally useful and because it’s part of the habit of character that allows us to be good at our jobs, but it’s sort of hard to argue that it’s a moral duty for someone who doesn’t have a great deal of professional feedback on how to develop this skill, and a little unreasonable for us to expect them to understand it when we try to point this out. (Though I still want them to fix the issue, dammit.) This is the skill issue again, but running in the other direction: it’s not just that the skill alerts us to obligations that we already have, but that trying to suppress the use of it is in some sense detrimental to our character as developers.
All of this remains fully compatible with the possibility of a shared broad moral framework, but the problem is that you can’t really implement the fully worked out broad moral framework on humans. If Alex and Charlie have their own moral frameworks with their own well thought out details, there almost certainly is a moral framework that unifies the two: the shared framework, as implemented by someone who has full knowledge of all of Alex and Charlie’s experiences and possesses all of their skills. The problem is that taking that union quickly exceeds human capacities.
For my part, I find it easier to think about this in terms of people just having different moral frameworks. There are practices for sharing things between our moral frameworks and a broad practice of moral discourse, and in general we should expect our moral frameworks to be more or less similar to those of the people we share characteristics with and discuss our lives with, but there remain plenty of incompatibilities where I can acknowledge your moral duty as a moral truth for you, but choose not to take it on as my own.