How I think about morality (having the benefit of my undergraduate degree in psychology and philosophy)

A friend on Facebook asked:

  • In what circumstances is censorship the right answer, or is it /ever/ the right answer?
  • Are there specific types of content that ought to be limited/censored?
  • Do the morals (or other qualities) of the director/producer/writer/actor/artist etc. warrant censorship?

I responded:

I spent many years thinking about philosophical topics like this and my conclusion was that there are certain areas in life where people tend to ask subtly malformed questions. For example, it seems to me that censorship isn’t a matter of “right” and “wrong”, it’s simply a matter of a power struggle between different groups. It’s like asking “in what circumstances is a lion catching a zebra the right answer?”. I think “what is the meaning of life?” is another example of a malformed question. A math example would be something like “what color is the number five?”.
https://en.m.wikipedia.org/wiki/Is%E2%80%93ought_problem
https://plato.stanford.edu/entries/hume-moral/#io

She responded:

is this the same as moral relativism? For societies to function, there must be some moral ‘absolutes’ though, right? Otherwise there is no law… But how does one agree on those, and to what extent should deviations from those laws be screened from the public eye? As mentioned in the response above, I’m -in general- opposed to censorship, but am trying to find instances where censorship is the only recourse, let’s say for society to adequately function.

I responded:

“For societies to function, there must be some moral ‘absolutes’ though, right?”

I’m not totally sure I understand what you mean by “absolutes”, but: after a lot of classes and a lot of thinking about it, my understanding is that there’s a part of our brain (the prefrontal cortex) that is inclined to make consequentialist judgements, and another part of our brain (I forget which) that is inclined to make deontological judgements. The trolley problem is a great example of when these two parts of our brain are in conflict: the utilitarian part wants to pull the switch so that the trolley kills fewer people, while the deontological part doesn’t want to pull the switch because it would mean that you’re “responsible” for the death of the person the trolley ends up killing.

So, people do have these parts of their brains that tend to think in these ways, but beyond those broad strokes, the details seem to be determined by the environment that a person is raised in. So those could function as the “moral absolutes” you’re talking about: the shared brain structure and the shared environment / learned rules of behavior.

But there can be a lot of disagreement because people don’t all have the same environment growing up (“My parents taught me to do X in this situation!” “Well, my parents taught me to do Y!”), and beyond that, a lot of people’s behavior is just determined by “what’s good for me?” or by whatever pressures they’re subject to. So you can end up in situations where, for example, people vote for some policy (like censorship) not because they’ve made some judgement about the morality of it but because they were paid to vote that way, or they were urged to vote that way by their pastor, or whatever.

So when I think of these kinds of questions (like the censorship one), I think not so much in terms of “What ought to happen?” and more in terms of “What will happen?”, as if it were a physics problem. I actually imagine billiard balls bouncing around, where the question is trying to predict the future state of things.

Another way of putting it: when you’re asking an “ought” question, you need to be very precise and explicit about what the “ought” is trying to optimize for. Are you trying to maximize your own average daily reported wellbeing over the course of your life? Are you trying to maximize the duration for which a nation known as the “United States” continues to exist? And it’s probably not possible to optimize for everything that you’d like to optimize for. But if you don’t have a precise goal, you can end up feeling lost, unable to figure out a satisfying answer and unable to understand why.

I highly recommend checking out Joshua Greene (psych professor at Harvard) and his dissertation:

The Terrible, Horrible, No Good, Very Bad Truth About Morality and What to Do About It

To answer the last part of your question:

“As mentioned in the response above, I’m -in general- opposed to censorship, but am trying to find instances where censorship is the only recourse, let’s say for society to adequately function.”

Censorship is very common during times of war / life-or-death struggle, when the very survival of (you/your family/your nation) is at stake. Censorship was widespread during WW2. IMO that’s a good example of how these moral judgements (“censorship is wrong!”) can be subordinate to a judgement of “What’s good for (me/my family/my nation)?” They’re just a set of rules that people agree to abide by as a compromise with each other, but when things get desperate, the rules go out the window. There’s nothing objective about them.

If you found this interesting, you can follow me on Twitter.