When the absolutist and relativist mindsets are wrong

10AM Jam
7 min read · Jul 21, 2021

Improve your intellectual hygiene with this quick exercise.

When can we apply absolutist and relativist thinking?

We could only ever hope to make correct absolute judgements in simple, well-defined situations. Only if we know everything that can be known about a given situation with absolute certainty can we allow ourselves to form a black-and-white, fixed answer. Some math or chess problems are good areas for absolute judgements, and only because the rules governing numbers and chess pieces, and the possibilities of their manipulation, are known, clear, well-defined, all-encompassing and stable. I wouldn't question the authority of a chess grandmaster who says white wins in two moves. For the visual types, this is like being able to see a problem from any angle and distance. The process may not be fully rational (gut feel may still be necessary to arrive at a solution), but the solution can then be rationally verified.
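To make "rationally verified" concrete: in a fully defined domain like chess, an absolute claim such as "white mates in two" can be checked exhaustively by a machine. Here is a minimal sketch, assuming the third-party python-chess library; the position is a simple two-rook ladder mate I constructed for illustration, not one from the text:

```python
import chess  # requires the python-chess package

def mates_within_two(board: chess.Board) -> chess.Move | None:
    """Return a first White move that forces mate in at most two moves."""
    for first in list(board.legal_moves):
        board.push(first)
        if board.is_checkmate():               # mate in one also qualifies
            board.pop()
            return first
        replies = list(board.legal_moves)
        refuted = not replies                  # stalemate: the claim fails
        for reply in replies:
            board.push(reply)
            # This Black reply refutes the line unless some White move mates.
            if not any(_is_mate(board, m) for m in board.legal_moves):
                refuted = True
            board.pop()
            if refuted:
                break
        board.pop()
        if not refuted:
            return first
    return None

def _is_mate(board: chess.Board, move: chess.Move) -> bool:
    board.push(move)
    mate = board.is_checkmate()
    board.pop()
    return mate

# Two White rooks ladder-mating the lone Black king on e8.
board = chess.Board("4k3/8/R7/1R6/8/8/8/6K1 w - - 0 1")
move = mates_within_two(board)
print(board.san(move) if move else "no forced mate")  # e.g. Ra7 or Rb7
```

The check works only because the rules are closed and complete; no analogous exhaustive verifier exists for the messy domains discussed next.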

Outside of abstract, man-made realities, nature MAY also be governed by rules this way, but we haven't been able to stumble on such rules yet. We do not know its underlying rules entirely, and almost certainly don't know them all. As a result, we can't define exactly what a single sub-atomic particle will do under various circumstances, let alone large masses of particles, let alone living organisms, let alone large masses of those. Since we can rarely be certain WHY a complex system does what it does, we satisfy ourselves with heuristic rules that only predict outcomes with some certainty (hence the saying "correlation is not causation"). In this realm, we have to say: "in light of our current understanding, this is the outcome we expect". Answers are only somewhat certain, always relative to our point of view (say, the parameters of our experiment), and clouded by an unknown amount of missing knowledge. Visual types can imagine looking at the problem from a particular vantage point only: two reasonable people in that situation can scientifically agree the problem looks a certain way, but since it is only one limited viewpoint, it is bonkers to think an absolute position can be derived from it.

[Figure: Which limited viewpoint describes the complex shape in the middle?]

This is the large majority of our reality. Outside of a few well-defined, unchanging, knowable situations, our insights are always relative. It is normal and accepted that a given drug will heal one person of a given affliction but not another; we are not surprised, because we are used to applying a relativist mindset to this relative situation. Observing from a limited viewpoint and simply saying "but I know this in my bones to be true" doesn't cut it, as most bone-knowledge turns out to be wrong very quickly.

When absolutist thinking is wrongly applied to a not-fully-defined situation

Our brains are efficient pattern-finding machines. We don't mind putting in the challenging mental work to find neat-enough answers (answers sized and shaped to fit our headspaces) to our realities, and armed with those, we can be amazingly effective in our familiar surroundings and over short time horizons. Unfortunately, this laziness, I mean efficiency, of the brain means that once it arrives at a position, it doesn't want to let all that hard work go to waste, so it recasts its insights as absolute truths and engages in the super-damaging activity called confirmation bias. As these simple absolutes come under attack from the complex environment, the brain also has to develop closed-mindedness and arrogance to defend them.

“It is easy to obtain confirmations or verifications for nearly every theory” — Karl Popper

Usually, this is easy to discern. The duck test is usually sufficient: if someone looks, sounds and walks like an arrogant and closed-minded person, they will most likely be an arrogant, closed-minded person. Humans are pretty good at detecting bias OF THE OTHER.

(Note: it is a bit harder when we need to remember the limits of a person's or institution's authority. If a chess master gets involved in a messy, badly defined situation (say, implores you to vote for a political party), we should immediately recognise he is no longer the absolute authority we must trust, as he was on the chessboard. The aura mustn't transcend the domain; the cult of a personality or an institution mustn't cloud our critical thinking.)

But sometimes the misguided absolutist thinking is so well hidden that its arguments can be hard to counter in a timely fashion. Consider the following example (to which I have struggled to respond for years):

A UK or USA patriot complains about some foreign power interfering in their country’s political processes. In responding I almost always point out that their country has been doing the same to others (maybe that very same one) on a much bigger scale for decades. The angered simple patriot at this stage will usually dismiss my argument as “whataboutism” (the idea that you can’t justify one wrong with another).

For a long time I failed to understand this reaction. Their complaint would have merit if I were careless with my counter-examples, using a false equivalence or failing to distinguish cases, like someone arguing to a judge that since violence is wrong, beating our sickly grandma is the same as hitting a street thug who attacks us. But way too often, even when I find a very fitting, fair counter-example (as in the foreign-interference example), I get accused of "whataboutism". What causes this unfair response? What is the intellectual blind spot?

The surprising answer: thinking in absolutes in a relative world. Using our example, we all know foreign politics is a changing, partially knowable, ill-defined, and therefore relativistic domain. We hardly understand how an individual operates, let alone masses of individuals. Empirically, we know it includes messy rules like "to every force there is a potentially unequal and not always opposite reaction", which well justifies my use of the counter-example. The accusers' argument rests on either:

a) boldly proclaiming a groundless absolute rule in a relative setting, like "yes, but under this scenario it is ok, under that it is not", or

b) restricting themselves to a single issue, ignoring the links of the complex system, atomising our reality until it appears so simple and well-defined that absolutes, as in a chess problem, seem applicable, at which stage they feel entitled to say "I'm not commenting on that, I'm simply saying THIS is not ok".

Either way, an absolute element is shoehorned into a relative problem. Now, when I point out this error, the accuser never has a comeback.

How do we detect the danger of absolutist discussion partners? Those drawn to absolutist thinking (and, in this case, to patriotism) tend to be very simple minds with strong cognitive biases, seeing things in black and white and prone to extremist or fundamentalist positions (ideologues like libertarians, extreme religious types, etc.). They may do well on IQ tests (a collection of simple, well-defined problems) and on some technical tasks, but they are unfortunately given to imagining themselves intellectually superior just when, in this messy, complex world, they hinder the rest of us far more than they help. For these people, employing an absolutist mode of thinking to avoid cognitive dissonance is only natural: when your only tool is a hammer, most problems look like nails.

When relativist thinking is applied to an absolute situation

Absolute situations tend to be easy to settle. Trying to avoid checkmate when the chess master says it is unavoidable is a foolish thing that tends to be over quickly and not attempted again. This error is also much rarer because there are so few absolutes of importance we have to deal with. However, every now and again, if it suits the demagogues, it can be forced onto our consciousness. Consider the following, very consequential example:

Many of the needs of humans fall in a very narrow range. Given the same conditions, our bodies need the same amount of water, calories and micro-nutrients to maintain body weight and health. We need similar body temperatures to feel comfortable. Our minds need the same amount of stimulus and variety to thrive; our souls, relationships, love and purpose. Because these needs are so uniform and depend on so little else, they come pretty close to absolutes. But these absolutes are ignored by apologists of inequality, who are often enabled by the language of economists and political leaders. When we are bombarded with relative rules like "you need 70% of your pre-retirement income for a comfortable retirement" or "a relative poverty line of, say, 50% of median income", notice the message: random, changeable, outside circumstances, like the current level of economic development of your immediate environment, are put in charge of dictating your needs. Biology be damned.

[Figure: We seem to accept that poverty means different things to different populations]

(One of the rare absolute measures addressing the absolute nature of our needs is the World Bank's absolute poverty line, but that is set at such a meaninglessly low $1.90 per day that it matters not to any thinking person. The $1.90 level exists and is set so low only so that we can pat ourselves on the back to show what a good job we're doing eradicating "poverty". By the way, this $1.90 is in purchasing-power-parity terms: don't think it is more than it appears, or that the poor buy things more cheaply.)
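To see the absolute-versus-relative gap in numbers, here is a toy sketch. All incomes and the needs figure are hypothetical; only the $1.90-a-day line comes from the text above. The 50%-of-median rule produces wildly different "poverty" thresholds for two populations whose biological needs are identical:

```python
from statistics import median

def relative_poverty_line(incomes, fraction=0.5):
    """The 50%-of-median rule: the threshold moves with the surroundings."""
    return fraction * median(incomes)

ABSOLUTE_NEEDS = 7_000          # hypothetical yearly cost of food, warmth, shelter
WORLD_BANK_LINE = 1.90 * 365    # about $694/year, the absolute line criticised above

richer = [28_000, 35_000, 42_000, 55_000, 80_000]  # hypothetical yearly incomes
poorer = [900, 1_200, 1_500, 2_000, 3_000]

print(relative_poverty_line(richer))  # 21000.0, triple the absolute needs
print(relative_poverty_line(poorer))  # 750.0, a tenth of the very same needs
```

The same person, with the same body, is "poor" in one list and "fine" in the other; that is the relative framing doing the work, not biology.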

This error doesn’t often arise simply from human mental weakness — this is often very deliberate. In this case, notice that if we are conditioned to think in relative terms when it comes to income and wealth, then billionaires don’t have to justify their relatively inordinate wealth that exist to satisfy the very same absolute needs either. What a painful economical disfunction we could avoid if only we would learn to recognise the mixing of absolute and relative.
