I’m forced to skip over those first few paragraphs based on “economic modelling” because, as you said, a little bit of information changes the outcome completely. This is a symptom of a bad model, and bad models do more harm than good. My favourite demonstration of this is the following (originally Taleb’s).
Imagine you find yourself, for no real reason, waking up in the middle of an unknown wilderness. Then imagine you stumble upon a map. Should you follow it?
I once proposed this online in response to an Effective Altruism question about how to act on economic models. Someone took the bait by responding “It depends if the map is reliable or not”. Well, that’s a wise application of scepticism, but it’s missing something, as we’ll see.
You look around, discover that some mountains match mountains on the map, and blue lines match the flow of rivers around you. You can triangulate your position on the map. Should you use the map?
Things are looking pretty good now, and most people are happy to “follow the map if it’s accurate”. But all they’ve done is replace one potentially erroneous thing with another (hopefully less) potentially erroneous thing. They’ve replaced the potential error in the map with a potential error in their judgement of the map. If the accuracy of the map is important, the rule should actually be “follow the map if it’s accurate, and if your determination of its accuracy is accurate”. This can easily lead to an infinite regress unless you’re in a scenario where errors can genuinely be reduced (as in scientific experiments).
Things can even get sinister when you start to think about the map at a higher level. How was it created? Who put it there? Just where is that X pointing to?
In these cases it’s best to throw away the map. The obvious question is: how, then, do we act? And the obvious answer is “however we acted before we invented maps”. Heuristics, rules of thumb, feeling your way around locally, learning from error. The infinite regress of the decision rule can be resolved by accepting the inadequacies of human intelligence and giving up on rationalising. The funny thing is that if you’re acting with local heuristics, the “rightness” of what you do doesn’t really factor in, because there’s no map to be erroneous.
A key idea here is that it is possible to act without a map. The idea that you must employ a map (whether it’s science or ideology) I’ll call the problem of the confident sceptic. In it, you defer to a model of thinking that you believe will get you closer to the truth. Education makes one a confident sceptic. But as long as you’re deferring to a model, you’re excluding one route of action: using no map and taking things as they come. So the only way to guarantee you don’t exclude the good is to be a doubtful sceptic and throw away all maps when you’re uncertain. Most people in the world are doubtful sceptics. Another way to describe them might be as people with humility.
You say the threat of the regressive left is “overblown”, which is how politicians describe problems they can’t deny (though that doesn’t mean you’re wrong). The thing is, it totally makes sense to me to insult SJWs, because they’re acting with a map (ideology), a map that looks “obviously” good from where we’re sitting but that really no one understands.
Peterson takes it further and thinks the map is sinister as fuck. Here’s what I think his answers would be when he thinks about the map. How was the map created? It was formed as an unmediated aggregate of ratcheting levels of dedication to equity above all. Who put it there? It’s the re-emergence of a persistent human flaw: the urge to submit, to yearn for order and surrender the responsibility of individuality. Just where is that X pointing to? Nowhere good (he has also suggested “the rise of the feminine totalitarian state”, but that’s a whole other thing).
This reminds me of your astonishment that anyone could criticise the effective altruism movement. I remember Rob Wiblin once asked for criticisms of EA for a parody he was writing. I had previously felt a tingle of criticism, but I couldn’t articulate it when I started to respond. I ended up concluding that I couldn’t articulate it because I was wrong, and that EA was as perfect as you could get. This could actually be true, especially given he was crowdsourcing criticism of his own venture for the purposes of comedy and improvement. That shit’s like pornography to a sceptical mind.
There’s a long line of people to criticise before EA folks. They don’t take the map for granted like SJWs do; they question themselves and experiment. But they’re still confident sceptics, hamstrung by a map that they can’t jettison. Jettisoning it would be to admit things are too complex and pack up shop. In the meantime they’ll use the best available tools and the best available maps. They might have the best model of anyone, but that doesn’t mean it’s any good, and it doesn’t mean they’re on the right path.
Also published on Medium.