#23 On besting what one knows

Dear Mat,

I have to compliment you on a masterpiece of scepticism in your last letter. Describing different political systems, such as democracy versus autocracy, you say:

"Such a complex system means that it's easy to pair every success of a system with a mechanism one likes, and problems in a system with the lack of something one wants. In reality, given some person's beliefs, we should expect them to like some parts of government that actually don't work, and to hate parts that do.

"So the only thing I can be sure of is that the best government is probably not the one I think would be best. And certainly not the one I hope would be best."

I love this. But how can we escape this scepticism or even relativism regarding our own perception of what’s good and bad in a political system? Certainly, elements of liberal democracy resonate with what I like and the problems resonate with what I dislike. But there must be some meta-measure that goes beyond this.

I guess the good old veil of ignorance of John Rawls was one attempt. Imagine you don't know what position you might occupy in a hypothetical society. How would you want rights and equity distributed throughout it? It's a skin-in-the-game thought experiment designed to help you prescind (fucking great verb) from your own biased perspective. But it says nothing about whether the society it yields is static or dynamic.

Can we improve on that? What about this: judge political systems according to how good a chance they have of discovering, and thereby achieving, new goals, in perpetuity.

Thus, regardless of what one's personal favoured goals for society are, the best system will be the one most likely to help those goals, or other people's, actually be discovered and achieved. Then there's the perpetuity caveat, which rules out goals that would harm the chances of future goals.

Some of our current goals include reducing suffering, promoting equality, maximising freedom (with certain constraints), increasing knowledge, satisfying god, raising the standard of living, creating new conveniences through technology, yada yada yada. Some of these are more conducive than others to discovering new goals that can themselves lead to further goals (open-ended). Our future cyborg selves, or automaton-ant overlords, may have different goals that we should pave the way for.

I haven't thought about it much yet, but I think this test works. E.g. some nutter has the goal of worshipping the great leader as an infallible sky-god: that's rejected because it would impede future goal discovery and achievement.

Here's a less banal example. Someone wants to return to low-impact agrarianism to save the environment. That's rejected because it would prevent the future discovery of new, open-ended goals.

Final example: people say we should disseminate modern knowledge as widely as possible, despite the threats this poses to traditional cultures' integrity. That would be accepted because it promotes new goals in otherwise static cultures, and should in itself achieve other existing, acceptable goals like raising living standards and increasing freedoms.

Alright, that’s political philosophy sorted.