#24 On not needing to know

Dear Jamie,

I knew we’d crack it. While you could “judge political systems” according to your rule, let’s consider a system that embraces this rule:

Make decisions according to how great a chance they have of discovering and thereby achieving new goals, in perpetuity.

My logical consistency sense is tingling, so I pause. The structure of your system has kind of a Bertrand Russell ring to it. If we support all goals that achieve new goals, then who shaves the barber? Wait, no, it checks out. Let’s keep moving.

The veil of ignorance (we talked about this back in the podcast era, right? I’d forgotten all about it) is a better rule of thumb than most. Still, it doesn’t sit well with me. Looking at it, my dynamical systems sense tingles, because the system it describes seems to be damped. Ambition and experimentation would be checked, progress slowed. In dynamical terms, damped systems tend to converge, and whole regimes of behaviour become permanently inaccessible to them. A Rawlsian system has behind it a kind of averaging function which collapses diversity in order to achieve an implicit equality. At its heart it’s a system based on fear.
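To make the damping point concrete, here’s a toy sketch (my own illustration, nothing to do with Rawls; the damping ratio and step size are arbitrary) of a damped oscillator. Wherever it starts, it winds down to the same fixed point, and any state more energetic than its start is permanently out of reach:

```python
# Damped harmonic oscillator, x'' + 2*zeta*x' + x = 0,
# integrated with naive Euler steps. Illustrative values only.
zeta = 0.5            # damping ratio (arbitrary)
dt = 0.01             # time step
x, v = 1.0, 0.0       # initial position and velocity
for _ in range(5000):
    a = -2 * zeta * v - x        # acceleration from the damped equation
    x, v = x + v * dt, v + a * dt
print(f"final state: x={x:.6f}, v={v:.6f}")  # both ~0: the system converged
```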

It’s also a system that requires precise knowledge of how complex systems work. You need to know exactly how to improve the good of the most before you can set out to do it. No one knows how to do this. Thus if we really wanted to do good as rational actors, we might be afraid to do anything, paralysed. Luckily humans are stupider than this. In reality we have an ideology that arms us with ideas and invigorates us. With this momentum, we use our amazing abilities in justification to convince ourselves and others that it works. All parties fight it out, results decide the winners, and so we stumble to enlightenment.

Your rule is open and chaotic, i.e. good. At all points it looks forward, and I believe matters of justice would be solved merely incidentally, because in hindsight it would turn out that the justice was what enabled new goals. Some matters of injustice will never be solved, so in that sense it’s as bad as any other. Unlike the milquetoast Rawlsian rule, this one feels like it has an engine behind it. No, not an engine; an engine is a linear force. More like rooting rabbits: an exponential engine. It’s alive.
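Since I’m waving my hands about linear versus exponential, a two-line toy (again mine, with arbitrary rates) shows why the distinction matters: an engine adds a fixed push each step, while rabbits multiply what’s already there.

```python
engine, rabbits = 0.0, 1.0
for _ in range(20):
    engine += 10       # linear: a fixed push per step
    rabbits *= 1.5     # exponential: growth proportional to what exists
print(f"after 20 steps: engine={engine:.0f}, rabbits={rabbits:.0f}")
# after 20 steps: engine=200, rabbits=3325
```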

Your system also has a hidden power. At first glance it looks like a decision rule that depends on knowledge: how can you achieve new goals if you’re not sure what they are in advance, or exactly how to achieve them? But I believe the spirit of your rule saves it from paralysis without recourse to internal ideology, because every idea is a goal, whereas not every idea is a good. Large, small, good, bad, consistent, inconsistent, ugly, sexy, whatever: by thinking of them as “goals” rather than “goods” we can try them without needing to reinforce them with justification. You don’t need to convince people it’s a good idea, you don’t need to be consistent, you don’t need sophistry, you don’t need to be right for the right reasons, you don’t need to signal anything, you don’t need to regret impossible outcomes, you just need to do it.

Your first convert,
Mat