One way AGI might not happen

Basically, comp sci hasn’t had its arse handed to it yet. 

Some people think some physicists are arrogant, condescending arseholes. It’s more true to say that most physicists reliably become arrogant and condescending for a short period during their career. I know I did. Learning a science that, for hundreds of years, has been zeroing in on fundamental truths of the universe will do that to a person.

Sitting in a third-year university physics lecture, my mind buzzed as it took flight with the implications of the progress of physics. It’s true that, in principle, physics can solve any problem. Mechanics (how things move) and electromagnetism (how charged and magnetic things interact) are solved, so long as one doesn’t venture too far down into the atomic or up into the atmospheric. A comfortable regime with excusable, cloudy limits.

Third-year physics roughly matches the physics knowledge of a century ago, in content and in confidence. At that time, and for centuries before it, many of the smartest people were simultaneously confident and afraid. It was very easy to logically conclude that the universe was predictable: we know the laws of how things behave, so as long as we can solve the equations (and measure things), we can predict the future. The ability to simulate our clockwork universe led to a cacophony of concerns. What would happen if someone could predict the future? What then is free will, or consciousness, if everything can be simulated? Can we really know exactly how arses are handed?

Philosophers were pontificating over these extrapolations when the computer was introduced. The room-sized wearables gave physicists the tool to finally deliver on the promise of prediction via equation solving. By simply turning up the dial of computation, they would be able to solve any equation and predict any physical system. So they tried.

Around the 1960s, their hitherto reliable techniques for solving equations of motion started to produce confusing and unexpected behaviour: most famously, Edward Lorenz’s simple weather simulations gave wildly different answers when the starting conditions were nudged by a whisker. A new thing got in the way; a crack was exposed. Chaos theory occulted the future and mooned us.

Nowadays physicists put chaos theory in the “too-hard-and-conveniently-also-trivial” basket and think: “Of course you can’t solve every problem. Of course imperfect measurements lead to imperfect results.” But if you honestly compare this pessimism with the previous optimism, you see that we actually lost an entire class of knowledge that philosophers and scientists had assumed we would have. It turns out that it is not only the very big and the very small that limit our knowledge; so does any extreme. In chaos, long time scales naturally obscure knowledge from us in any nonlinear physical system (and nonlinear means essentially any system we would be interested in looking at).
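To make the measurement-error point concrete, here is a minimal sketch (my own illustration, not part of the original argument) using the logistic map, a textbook chaotic system. Two trajectories that start a hair’s breadth apart, as if measured by two slightly different instruments, decouple completely within a few dozen steps:

```python
# Sensitive dependence on initial conditions in the logistic map,
# x_{n+1} = r * x_n * (1 - x_n), which is chaotic at r = 4.
r = 4.0
x, y = 0.2, 0.2 + 1e-10  # identical states up to a 1e-10 "measurement error"

for n in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(f"step {n:2d}: x = {x:.6f}  y = {y:.6f}  gap = {abs(x - y):.1e}")
```

The gap grows roughly exponentially, so each extra digit of measurement precision buys only a handful of additional steps of useful prediction. Turning up the dial of computation cannot win that race.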

So, in an irony worthy of humanity, the arrogant arseholes got their arses handed to them, whole. Just as if someone had handed them something of which they themselves were a mere opening, physicists and philosophers left the concept of unbounded prediction behind. Chaos theory was a kind of trump card confidently thrown down by Nature. Unfortunately it rules out a whole class of simulations and knowledge. On the plus side it avoids the problems of a clockwork universe, and leaves us a comfortable regime to simulate. A pretty good outcome, all things considered.

At the moment we have “solved” intelligence in the sense that we have theories of how things learn, and we can build programs that loosely emulate how networks of neurons fire. These programs seem to do smart things. Surely we just turn up the dial, increase the memory, and voila! No hint of a gifted arse in sight. It’s tempting to run with the possibilities, but nature has a habit of throwing down trumps.
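For readers who haven’t met one, here is a minimal sketch (illustrative only, nothing beyond the standard textbook unit) of the kind of building block those programs are made from:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial 'neuron': a weighted sum of inputs squashed
    through a nonlinearity, loosely analogous to a neuron firing."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-activation))  # sigmoid: output in (0, 1)

# A modern network is millions of these wired together in layers, with
# the weights tuned by gradient descent. "Turning up the dial" means
# more units, more layers, more data, more compute.
print(neuron([0.5, 0.8], weights=[1.2, -0.7], bias=0.1))
```

The bet behind turning up the dial is that nothing important changes as you stack more of these and train for longer.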

Chaos theory won’t stop AI. But I suspect that super-AI will prove to be unattainable because nature will impose some new law on intelligence. What might this limit look like? A tradeoff between concentration and computation? That optimised decision making can be fooled, so only a mix of smart and “stupid” decisions is robust? That omniscience leads to self-destructive depression? That impetuousness outperforms hedging in enough contexts? Any of these would do fine to knock the wind out of our sails. Such a cap could also contribute to a narrative on why humans each evolved to a certain, but varied, level of intelligence.

Understanding how AI is possible is hard; you need a pretty strong grasp of computation and logic. Envisioning how AGI could emerge takes similar sophistication. But to think we just turn up the computation dial again and linearly get an inevitable, smarter, more god-like intelligence seems a conspicuous fumble of sceptical rationalism. Take it from a crestfallen physicist who, like his discipline, had to look the gift ass in the mouth. At every step nature fights scientific progress, and every now and then she shuts down an avenue completely. The smartest people for hundreds of years thought humanity would out-smart nature and predict it. Now the smartest people think we’re smart enough to build something that can out-smart ourselves. Don’t overestimate humans: nature made us. Don’t underestimate humans: after all, nature made us.