The state of a brain is surely highly compressible, and so are the deltas to that state when you turn the snapshot into a movie. So my prediction is that whoever first comes up with the most efficient and natural algorithm for compressing brain state will be the first to achieve hard AI.
Almost five years ago I posted a book review, "Theories Are Compression Algorithms". More and more, I think that compression algorithms might be What It's All About (as opposed to the hokey-pokey).
It is somewhat reminiscent of another old post, "The Universe Is Information", which discussed Wolfram's "A New Kind of Science" and its premise: that the base substance of the universe is information, and all physical processes are computations.
That had led me to conclude that the smallest system that could simulate the universe was the universe itself, which seems somewhat self-evident. But with compression algorithms, that might no longer be the case: via data compression, we could conceivably describe the state of the universe in an amount of matter considerably smaller than the entire universe.
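The "theories are compression algorithms" idea can be made concrete with a toy sketch (my own illustration, not anything from the original post): a universe whose state follows a simple law compresses enormously, because the compressor effectively discovers the law, while a lawless (random) state cannot be described in anything smaller than itself. The 100,000-byte size and the period-7 rule are arbitrary choices for the demo.

```python
import os
import zlib

SIZE = 100_000  # arbitrary "universe" size for the demo

# A "lawful" universe: every byte follows a short rule (a compact theory).
lawful = bytes(i % 7 for i in range(SIZE))

# A "lawless" universe: random noise, with no theory for a compressor to find.
lawless = os.urandom(SIZE)

for name, state in [("lawful", lawful), ("lawless", lawless)]:
    packed = zlib.compress(state, level=9)
    print(f"{name}: {len(state)} bytes -> {len(packed)} bytes compressed")
```

The lawful state shrinks to a few hundred bytes (roughly the length of its "theory" plus overhead), while the random state comes out slightly *larger* than it went in. In that sense a good theory and a good compressor are the same object.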
But if we do that, then the versions of ourselves in the simulation could presumably use compression algorithms to model their own state, and so on ad infinitum. Or could they? Would the laws of, what, information processing (???), be different in the simulated universe?
Back in the day, I had pictured our pulsating universe as a wavicle in a larger universe, and each wavicle in our universe as its own pulsating universe, going up and down for say 10^640 or so levels, but with the snake biting its tail so that there is no top or bottom (or more importantly, no privileged frame). Now we can picture that same model, but with compression algorithms on the information of the universe as the mechanism of how we move down to the next level. Hmmm.