#### Re: How random is random?

*If you instead create a pool and stir it, you would (in my mind) break all the relationships between all the numbers in the pool. Does that increase "randomness"?*

Depends on which definition of "randomness" you mean.

First, though, note that you don't "break all the relationships". Even a "perfect" cryptographic hash function (if such a thing has a sensible definition at all) can't introduce new information entropy. So while the "stirring" process does *hide* those relationships, by discarding some entropy (compression) and rearranging what remains (mixing), it can't eliminate all of them. That would amount to producing information out of nothing; you'd have the information-theoretic equivalent of a perpetual motion machine.
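A toy sketch of that point (the one-byte "pool" here is my own contrived example, not anything from a real system): if the pool's entire state is a single unknown byte, stirring it through SHA-256 hides the structure completely, but it can't manufacture new states.

```python
import hashlib

# Hypothetical illustration: a "pool" whose whole state is one unknown
# byte, i.e. at most 8 bits of entropy. Hashing hides the relationship
# between pool states and outputs, but there are still only 256
# possible outputs -- no new entropy appears.
digests = {hashlib.sha256(bytes([b])).digest() for b in range(256)}
print(len(digests))  # 256 -- stirring cannot create new states
```

An attacker who knows the construction can still enumerate all 256 possibilities, which is exactly the "hidden but not eliminated" relationship.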

Now, as to the question of "increased randomness": This approach does not increase randomness in information-theoretic terms. Under Shannon's definition of information entropy, randomness is the same as information content, and you can't produce more information by encoding the message differently.

Similar results hold under Kolmogorov's three approaches to defining information content, and under Chaitin's. Under the algorithmic definitions offered by Kolmogorov and Chaitin,^{1} this sort of "stir entropy into a PRNG" construction (which, as others have pointed out, is widely used) adds only a small constant amount of information (or randomness) over what the entropy source supplies - namely, the size of the smallest program that implements the stirring algorithm.
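You can see the gap between the algorithmic and the statistical notions concretely. In this sketch (my own illustration; the seed and sizes are arbitrary), the whole output is reproducible from the few lines of program plus the seed, so its algorithmic information content is tiny - yet a general-purpose compressor finds no pattern in it at all:

```python
import random
import zlib

# The true "shortest description" of `data` is roughly this script plus
# the seed -- a few hundred bytes for 100,000 bytes of output. zlib,
# which only exploits statistical regularities, can't shrink it.
random.seed(42)
data = bytes(random.randrange(256) for _ in range(100_000))
ratio = len(zlib.compress(data)) / len(data)
print(ratio)  # ~1.0: statistically incompressible, algorithmically trivial
```

That's the constant-overhead bound in miniature: K(output) is at most K(seed) plus the size of the generating program.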

However, we can also talk about other definitions of randomness. Statistical randomness, for example, is a matter of how random - pattern-free - the output appears to be under various statistical measures. The stirring process, if it's good, should increase statistical randomness.
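Here's a crude version of such a measure (the counter-as-entropy-source setup is a contrived example of mine): the fraction of the output taken by the single most common byte value, before and after stirring with SHA-256.

```python
import hashlib
from collections import Counter

# "Entropy source": just a counter, serialized as 4-byte big-endian.
n = 10_000
raw = b"".join(i.to_bytes(4, "big") for i in range(n))
# The same counter stirred through SHA-256 (keep 4 bytes per input).
stirred = b"".join(hashlib.sha256(i.to_bytes(4, "big")).digest()[:4]
                   for i in range(n))

raw_top = Counter(raw).most_common(1)[0][1] / len(raw)
stirred_top = Counter(stirred).most_common(1)[0][1] / len(stirred)
print(raw_top)      # > 0.5: the zero byte dominates the raw counters
print(stirred_top)  # near 1/256: stirring flattens the distribution
```

By this (and most other) statistical measures, the stirred output is far "more random", even though, information-theoretically, it contains no more entropy than the counter did.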

We can also talk about practical randomness or unpredictability. Outside straightforward statistical analysis, a pseudorandom sequence might still be predictable with a significantly better probability than guessing, for example by training a Markov model to recognize patterns in it. Here, too, a good stirring mechanism should defeat feasible implementations of predictive algorithms. Ideally you want output that's incompressible in Chaitin's sense - the smallest program for producing it is as large as the output itself. That's impossible with a PRNG that produces unbounded output, but the bigger you can make that hypothetical "smallest program", the better.
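A toy sketch of the prediction angle (everything here - the weak generator, the stirring scheme, the predictor - is a contrived example of mine, not a real construction): a first-order Markov model perfectly predicts a generator whose next byte is a function of the previous byte, and learns nothing once the state is stirred through SHA-256.

```python
import hashlib

def weak_stream(seed, n):
    # Toy PRNG: each output byte fully determines the next one.
    x, out = seed, []
    for _ in range(n):
        x = (x * 5 + 3) % 256
        out.append(x)
    return out

def stirred_stream(seed, n):
    # Same internal state, but the output is a byte of
    # SHA-256(state, position) rather than the state itself.
    x, out = seed, []
    for i in range(n):
        x = (x * 5 + 3) % 256
        out.append(hashlib.sha256(bytes([x]) + i.to_bytes(4, "big")).digest()[0])
    return out

def markov_accuracy(stream):
    # Learn prev-byte -> most-common-next-byte on the first half,
    # then measure the hit rate on the second half.
    half = len(stream) // 2
    table = {}
    for a, b in zip(stream[:half], stream[1:half]):
        table.setdefault(a, {}).setdefault(b, 0)
        table[a][b] += 1
    pred = {a: max(d, key=d.get) for a, d in table.items()}
    test = stream[half:]
    hits = sum(pred.get(a) == b for a, b in zip(test, test[1:]))
    return hits / (len(test) - 1)

print(markov_accuracy(weak_stream(1, 2000)))     # 1.0: fully predictable
print(markov_accuracy(stirred_stream(1, 2000)))  # ~1/256: nothing learned
```

A real cryptanalyst would of course attack the (tiny) internal state directly rather than the output statistics; the point is only that good stirring defeats this class of cheap predictive models.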

There is a ton of material on this subject - both the theoretical work (all the folks I named above, and others) and the practical material from cryptographers and cryptanalysts who've studied CSPRNGs. But the short answer^{2} to your question is "it depends on what you mean by 'randomness'".

^{1}Invented independently but at pretty much the same time. They were both inspired by Shannon and the metamathematical intellectual tradition (considering mathematical formalisms as objects of mathematics in themselves): Turing, Gödel, Church - who in turn had been inspired in no small part by Hilbert's Entscheidungsproblem, which Chaitin has rightly identified as probably the most useful unsolvable problem in mathematics. It more or less led to the entire IT industry, which is a pretty good result for a failed project.

^{2}Too late!