If you run the same program repeatedly with the same inputs, it will always produce the same results.
Well, unless you're using Windows... and I'm not joking actually.
A significant rewrite of the Linux kernel's random-number generator is underway, making Linux-based cryptography a bit more secure (particularly in virtual machines) and some software a bit smoother to run. As the author of the changes, Jason A. Donenfeld, outlines, the newly released kernel 5.17 contains the first stage …
During a DH key exchange there's a random value, your "secret", that is known only to you; the other side has a similar secret of its own.
If one side re-uses this secret, it can severely weaken the DH key exchange.
Using /dev/random: if the generator is both fast AND genuinely random (fed by real entropy), each DH key exchange can use its own fresh random "secret". When good randomness is slow or scarce, some servers _might_ choose NOT to generate new "secret" values and instead re-use them, from a pool or for everyone (whatever).
So KUDOS to the Linux devs for doing this. Crypto-safe random numbers from /dev/random: a VERY good thing.
(As for the symmetric encryption keys themselves, they too can be generated on the fly via /dev/random, provided it is crypto-safe.)
......because one side of the DH exchange can be random AND DIFFERENT for every message!
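In userspace this is a one-liner. A minimal Python sketch (the `secrets` module draws from the kernel CSPRNG, the same pool behind /dev/urandom and, since kernel 5.6, /dev/random; the variable names are mine):

```python
# Fresh 256-bit symmetric key per session, drawn from the OS CSPRNG.
# On Linux, secrets/os.urandom() read from the kernel's random pool,
# so "crypto-safe /dev/random" is exactly what this relies on.
import secrets

key1 = secrets.token_bytes(32)
key2 = secrets.token_bytes(32)

assert len(key1) == 32
assert key1 != key2  # a collision here is astronomically unlikely
```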
To be clear.....
(1) Each user chooses a very long random number (possibly prime). This will be hundreds of decimal digits long; this number is the user's private token.
(2) The private token is used (via the Diffie-Hellman algorithm) to generate a second number; this is the user's public token.
UserA publicises their public token, say PUBLIC_A. Their private token is kept secret, PRIVATE_A.
UserB wants to send a message to UserA.
(3) UserB chooses a very long random number (possibly prime). This is UserB's private token, PRIVATE_B.
(4) UserB uses their private token to generate a public token, PUBLIC_B.
(5) UserB prepares a message.
(6) UserB calculates the Diffie-Hellman secret key (using PUBLIC_A and PRIVATE_B), and encrypts the message.
(7) UserB sends a two part communication to UserA: the encrypted message and PUBLIC_B.
(8) UserB destroys the secret key, PUBLIC_B and PRIVATE_B.
UserA wants to decrypt the message
(9) UserA calculates the Diffie-Hellman secret key (using PUBLIC_B and PRIVATE_A), and decrypts the message.
(10) UserA destroys the secret key and PUBLIC_B.
(A) The secret key only exists for a very short time in steps (6) and (9).
(B) The key is different for every message.
(C) Anyone possessing PUBLIC_A, the encrypted message, and PUBLIC_B has no practical chance of calculating the transient secret key (this is the computational Diffie-Hellman assumption).
An added benefit is that if the three processes (publish a public token; create and encrypt a message; decrypt a message) are supplied as a software package, then NEITHER USER knows (or needs to know) anything at all about the secret key in steps (6) and (9).
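The steps above can be sketched in a few lines of Python. Everything here is a toy: the group parameters, the XOR "cipher", and the names are mine, for illustration only; real DH uses 2048-bit+ groups (e.g. RFC 3526) and an authenticated cipher from a vetted library.

```python
# Toy walk-through of steps (1)-(10) above. NOT for real use.
import hashlib
import secrets

p, g = 0xFFFFFFFB, 5  # tiny prime modulus and base, demo only

def keypair():
    priv = secrets.randbelow(p - 3) + 2   # steps (1)/(3): fresh random secret
    return priv, pow(g, priv, p)          # steps (2)/(4): public token

priv_a, pub_a = keypair()                 # UserA publishes PUBLIC_A once

# UserB: fresh keypair for THIS message, derive the key, encrypt, send.
priv_b, pub_b = keypair()
shared = pow(pub_a, priv_b, p)            # step (6)
key = hashlib.sha256(shared.to_bytes(8, "big")).digest()
msg = b"attack at dawn"                   # step (5); must fit in 32 bytes here
ciphertext = bytes(m ^ k for m, k in zip(msg, key))
# step (7): UserB sends (ciphertext, pub_b)
del priv_b, shared, key                   # step (8): destroy the secrets

# UserA: recompute the SAME key from PUBLIC_B and PRIVATE_A.
shared = pow(pub_b, priv_a, p)            # step (9)
key = hashlib.sha256(shared.to_bytes(8, "big")).digest()
plaintext = bytes(c ^ k for c, k in zip(ciphertext, key))
assert plaintext == b"attack at dawn"     # step (10): then destroy the key
```

Note how points (A) and (B) fall out directly: the key is recomputed only at steps (6) and (9), and a fresh PRIVATE_B per message gives a different key every time.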
Lots of discussion elsewhere about testing a stream of numbers to "ensure" that the sequence is "truly random"...
But here's a thought experiment which puzzles me.
(1) Suppose we are tossing a fair coin and recording the results after each toss.
(2) Suppose during the trial we get twenty HEADS in a row, one after another.
Q1: This sequence is actually random....is it not?
Q2: How long after the twentieth HEAD should I wait till the sequence starts to pass the tests that everyone talks about?
......but I shouldn't have to wait at all....the twenty HEADS is actually random!!!
You don't wait. You decide *before* performing the experiment that you'll do twenty tosses, and the probability of error you're happy with. You get 20 heads - the probability of a result that unbalanced (all heads or all tails) is roughly 1 in half a million; if that's below your suspicion threshold, you declare the coin biased.
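The arithmetic is quick to check:

```python
# With a fair coin and 20 tosses:
n = 20
p_all_heads = 0.5 ** n        # exactly 1 / 1,048,576
p_extreme = 2 * p_all_heads   # all heads OR all tails: 1 / 524,288
print(round(1 / p_extreme))   # → 524288, i.e. "about 1 in half a million"
```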
“Generating truly random numbers in pure software is non-trivial.“
It’s more than non-trivial! It’s impossible!
A pure software implementation of an RNG will unavoidably be a PRNG, which will eventually repeat its sequence, and whose output is fully deterministic given the seed or internal state.
Both attributes are very far from being truly random!
The moment the output is changed by a non-deterministic source outside of the PRNG, it’s not a purely software RNG anymore.
Every time I look into kernel random, I see obvious signs of back-door sketchiness everywhere (hint: throw some printf()'s into the seed calls and generate some keypairs).
It's a trivial no-brainer to XOR or otherwise securely mix multiple algorithms, yet everyone keeps throwing out one basket only to pile all our eggs into another.
Why are we putting 100% of our faith into just one algorithm designed by just one country all the time? There's dozens of them out there, from many different countries.
We should be COMBINING the output of a range of hash functions sourced from a number of different countries that are known not to co-operate with one another. That's the only possible way to guarantee security.
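One common form of the "combine them" idea is to XOR digests of independently designed hash functions over the same input: if the outputs are independent, the XOR is no easier to predict than the strongest component. A sketch (the hash choices and function name are mine, for illustration, not a vetted construction):

```python
# XOR-combine digests of several independently designed hash functions:
# sha256 (NIST/US), sha3_256 (Keccak, a Belgian design standardized by
# NIST), blake2s (independent academic design). All emit 32 bytes.
import hashlib

def combined_digest(data: bytes) -> bytes:
    parts = [
        hashlib.sha256(data).digest(),
        hashlib.sha3_256(data).digest(),
        hashlib.blake2s(data).digest(),
    ]
    out = parts[0]
    for d in parts[1:]:
        out = bytes(x ^ y for x, y in zip(out, d))
    return out

d = combined_digest(b"some pool input")
assert len(d) == 32
```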