Boffin: Dump hardware number generators for encryption and instead look within

Hardware-based random number generators (HWRNGs) for encryption could be superseded after a Philippines-based researcher found that side-channel measurement of the timing of CPU operations provides enough entropy to seed crypto systems with the necessary randomness. In a paper presented on Saturday at the International …

  1. Anonymous Coward

    Now this is interesting. It addresses the problem in Linux of deciding whether to use the HWRNG or a software-based source (a choice currently left to the distro), with something resembling higher entropy.

  2. Anonymous Coward

    I was convinced Linux already did this? With the kernel entropy pool, no matter how much non-random data you mix in, the pool is always at least as entropic as it was before*. So there is nothing to lose, and absolutely everything to gain, from hashing in keystrokes, network packet arrival times, interrupt service times, and there are probably some timing loops for kernel housekeeping tasks that could be (or already are) mixed in too. Or 10 gigabytes of /dev/null. It's always as random as it can get. Not sure what the point of his paper is, honestly.

    * assuming you don't do anything stupid, like give an entropy generator access to its own output :P
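    (For illustration, a minimal sketch of that "can't lose entropy" property: fold whatever arrives into the pool through a cryptographic hash. This assumes OpenSSL's SHA-256 and is not the actual Linux pool code, which uses its own mixing functions.)

    /* Toy pool mixer: pool = SHA256(pool || input). Because the old pool value
     * is part of the hash input, mixing in even attacker-chosen junk cannot
     * make the pool easier to guess than it already was. */
    #include <stddef.h>
    #include <openssl/sha.h>

    static unsigned char pool[SHA256_DIGEST_LENGTH];   /* 32-byte pool state */

    void pool_mix(const unsigned char *input, size_t len)
    {
        SHA256_CTX ctx;
        SHA256_Init(&ctx);
        SHA256_Update(&ctx, pool, sizeof pool);   /* old pool state */
        SHA256_Update(&ctx, input, len);          /* new (possibly junk) data */
        SHA256_Final(pool, &ctx);
    }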

  3. Version 1.0 Silver badge

    "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin."

    -- John von Neumann, 1951

    1. TRT

      Yeah, but this is the digital equivalent of giving hard sums to a class of fourth graders and making actual use of their somewhat varied and imaginative answers. Even the one which is a drawing of a cat eating the maths teacher and pooing them out the back.

    2. Destroy All Monsters Silver badge
      Holmes

      This is not an arithmetical method. It gets "time" information from an outside oracle.

      Arithmetical method == Deterministic RNG.

      Now, let's think what the compiler or even the instruction-optimizing CPU will do with this weird workless inner loop.

      1. jvroig

        Yeah, that's exactly a limitation for the C prototype, and any C implementation of it for production.

        (JV Roig here, cited paper author).

        This isn't a limitation of my design. It's a C thing, and even Stephan Mueller's Jitter Entropy has the same caveat to never compile with optimizations.

        However, I do have prototypes in other languages (Python3, Ruby, PHP), and those need no such hand-holding. They just run as is. (The siderand webpage that Tom linked contains all the prototypes and the measurement tools)

        In fact, as of today, if you were to ask me what the ideal implementation would be in systems that support it, I'd choose Python. It's not significantly slower (we only need to seed rarely), and it makes the code directly and easily inspectable and auditable even in live environments.

        Of course, embedded devices are limited to whatever their dev environment is (so, embedded C). In such cases, they just have to be careful not to compile the seeder code with optimizations. I wish I could remove that small caveat completely, to avoid "oops!" moments, but so far I don't have a good alternative.

  4. Flocke Kroes Silver badge

    Just tested it

    Version 1: The time drops rapidly for the first nine samples, then remains fairly constant with the last digits showing five or six bits of entropy. Multiple runs show the way the time drops for the first 9 samples is quite consistent. Around 70 unique samples per run.

    Version 2: 3 unique samples per run with the most common turning up 75% of the time and the least common usually first.

    Version 3: 2 unique samples with the most common turning up 96% of the time.

    Version 4: Same as version 3.

    Version 5: Only about 30 unique values per run.

    Version 1 was not optimised. Version 2 used -O2. Version 3 diverted output to a file instead of pasting output from a window into a file. Version 4 moved the printf to a separate loop from the sample generator. Version 5 was like version 4 but without the -O2.

    Conclusion: Use with lots of caution. Make absolutely certain your test code and production code use the same compiler options. Much of the randomness comes from "printf" and what it outputs to.

    1. Electron Shepherd

      Re: Just tested it

      The code as published in the article is not going to produce good results as soon as any kind of optimisation is enabled. Since the total variable is not used at all, a compiler could validly either move the for (j=.. inner loop outside of the for (i=... outer loop (so it would only execute once, regardless of the value of samples, without affecting the result), or, more likely, simply remove it completely.

      You would need to look at the generated assembly to make sure, but for most compilers, you would need to do something like

      total = val1 + val2 + i;

      and print the value of total at the end. Even then, the compiler would probably move the val1 + val2 part out of the loop, calculating that sub-expression once and using the result inside the loop.
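      (One way around that, sketched under the assumption that the prototype looks roughly like the fragment above, with val1, val2 and a SCALE constant standing in for the paper's names: make the accumulator volatile so the compiler is not allowed to elide the stores, then consume the result.)

      /* Sketch: keep the work loop alive even at -O2. Accesses to a volatile
       * object are observable behaviour, so the loop body cannot legally be
       * hoisted out or deleted. */
      #include <stdio.h>

      #define SCALE 10000L

      int main(void)
      {
          long val1 = 2543, val2 = 6763;     /* arbitrary operands (stand-ins for the prototype's values) */
          volatile long total = 0;           /* volatile: every store must happen */

          for (long i = 0; i < SCALE; i++)
              total += val1 + val2 + i;      /* depends on i, as suggested above */

          printf("%ld\n", total);            /* and print it, so the result is used */
          return 0;
      }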

      1. Jonathan Richards 1

        Re: Just tested it

        Hurrah! We're going to have to start writing in assembler once more, so that the compiler optimization doesn't subvert the purpose of the algorithm!

        1. Destroy All Monsters Silver badge

          Re: Just tested it

          Yeah and then the CPU optimizer kills your loop dead because it just doesn't do anything, does it.

          (This comment will not sit in the El Reg moderation queue for 20h)

          1. Anonymous Coward

            Re: Just tested it

            Yeah and then the CPU optimizer kills your loop dead because it just doesn't do anything

            So make it do something. Personally, instead of hardcoding the two values to be added, I'd just call rand() a couple of times and shift/xor those values into the total, and then store the total (which is meaningless) alongside each real seed value.

            As long as they're stored outside of this function and accessed somewhere else, the optimiser should leave those values and the code that generates them alone. Tricking optimisers isn't usually that difficult. (The hard part is tricking them into doing what you want them to do.)
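            (A rough sketch of that idea; the samples[] and junk[] names are invented for illustration. rand() has internal state, so the calls can't be dropped, and storing the meaningless totals in a global array gives the timed work a side effect the optimiser cannot prove dead.)

            #include <stdio.h>
            #include <stdlib.h>
            #include <time.h>

            #define SAMPLES 1000

            static long samples[SAMPLES];          /* the timing values we care about */
            static unsigned long junk[SAMPLES];    /* meaningless totals, kept only to pin the work */

            int main(void)
            {
                for (int i = 0; i < SAMPLES; i++) {
                    clock_t start = clock();
                    unsigned long total = 0;
                    for (int j = 0; j < 10000; j++)
                        total ^= (unsigned long)rand() << (j & 7);   /* the "work" being timed */
                    samples[i] = (long)(clock() - start);
                    junk[i] = total;    /* stored where the compiler can't prove it dead */
                }
                printf("first sample: %ld (junk %lu)\n", samples[0], junk[0]);
                return 0;
            }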

          2. onefang

            Re: Just tested it

            (This comment will not sit in the El Reg moderation queue for 20h)

            I think that only happens on the weekend, the queue seems to be shorter during office hours.

        2. jvroig

          Re: Just tested it

          (Hey, JV Roig here, cited paper author)

          Or, more simply, just specifically compile without optimization.

          Another good alternative (for systems that support it) is to use a non-compiled language - I tested prototypes in Python3, Ruby and PHP as well, and they run as-is with no need to worry about any optimizing compilers.

    2. jvroig

      Re: Just tested it

      Yeah, it comes with the caveat of "don't compile with optimization".

      The whole point of the "algorithm" is just to measure the time the CPU does work; if you let the compiler remove all the work, then of course there's nothing for us to measure.

      If you chose the clock() timer (usually the lowest-resolution timer in modern systems), then those are your worst-case results (again, not counting the optimized builds). Using the nanosecond-level timers will improve your score. If you're on Windows, the jump will be extreme, because for some reason Windows' default timer is super low res.

      But even with just 75% MFV (most frequent value), you're already golden. Collect 1,000 samples and you've got 400 bits of entropy, more than enough for seeding. The versions of the POC after the cited code here switched around the SCALE and SAMPLES settings - I found it was more efficient to lower the scale (how many times to loop before measuring) and increase the samples (how many measurements to take).
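      (For anyone who wants the shape of that loop, here's a rough illustration rather than the paper's actual prototype; the SCALE/SAMPLES values and names are placeholders, and as noted above it has to be compiled without optimisation.)

      #include <stdio.h>
      #include <time.h>

      #define SCALE   1000      /* how many times to loop before measuring */
      #define SAMPLES 1000      /* how many measurements to take */

      int main(void)
      {
          volatile long total = 0;             /* volatile, so the work isn't elided */
          struct timespec t0, t1;

          for (int i = 0; i < SAMPLES; i++) {
              clock_gettime(CLOCK_MONOTONIC, &t0);
              for (long j = 0; j < SCALE; j++)
                  total += j;                  /* the CPU "work" being timed */
              clock_gettime(CLOCK_MONOTONIC, &t1);

              long long ns = (long long)(t1.tv_sec - t0.tv_sec) * 1000000000LL
                           + (t1.tv_nsec - t0.tv_nsec);
              printf("%lld\n", ns);            /* raw sample; hash these to seed the RNG */
          }
          return 0;
      }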

      Even an Arduino Uno (measly 16MHz CPU with a low-res 4-microsecond-precision timer) gets to collect 3,000 bits of entropy per second. That's already the super low-end of results.

      Anyway, all these and more are in the updated supplementary site: http://research.jvroig.com/siderand/

  5. Shady
    Joke

    For best performance...

    .. the author of the code really should have cached the output.

    1. jvroig

      Re: For best performance...

      The whole "printf" thing isn't really production code. It's just there to enable the research to gather the entropy and analyze the results using frequency distribution.

      I know that part wasn't clear in the article, so just explicitly saying it here now.

      The prototypes store and output the way they do because they're only meant to collect and output entropy for experimental verification (not just in C, but also the Python3, Ruby and PHP prototype codes; see http://research.jvroig.com/siderand/ for all prototypes and measurement tools as well). They're all designed just to see how much entropy we can gather across all these types of machines - from Arduino Uno, RPi, small-core x86, to big-core x86 machines.

  6. Anonymous Coward

    Why would you avoid using the HWRNG?

    If you have something else you think will generate randomness, use it AND the HWRNG. Even if the HWRNG is rigged, as long as its output isn't used directly you can still get a few more bits of entropy by combining the two sources.
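    (Sketch of the kind of combining meant here; the read_hwrng()/read_jitter() stubs are hypothetical stand-ins for the real sources. XOR-ing independent streams gives output at least as unpredictable as the stronger source, so a rigged HWRNG never gets used directly.)

    #include <stddef.h>
    #include <stdint.h>

    static uint8_t read_hwrng(void)  { return 0xAA; }  /* stand-in: imagine RDRAND or an on-board TRNG */
    static uint8_t read_jitter(void) { return 0x55; }  /* stand-in: imagine a CPU-timing entropy source */

    void combined_random(uint8_t *out, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            out[i] = read_hwrng() ^ read_jitter();      /* neither source is trusted on its own */
    }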

    1. Nick Kew

      Re: Why would you avoid using the HWRNG?

      I read it not as "avoid using the HW", but rather "avoid relying on the HW". Subtle difference.

      Of course for the purposes of a test run for an academic paper or even a back-of-envelope calculation ("Just tested it" comment above), results that avoid it altogether play an obvious role. For real life, you take all sources you can get!

      The main issue with any proposed approach is the difficulty of measuring entropy from an RNG. No matter how good your test and attack tools are, they could be missing a weakness someone else has cracked. The Debian-vs-OpenSSL history kind-of demonstrates that there's a genuinely hard problem here.

      1. Anonymous Coward

        Re: Why would you avoid using the HWRNG?

        Which is why you want as many different sources as possible. The odds that one may be compromised in some way are a lot higher than the odds that ALL are compromised.

  7. Persona Silver badge

    Round and round we go

    Early versions of Netscape's SSL used a "random" seed derived from the time of day, the process ID, and the parent process ID. It seemed like a good idea, but needless to say researchers were able to guess the encryption keys and everyone was recommended to use hardware random number generators. Adding more seed variables helps but I remain dubious as it is inherently repeatable. I prefer to trust a simple hardware random number generator that uses something like diode noise which is random down at the physics level.

    1. Norman Nescio

      Re: Round and round we go

      I prefer to trust a simple hardware random number generator that uses something like diode noise which is random down at the physics level.

      You can only trust it if you built it yourself.

      Becker, Regazzoni, Paar, Burleson: Stealthy Dopant-Level Hardware Trojans

      the Trojan passes the functional testing procedure recommended by Intel for its RNG design as well as the NIST random number test suite. This shows that the dopant Trojan can be used to compromise the security of a meaningful real-world target while avoiding detection by functional testing as well as Trojan detection mechanisms.

      1. Persona Silver badge

        Re: Round and round we go

        Whilst technically possible it's not an attack vector you need to worry about. If someone wants to target you that seriously we know from the Snowden disclosures there are easier ways to steal everything you type and everything sent to your screen.

        1. Norman Nescio

          Re: Round and round we go

          Whilst technically possible it's not an attack vector you need to worry about. If someone wants to target you that seriously we know from the Snowden disclosures there are easier ways to steal everything you type and everything sent to your screen.

          I agree that most people don't need to worry about it...however, some people do, and those that are not targeted can be caught in the crossfire*. As it is a dopant-level Trojan, there is nothing to stop this (or something very much like it) having been rolled out across all CPUs of a particular type, and it is possible that it could have been done without the manufacturer's explicit knowledge (serve an NSL on a few key technicians). Much as Intel's Management Engine or AMD's 'Secure Processor' (formerly known as PSP) is present in pretty much every commercially available x86 CPU you can buy, it may not be possible to avoid a Trojanned RNG. And since it passes standard statistical tests too, unless you find a test that specifically demonstrates the RNG has been Trojanned, you won't catch it.

          Until the Dual-EC-DRBG malarkey, most people would have thought such a thing was pure 'tinfoil hat' territory.

          Most people and companies are not specific targets of interest to the security and intelligence services, and as you say, don't need to worry about this. Some entirely legitimate commercial organisations do have to worry about such things - for example, if your activities are covered by the Wassenaar arrangement, you do.

          It's certainly not a bad idea to run as many statistical test suites as possible, but they never prove that the output is truly random, whereas a failure demonstrates the output is definitely not random.

          *Not least, if a malicious entity gains the knowledge of the vulnerability and uses the knowledge to exfiltrate and/or change data for monetary gain.

          Further reading:

          Stack Exchange: Cryptography - "What tests can I do to ensure my random number generator is working correctly?"

          MERS: Statistical Test Generation for Side-Channel Analysis based Trojan Detection

          International Journal of Open Information Technologies vol. 3, no. 5, 2015: Performance analysis of Hardware Trojan detection methods

          1. Persona Silver badge

            Re: Round and round we go

            Not really. If you roll it out widely it's going to get noticed, so consequently it would need to be triggered. So if you have it and it's not triggered, it's not an issue.

  8. Alan J. Wylie

    Sounds very familiar

    LWN article: Random numbers from CPU execution time jitter (2015) and HAVEGE: a linux "random" number generator that relies on instructions taking an unpredictable number of clock cycles to execute.

    1. Anonymous Coward

      Re: Sounds very familiar

      My thought exactly, so I went ahead and looked at the paper itself to see if the authors referenced HAVEGE. They have, if only in passing and not exactly favourably (they claim, without actually discussing that claim mind you, that HAVEGE is both poorly understood and overly complex). So the take-home message of this paper *really* is less "look to CPU jitter for good randomness", which indeed is nothing new, and more "here is how you can use CPU jitter to generate good randomness IN A STRAIGHTFORWARD WAY".

  9. DropBear

    No. I'll stick with a hardware RNG any time, thanks.

    1. Jonathan Richards 1
      Joke

      @Sticky DropBear

      "Any time". Haha, I see what you did there.

  10. Red Ted
    FAIL

    Very platform dependent

    On a number of configurations of embedded systems this will produce no entropy at all.

    "HWRNGs are, by nature, black boxes, unauditable, and untrustworthy, so they're out,"

    I think he will find that there are auditable tests that can prove to your chosen level how random a HWRNG actually is.

    1. Nick Kew

      Re: Very platform dependent

      Citation required.

      Seriously, I'd be interested in anything reputable that purports to be an auditable test. I'd've thought it was one of those problems where you can prove a negative but only speculate on a positive.

    2. jvroig

      Re: Very platform dependent

      Hey Red Ted,

      JV Roig here, the cited paper author.

      Testing on embedded devices is indeed a problem. However, I have tested on a lot of platforms that are a good stand-in for the embedded device market:

      -Raspberry Pi 3 (quad core, ARM Cortex A53, 1.2GHz, in-order-execution)

      -Raspberry Pi 1 (single core, ARM11, 700 MHz, in-order-execution)

      -Arduino Uno (ATmega328p, 16MHz, in-order-execution, low-res timer only [4-microsecond precision])

      The worst case for my micro-benchmark timing aggregation technique (or "SideRand" as still cited in this article and paper, but that's the old name) is the Arduino Uno. Yet, even there, gathering 3,000 bits of entropy per second was achieved. So for now, I'm pretty confident that micro-benchmark timing aggregation will work on all sorts of devices, embedded to giant servers.

      HWRNG randomness "audits" are unfortunately easily spoofed, as these can only measure the output and not really infer anything about the source. Imagine a malicious HWRNG that merely iterates through the digits of pi, say, 10-20 digits at a time, and hashes that using SHA256 to give you 256 bits of randomness. That'll look super random and pass every test. But the adversary who backdoored it to use pi digits can, from just one output, easily predict what the next one is and what numbers you have gotten previously.

      Or imagine Intel's Secure Key HWRNG. The final step it does, if I remember correctly, is AES output. If it's basically an AES-CTR DRBG, then it could just be encrypting an incrementing counter under a hard-coded key (known to whatever favorite boogeyman you have, like the NSA or Cobra). It will pass every single test you throw at it, and the NSA can still perfectly predict the output.

      In a nutshell, all that a statistical test can tell you is that there aren't any obvious patterns, and that your HWRNG isn't obviously broken. Whether it's actually unpredictable is an entirely different issue.
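      (To make the "passes every test, still predictable" point concrete, here is a toy sketch; splitmix64 stands in for the SHA256 / AES-CTR step described above, and the hard-coded starting state plays the role of the backdoor secret.)

      #include <stdio.h>
      #include <stdint.h>

      static uint64_t secret_state = 0x243F6A8885A308D3ULL;   /* known to whoever planted it */

      static uint64_t splitmix64(uint64_t x)   /* strong mixer, standing in for SHA256/AES */
      {
          x += 0x9E3779B97F4A7C15ULL;
          x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
          x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
          return x ^ (x >> 31);
      }

      uint64_t backdoored_hwrng(void)
      {
          return splitmix64(secret_state++);   /* statistically clean output, fully predictable */
      }

      int main(void)
      {
          for (int i = 0; i < 4; i++)
              printf("%016llx\n", (unsigned long long)backdoored_hwrng());
          return 0;
      }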

  11. Paddy

    How good?

    Yes, but does it pass BigCrush? https://en.wikipedia.org/wiki/TestU01

  12. Bicycle Repair Man
    Meh

    Interesting effect, wrong explanation

    While the code clearly shows a variance in the time, his explanation - that it is caused by variance in the transistors - is BOLLOCKS.

    From the article: "CPUs, Roig's paper explains, contain millions or billions of transistors, which have enough variation that no two chips perform identically." This is nonsense. While over-clockers might exploit this to crank a few extra Hz out of their rigs, CPUs are synchronous beasts, so if you run the exact same code on two identical processors, clocked at the same speed, you will get the same result.

    The variance will have many sources, from the OS servicing other threads, network interrupts, refreshes on the SDRAM, to caching, but transistor variance is not one of them. If you run this code on a bare-bones processor using on-chip RAM, then I would be extremely concerned if this showed any variance at all.

    Ironically, a HWRNG possibly does use transistor variances to guarantee no two generators follow the same sequence...

    1. Aodhhan

      Re: Interesting effect, wrong explanation

      Apparently you didn't read the paper, and/or you don't understand it. It isn't about clock cycles. It's about side channel measurement of fine performance benchmarks and the differences noticed in these benchmarks between like CPUs.

      Consider the variation in performance (and hence entropy) if one processor's temperature is 7 degrees cooler than another's, among other performance-changing variables such as workload.

      Don't you love people who make crazy claims without at least trying to understand what is being said?

      1. Claptrap314 Silver badge

        Re: Interesting effect, wrong explanation

        I'm with you on this. Again, I spent a decade doing microprocessor validation at AMD & IBM. I wish a designer would jump in on this.

        From the standpoint of timing, cpus are NOT a bunch of transistors. They are clusters of transistors gated by clocks. The term "clock cycle" refers to the fact that the electrical changes coursing through some bit of a chip are "gated" until the appropriate moment in time. That inner loop, which will be optimized to the hilt by the hardware, will execute in a fixed number of cycles barring interrupts.

        The only source of entropy in this code is the interrupts. And in a quiet system (and early boot systems can be very quiet), that's not going to generate very much noise at all.

        It might be worthwhile to take a very careful look at experimental confirmation of these numbers. They seem rather optimistic, especially during boot.

        1. jvroig

          Re: Interesting effect, wrong explanation

          Transistor variability as a factor in making results unpredictable is really just to remove the obvious concern of "well, if the target machine is using an i7-7700K, then I can know the possible values from his RNG seeder just by buying an i7-7700K myself!" I call it the "same CPU" loophole, since it kinda makes sense that having the same CPU *should* result in collecting the same timing values (all else being equal, like OS and all other platform stack components).

          But that's not so. In the cited Lawrence Livermore National Laboratory paper, they had thousands of identical servers, and no two of them showed similar characteristics when profiled under similar load.

          As for running the same task (again, after making sure it isn't optimized by the compiler, as our point is to "make the CPU work, get running time, rinse & repeat"), there are lots of factors there other than transistor variability. Data locality, cache access, temperature, voltage, task scheduling and background tasks, thread migration, dynamic power and frequency scaling... there's a lot at play, and right now it's extremely hard to account for all of them. We just know it works, because we've tested on a wide variety of platforms: an Arduino Uno microcontroller, Raspberry Pis, small-core AMD/Intel, big-core AMD/Intel, etc.

          The best we could do to minimize OS noise is to run each test at the absolute highest priority (nice -20). We also make sure each machine has minimal services running. For machines that are physically accessible, we also made sure to turn off the network adapters.

          The Arduino Uno is probably the best case. It literally does nothing but the uploaded sketch, which is just that loop over and over, and sending it to a laptop that collects the info. It still works.

          Now, I have no doubt there needs to be more work done. If, 10 years from now, we want that generation of IT people to think "Huh? Why did you old-timers ever have a problem with seeding??? LOL, every CPU is an entropy machine, why did you guys ever need those HWRNGs?", and make OS RNG seeding a problem of the past and actively a non-concern, then we should be working on simplifying the work loop (it has to be auditable, extremely so, to deter backdoors and other sorts of "oopsies"), testing on all platforms, and standardizing on the simplest, most auditable, yet still effective technique across the board (all devices and platforms).

          That's where I hope this research is headed. I want the simplest way of gathering entropy, so simple that it's auditable in one scan of an eyeball, even on live environments. And I want this simplest way to apply, mostly identically, across all devices: embedded, IoT, phones, laptops, large servers, everything. That's the blue sky scenario. When our tech gets to the point that seeding the OS RNG requires nothing but the CPU itself, and it only ever needs to do one standard algorithm across the board, then we've hit nirvana. Anyone who audits the seeding will expect the same thing everywhere, so it's harder to get confused, and therefore harder for anyone to get away with a backdoor in the seeding process. And if we rely on just any CPU (which, by definition, is what makes a device a device), then we know all our devices will get access to stronger cryptography. If we demand that manufacturers add diodes, HWRNGs, or come up with their own "safe" implementation of a noise source, you know they just won't do it (cost factor) or they'll screw it up (we can't have people roll their own crypto; that's why we need standards and research).

  13. Steve Foster
    Joke

    Ultimate Source of Entropy!

    Use ElReg article comments as your source of entropy. Far more effective than any HWRNG!

    1. Anonymous Coward
      Headmaster

      Re: Ultimate Source of Entropy!

      You're far too predictable.

      Who am I? I post this as a/c simply to pose that question. If you thought there was even the remotest possibility of guessing from my comment, that demonstrates a level of predictability that stands a chance of telling commentards apart.

      Yes, I'm a regular commentard, and most of my comments are not anonymous.

      (Yes, I did chuckle at the joke).

      1. Sgt_Oddball

        Re: Ultimate Source of Entropy!

        So not amanfrommars then....

        1. Nick Kew

          Re: Ultimate Source of Entropy!

          So not amanfrommars then....

          ... demonstrating that you can identify patterns (thus proving this entropy source isn't suitable for an RNG) without anything so ambitious as guessing the actual poster.

  14. Mike 137 Silver badge

    A diode?

    What's wrong with diode noise? It's easy and cheap to generate and pretty darned random. Just threshold it, sample it at regular intervals and convert it from serial to parallel.
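    (Digital side of that scheme, as a sketch; read_comparator() is a hypothetical helper returning the thresholded noise bit at each sampling tick, and the loop just shifts eight serial bits into one parallel byte.)

    #include <stdint.h>

    extern int read_comparator(void);   /* hypothetical: 1 if the amplified diode noise is above threshold */

    uint8_t noise_byte(void)
    {
        uint8_t b = 0;
        for (int i = 0; i < 8; i++)
            b = (uint8_t)((b << 1) | (read_comparator() & 1));   /* serial in, parallel out */
        return b;
    }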

    1. Anonymous Coward

      Re: A diode?

      Except that diode noise may not be random!

      Zener/Avalanche diodes can develop a negative resistance characteristic. When used in a circuit to produce random noise, they can develop into a Relaxation Oscillator. Such a relaxation oscillator produces a very predictable waveform, which is not at all random. That's not good.

      Take a look at Figure 5 on page 19 of On Semi's "Zener Theory and Design Considerations" handbook:

      http://www.onsemi.com/pub/Collateral/HBD854-D.PDF

      Notice those little zigzags in the expanded portion of the I-V curve, and then realize that those represent regions of negative resistance. They explain this as an artifact of "Microplasma discharge theory". If the load line happens to go through one of those points, the result will be a relaxation oscillator.

      Another discussion I've found by some researchers indicates that approximately 75 percent, or more, of off-the-shelf Zener/Avalanche diodes could be turned into relaxation oscillators with the right load/voltage being applied!

      Now, for the really disgusting part. Even if the original design doesn't oscillate, the devices can be subject to a parametric shift, perhaps caused by the formation of Frenkel Pair Defects, or perhaps caused by trapped charge in the surface passivation layer, which will shift the characteristics of the device slightly. Thus, a design which doesn't oscillate initially very well may drop into oscillation after, oh say 5000 hours of use. So says the Voice of Experience!

      Dave

  15. jvroig

    Hey El Reg Peeps, Paper Author Here

    Hi El Reg!

    This is JV Roig, the cited paper author. Glad to be back! (Some of you may remember me from the Password Research article of Tom a few months ago: https://www.theregister.co.uk/2018/05/10/smart_people_passwords/)

    I was going to join the comments section earlier (Tom FYI'd me about this article a day ago), but I was busy updating the supplementary site. It's done. If you visit https://research.jvroig.com/siderand now, it contains a lot of new information that deals with the problem better.

    I've separated the concepts into two more palatable sections - first is the paradigm ("Virtual Coin Flip"), and second is the specific implementation ("Micro-Benchmark Timing Aggregation", which replaces the SideRand name). Please give that page a visit. Not only does it have a good discussion of those two concepts, it also contains all the experimental data, for those of you interested in checking it out.

    It also deals more with previous work such as HAVEGE/haveged, Jitter Entropy, and Maxwell, particularly in how they adhere to my trust criteria for sources of randomness, and key differences with my MBTA.

    A note on reproducibility. The C code, by nature, must not be optimized. Remember, we are trying to make the CPU do work. The optimization removes this work, so there's nothing for us to measure. This is a C limitation, and you'll find that this is exactly also necessary for Stephan Mueller's Jitter Entropy.

    However, you'll find that the PHP, Ruby and Python prototypes don't need such handholding. Download those prototypes from the webpage, and you can also download the tools I used to gather and profile the resulting entropy. That's all you need to see how much entropy it can gather on your system. And of course, none of these are production code - I imagine they will remain mostly intact and similar to actual production-grade code, but primarily they are prototypes for gathering data to measure how much entropy is there.

    A final note on embedded devices: How confident am I that this works even on embedded? 100% confident. It's not included in the pre-print you've read as it still needs to be updated with newer results, but I've tested using a bunch of the original RPi 1 (700MHz ARM11, very old in-order-processing CPU), and it still works.

    In fact, I've also tested on an Arduino Uno - that's a microcontroller with a very slow, simple 16MHz processor and a low-res timer (4 microseconds). The optimized MBTA code there was able to extract 3,000 bits of entropy per second. That's overkill for seeding, even in such a worst-case environment (a combo of simple CPU + low-res timer).

    1. hammarbtyp

      Re: Hey El Reg Peeps, Paper Author Here

      While interesting, there is great irony in the fact that any attempt to access the webpage at the address given results in the following message:

      Your connection is not private

      Attackers might be trying to steal your information from research.jvroig.com (for example, passwords, messages, or credit cards). Learn more

      NET::ERR_CERT_AUTHORITY_INVALID

      Security Researcher protect thyself....

      1. jvroig

        Re: Hey El Reg Peeps, Paper Author Here

        Hey hammarbtyp,

        I'm looking into that now. The main site with my blog (https://jvroig.com) doesn't have a problem, so it looks like only the subdomains are borked. They're all supposed to have Let's Encrypt certificates.

        I'll check into my cpanel to see what's wrong. This is a very low traffic site, so it's only the entry-level Siteground plan. They're supposed to have this done automatically (which is why the main site is ok), but perhaps there's more involved configuration needed for subdomains.

        UPDATE: Before actually posting this message, I looked into the panel and it was totally my fault - I forgot to add the "research" subdomain to the Let's Encrypt interface. It's added now.

        1. hammarbtyp

          Re: Hey El Reg Peeps, Paper Author Here

          I know, Security's a bitch....

    2. Ozzard
      Big Brother

      Re: Hey El Reg Peeps, Paper Author Here

      Silly question: How easy might it be for a processor to recognise code corresponding to this algorithm and deliberately feed it predictable results, i.e. subvert the hardware to produce predictable RNG in common cases? If it's relatively simple in silicon terms, that feels like something of a risk.

  16. Anonymous Coward

    Does nobody do a literature search anymore?

    This has been done before at least twice: HAVEGE [1] is the first I know of and CPU Jitter is more recent [2]. Both of these are significantly more sophisticated and credible than what is presented in this paper. HAVEGE has issues with its expansion phase that outputs 32 bits for every bit collected. However, CPU Jitter appears to do a first class job.

    Finally, the collection loop in the paper will be optimized to nothing by any half-decent compiler, which means no randomness.

    [1] http://www.issihosts.com/haveged/

    [2] http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.html

    1. jvroig

      Re: Does nobody do a literature search anymore?

      You actually forgot one (which was less popular, less disseminated): Maxwell by Sandy Harris: https://www.kernel.org/doc/ols/2014/ols2014-harris.pdf

      I saw all four: HAVEGE (2002 research from IRISA), haveged (Gary Wuertz implementation), Maxwell, and Jitter Entropy. I knew about HAVEGE & haveged from the start. I only learned about Maxwell and Jitter Entropy later on in the research. (Hi, I'm the cited paper author)

      The main problem I have (and other researchers too - see, for example, the Maxwell paper) with HAVEGE / haveged is that it's too complex (or at least perceived to be), and it seems to require specific CPU features and per-architecture tuning.

      Jitter Entropy is a lot better, more recent, and actively maintained. It just does things that aren't necessary. In my view, that's why it's great for Linux, but will prevent it from scaling across all types of devices and platforms. (Also, Jitter Entropy MUST be compiled without optimization too. Stephan Mueller was pretty clear about that here: http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.html)

      The conference paper pre-print is very limited in details due to the page limit. However, I write more about the paradigm, key guiding principles, and implementation design of my work in the accompanying research website: https://research.jvroig.com/siderand I also deal quickly with key differences from HAVEGE/haveged, Maxwell, and Jitter Entropy.

      Also, what else do you know of that works not just in C/C++ (because C/C++ has close-to-metal features that allow direct memory manipulation, as used in HAVEGE/Jitter Entropy), but even in languages like PHP, Ruby and Python3, with a wealth of data behind it? As far as I found, nothing else. Doing micro-benchmark timing aggregation is a straightforward way to guarantee platform-agnosticism, making implementations for any purpose simple and auditable.

      Also, what else works that doesn't require a high-performance (nanosecond-precision) timer? Again, nothing else that I could find - not HAVEGE / haveged, Maxwell, or Jitter Entropy. In fact, my research so far works even for an Arduino Uno, which has an extremely simple processor (16 MHz only) and a very low res timer (4-microsecond precision only), showing a collection rate of 3,000 bits per second.

      1. onefang

        Re: Does nobody do a literature search anymore?

        "Doing micro-benchmark timing aggregation is a straightforward way to guarantee platform-agnosticism, making implementations for any purpose simple and auditable."

        I look forward to your work hitting the Debian software repos. Currently I'm using haveged.
