Rowhammer RAM attack adapted to hit flash storage

It's Rowhammer, Jim, but not as we know it: IBM boffins have taken the DRAM-bit-flipping-as-attack-vector trick found by Google and applied it to MLC NAND Flash. Google's Project Zero found Rowhammer in 2015, when they demonstrated that careful RAM bit-flipping in page table entries could let an attacker pwn Linux systems. …

  1. Anonymous Coward

    Holes in the Armour?

    Though they didn't extend this to a browser based attack, the mere suggestion that it might be possible is troubling. We can already unwind ASLR in JavaScript, and now there's a hint that a file system might be abused too. Some out there will now be chipping away at this one too.

    I fear that there might one day be an exploit out there that is hard to defeat without chopping off a load of functionality. I don't think that it's likely at the moment, but we're currently heading towards such a thing, not away from it.

    I sense that the approach taken so far by the webby world is to consider any conceivable exploit to be fixable with software changes. This might turn out to be incorrect. If that does indeed happen, JavaScript and HTML5 suddenly become very bad ideas - Java-and-Flash-plugins bad - and an awful lot of things break really badly. It's probably time someone started working on a Plan B.

    If this does go wrong in a big way, the whole philosophy of client side execution of random JavaScript loaded from anywhere on the web would have to be reconsidered. We've had two previous goes at it - Java and Flash - and they didn't work out. Our third go, JavaScript, is clearly our best effort yet. If that does become a liability, we should take that as a hint that client side execution of random code from t'web is fundamentally a bad idea.

    That would leave us with server side execution. That would be highly unpalatable. But it might be our only way forward.

    Anyone willing to have XServer frames in a browser?

    1. Wulfhaven

      Re: Holes in the Armour?

      Client side execution of unknown code from an unknown party is, and always will be, a bad idea.

  2. hmv

    "ECC in newer RAM"

    What was wrong with ECC in older RAM? Not quite sure how long ECC memory has been around, but it's been at least as long as my working career (which is far longer than I like to think about). And the half-arsed parity solution has been around since at least the early 1960s.

    1. bob, mon!

      Re: "What was wrong with ECC in older RAM?"

      In a lot of personal computers post-IBM-PC, the RAM didn't use ECC or even parity bits, because it was cheaper not to (and "it's only a personal computer").

      I had a Zenith system (Z-150) with a jumper on the motherboard that you could set if you wanted to buy and install the additional memory chips to support parity. The default was "no parity memory", though.

    2. phuzz Silver badge

      Re: "ECC in newer RAM"

      16GB kit of desktop (non-ECC) DDR3: £111

      16GB kit of server (non-ECC) DDR3: £120

      16GB kit of server (ECC) DDR3: £150

      So price is definitely an issue. I'm not sure how much adding ECC support to a motherboard costs.

      1. Anonymous Coward

        Re: "ECC in newer RAM"

        Well, you usually need 8 extra bits - i.e. x72 instead of x64 - but the reality is that most folk don't care about ECC and DIMMs are a commodity.
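        Those 8 extra bits line up with a SECDED (single-error-correct, double-error-detect) code: 64 data bits need 7 Hamming check bits plus one overall parity bit, hence x72. A toy Python sketch of the idea follows - illustrative only, as real DDR controllers use vendor-specific code matrices:

```python
# SECDED Hamming code over a 64-bit word: 64 data bits + 7 Hamming check
# bits + 1 overall parity bit = 72 bits, which is why ECC DIMMs are x72.
# Toy illustration; real memory controllers use their own H-matrices.

def encode(data_bits):
    """data_bits: list of 64 ints (0/1). Returns a 72-bit codeword."""
    assert len(data_bits) == 64
    code = [0] * 72                 # index 0 holds the overall parity bit
    it = iter(data_bits)
    for pos in range(1, 72):
        if pos & (pos - 1):         # not a power of two -> data position
            code[pos] = next(it)
    for p in (1, 2, 4, 8, 16, 32, 64):   # check bit p covers positions with bit p set
        code[p] = sum(code[pos] for pos in range(1, 72) if pos & p) % 2
    code[0] = sum(code) % 2         # overall parity enables double-error detection
    return code

def decode(code):
    """Returns (data bits after any correction, status string)."""
    syndrome = 0
    for p in (1, 2, 4, 8, 16, 32, 64):
        if sum(code[pos] for pos in range(1, 72) if pos & p) % 2:
            syndrome |= p           # failed checks spell out the error position
    overall = sum(code) % 2
    code = code[:]
    if syndrome and overall:        # one flipped bit: fix it
        code[syndrome] ^= 1
        status = "corrected"
    elif syndrome:                  # checks fail but parity balances: two flips
        status = "double-error detected"
    elif overall:                   # the overall parity bit itself flipped
        status = "corrected"
    else:
        status = "ok"
    data = [code[pos] for pos in range(1, 72) if pos & (pos - 1)]
    return data, status

word = [1, 0] * 32
cw = encode(word)
cw[13] ^= 1                         # simulate a disturb-induced bit flip
recovered, status = decode(cw)
assert recovered == word and status == "corrected"
```

        A single flipped bit is silently repaired and two flips are at least detected, which is exactly the property that makes hammering ECC memory so much harder.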

  3. ptribble

    Just verify

    It's 2017, why do people still use filesystems that don't cryptographically verify the validity of the data read from disk?

    1. Anonymous Coward

      Re: Just verify

      For many systems, the filesystems simply predate such features, and retrofitting verification brings significant performance penalties. For them, getting integrity right isn't worth it if it doesn't come quickly enough, or comes at too high a cost elsewhere in the system.

      Maybe the next step will be error-detecting and perhaps error-correcting features built into storage devices at the block level to remove this penalty, but the overhead in terms of costs, etc. may not be palatable just yet.

    2. Ken Moorhouse Silver badge

      Re: cryptographically verify

      I would think that a MITM attack - similar in principle to a rootkit - would defeat that. Trouble is that in making code versatile enough to meld with different I/O scenarios, hooks may be exposed which can be hijacked to do something beyond what they were designed to do.

  4. Tom 38

    Ext3

    So a modern checksummed file system with end to end data integrity like ZFS and (I guess) btrfs almost certainly would not be affected.
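    The end-to-end idea can be sketched in a few lines: the checksum lives in the *parent* block pointer rather than beside the data, so a bit flipped in the data block fails verification on read instead of being returned silently. A hypothetical toy model using SHA-256 (real ZFS defaults to fletcher4 checksums, with SHA-256 as an option):

```python
# Toy model of ZFS-style end-to-end block checksumming. The checksum is
# stored in the parent block pointer, not with the data, so corruption
# of the data block cannot also "fix up" its own checksum.
# Hypothetical structures, not ZFS's actual on-disk format.

import hashlib

class BlockPointer:
    def __init__(self, data: bytes, store: dict, addr: int):
        store[addr] = data                             # write the data block
        self.addr = addr
        self.checksum = hashlib.sha256(data).digest()  # kept in the parent

    def read(self, store: dict) -> bytes:
        data = store[self.addr]
        if hashlib.sha256(data).digest() != self.checksum:
            raise IOError(f"checksum mismatch at block {self.addr}")
        return data

store = {}
bp = BlockPointer(b"important records", store, addr=7)
assert bp.read(store) == b"important records"   # clean read verifies fine

# Simulate a disturb-style bit flip in the stored block:
corrupted = bytearray(store[7])
corrupted[0] ^= 0x04
store[7] = bytes(corrupted)
try:
    bp.read(store)
except IOError as e:
    print(e)        # the flip is detected rather than silently returned
```

    An attacker flipping bits underneath such a filesystem produces read errors, not silent data substitution - which is the point being made above.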

    1. jabuzz

      Re: Ext3

      An extent-based file system would be immune too, so the 20+ year old XFS, or the rather newer ext4 (whose extents came out of the Lustre work), would be immune as well. Does anyone use ext3 in 2017 outside legacy requirements? Well, I guess lots of RHEL6-based installs probably do, but how many of those are on SSDs? Certainly leaves me in the clear on my servers.

  5. Anonymous Coward

    Google Didn't find it

    Google didn't find it, they just announced it ...

    1. TeeCee Gold badge

      Re: Google Didn't find it

      Odd really, considering that Google's core business is, er, finding things...

      1. Ken Moorhouse Silver badge

        Re: Google's core business

        I think they've moved on a bit since those times.

  6. razorfishsl

    Hate to rain on their parade.

    But this was discovered back in 2008; Micron (of SSD chip fame) have a set of test programs up on Git, I think, specifically designed to do this.

    Then there is the NASA paper "Disturb Testing in Flash Memories".

  7. JulieM Silver badge

    How Big a Deal is this Really?

    This is an attack that should not be possible in theory; it is only because real-life electronic devices fall short of the ideal that accessing one memory location can influence a different one.

    Also, it is highly dependent on the layout of memory. The bit you're allowed to touch has to be located correctly relative to the bit you want to alter. On a modern system, with who knows what processes running, that is a lot easier said than done. SSDs are continuously remapping sectors to even out wear and tear, so a bunch of repeated writes to the same logical location might end up spread across different physical locations. And that's without mentioning memory allocation strategies designed to mitigate such attacks.
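    The remapping point can be illustrated with a toy flash translation layer: repeated writes to one logical sector land on whichever physical block is least worn, so an attacker cannot keep hammering a chosen physical neighbour. A simplified sketch, not any real controller's algorithm:

```python
# Toy FTL (flash translation layer) showing why wear-levelling frustrates
# a flash "hammer": every rewrite of the same logical sector is placed on
# a different, least-worn physical block. Simplified illustration only.

class ToyFTL:
    def __init__(self, n_physical: int):
        self.free = list(range(n_physical))   # physical blocks not in use
        self.mapping = {}                     # logical sector -> physical block
        self.wear = [0] * n_physical          # per-block erase counts
        self.blocks = {}                      # physical block -> contents

    def write(self, logical: int, data: bytes) -> int:
        # Greedy wear-levelling: put the write on the least-worn free block.
        phys = min(self.free, key=lambda b: self.wear[b])
        self.free.remove(phys)
        self.blocks[phys] = data
        if logical in self.mapping:           # retire and recycle the old copy
            old = self.mapping.pop(logical)
            self.wear[old] += 1
            self.free.append(old)
        self.mapping[logical] = phys
        return phys                           # the physical block it landed on

ftl = ToyFTL(n_physical=8)
landed = [ftl.write(0, b"x") for _ in range(6)]
assert len(set(landed)) == 6   # six writes to one sector, six different blocks
```

    An attacker's repeated writes to "the same place" are scattered by the controller, so lining up an aggressor next to a victim block is largely out of their hands.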

    Memory that is less vulnerable to misreads generally -- which everybody agrees is A Good Thing -- is bound to be less vulnerable to deliberate attempts to induce misreads. So systems newer than those tested, running a random mix of software, will be harder to attack.

    So I'm not sure we have much to worry about just yet.

    1. Ken Moorhouse Silver badge

      Re: a bunch of repeated writes to the same logical location...

      With your implied mention of wear-levelling I would go so far as to say:-

      ...will end up spread across different physical locations.

      In some configurations it is usual for even "read-only" data on an SSD to be shunted around periodically.

