Penguins, only YOU can turn desktop disk IO into legacy tech

With the advent of flash-based storage memory, the prospect of banishing disk IO waits forever from transaction-based or other IO-bound server applications is close to becoming a reality. But what about desktops? We have a pretty weak example with Apple's MacBook Air ultrathin laptops, but these are underpowered little …


This topic is closed for new posts.
  1. Destroy All Monsters Silver badge

    > I want it to be faster than the speed of light

    You cannae change the laws of physics...

    But otherwise.

    1) Install flash memory that maps into the one-dimensional address space of the CPU.

    2) Tweak Virtual Memory Manager with an additional "this is flash" flag per page.

    3) Make apps aware of 2) or, barring that, just define a flash-based "RAM disk" with the appropriate filesystem and put files and memory-mapped storage there

    4) ???

    5) Sanic speed attained
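Step 3's flash-backed "RAM disk" can already be approximated with an ordinary memory-mapped file, which stands in here for flash mapped into the CPU's address space (a sketch only; the file path and 4KB "page" size are illustrative):

```python
import mmap
import os
import tempfile

# Stand-in for a flash device mapped into the CPU's address space:
# an ordinary file, memory-mapped so reads/writes are plain loads/stores.
path = os.path.join(tempfile.mkdtemp(), "flash.img")
with open(path, "wb") as f:
    f.truncate(4096)  # one "flash page"

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"   # a store straight into "flash"
    mem.flush()           # push dirty pages to the backing store
    mem.close()

# After close, the data survives in the backing store, as NVRAM would.
with open(path, "rb") as f:
    print(f.read(5))  # b'hello'
```

The VMM's "this is flash" flag from step 2 would essentially decide when that `flush()` happens.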

  2. Justin Stringfellow


    Quark doesn't run on Linux.

    1. Jon Green

      Re: but...

Quark are keeping a watchful eye on Linux (see thread), but it's clear they don't yet see a market there. However, it would be nice if they could at least look into resolving the incompatibilities with WINE, so that Linux users could run it semi-natively without Quark having to do a full port. I'm sure it wouldn't be a huge expense for Quark, and it would generate a lot of goodwill - not to mention first-mover advantage in commercial publishing software for Linux.

      Have you raised that possibility with Quark?

    2. itzman

      Re: but...

      but scribus does...

      And a cursory glance shows it ain't half bad.

      1. Anonymous Coward
        Anonymous Coward

        Re: but...

        Scribus? Is that the open source knockoff of Quark or something?

        Quark will never do Linux as the three people who could make up the target market would never get their heads round paying "the man" for a bunch of 1's and 0's.

  3. Jon Green

    Absolutely correct.

    I replaced the head-crashed hard disk in my daughter's reasonably new laptop with an SSD. From being fairly sluggardly (but light), it's now almost as responsive as my SSD-powered Zenbook. But that's still a long way from what's possible.

    Firms like Steve Wozniak's Fusion-IO "got it" early: we don't need flash memory that pretends to be a hard disk, we need flash memory (or, better still, next-generation magnetoresistive or resistive RAM) coupled directly to the processor bus. RAM-speed reads, fast writes - in the case of (M)RRAM, near-RAM-speed writes with no "write wear" - and much lower power consumption.

    I've argued it since the late 80s: moving-parts memory belongs in the same places as those who once invented it - retirement and (sorry) fond memories.

  4. Jon Double Nice


    You poor poppet.

  5. The BigYin

    Yon inconsiderate clod!

    If my VM images and other related apps don't take minutes to load, when will I make coffee?

  6. Volker Hett


    since when is a 1.8GHz i7 underpowered for desktop tasks?

    1. reno79

      Re: underpowered?

      Having the horse at the front (the processor) running on cocaine is all well and good, but if the cart (the HDD) at the back has no wheels, that horse is hobbled.

      HDDs are the biggest bottleneck in today's computers. Everything else has access times measured in microseconds or less; a 5400 or 7200 RPM drive clearly does not. Sequential read and write speeds haven't kept up with everything else, and unless the actual methods behind storage change significantly (SSDs were a step in the right direction, but not the solution), we're forever going to live with this bottleneck.

  7. Charles 9 Silver badge

    It can't be as simple as that. After all, using memory-mapped flash also means the OS has to recognize it as nonvolatile. Furthermore, the kinds of IO operations you would do through a memory map are different from those you would do through a disk analogue. The way chunks of memory are manipulated would have to be adjusted (you'd want cell-sized blocks, ideally). If you use an advanced NVRAM that can be addressed down to byte precision, then perhaps you'd want to group your IO operations into bus-aligned blocks the CPU can shuffle most easily. We have to realize there's more to be handled behind the scenes than just throwing the app into a memory-mapped flash array, and given that things can change from implementation to implementation, we need to allow a little more time for things to shake out.
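    The bus-aligned blocking Charles describes comes down to integer arithmetic; a sketch (the 64-byte cell/transfer size is invented for illustration):

```python
CELL = 64  # illustrative cell/bus-transfer size in bytes (power of two)

def aligned_range(offset, length):
    """Expand the byte range [offset, offset+length) to whole CELL-sized blocks."""
    start = offset & ~(CELL - 1)                       # round down to a cell boundary
    end = (offset + length + CELL - 1) & ~(CELL - 1)   # round up to the next one
    return start, end

# A 10-byte access at offset 100 becomes one full-cell transfer [64, 128).
print(aligned_range(100, 10))  # (64, 128)
```

A controller (or the VMM) would issue reads and writes over the expanded range and carve out the bytes the caller actually asked for.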

    1. Jon Green


      It's really only a fairly minor extension of non-uniform memory architecture (NUMA). Don't forget that the processor doesn't need to "see" the flash as anything but slowish RAM: you can use flash controllers to hide the complexities of block erases and rewrites. Cache RAM in front of the controller (plus a bit of power buffering and emergency-write circuitry to write back a "dirty" cache on power-down) presents a near-RAM-speed interface to the CPU.

      That said, you would get performance improvements if the processor did operate the flash knowledgeably. It really just comes down to what your operating system can handle easily. Obviously, open-source or open kernel API OSes would be easier for hardware designers to adapt than closed systems like Windows, but I'm sure Microsoft already has allowances for non-volatile main memory somewhere: I just don't happen to know those APIs.
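      The "cache RAM in front of the flash controller, with an emergency write-back on power-down" arrangement can be sketched in miniature (all class and method names here are invented for illustration):

```python
class CachedFlash:
    """Toy model: a RAM page cache in front of a slow, persistent 'flash' store."""
    PAGE = 4096

    def __init__(self):
        self.flash = {}   # page number -> bytes (the slow, nonvolatile side)
        self.cache = {}   # page number -> bytearray (the fast RAM side)
        self.dirty = set()

    def read(self, page):
        if page not in self.cache:  # cache miss: fetch the page from "flash"
            data = self.flash.get(page, b"\xff" * self.PAGE)
            self.cache[page] = bytearray(data)
        return bytes(self.cache[page])

    def write(self, page, offset, data):
        buf = bytearray(self.read(page))   # read-modify-write, all in RAM
        buf[offset:offset + len(data)] = data
        self.cache[page] = buf
        self.dirty.add(page)               # the slow flash write is deferred

    def power_down(self):
        """The emergency write-back the power buffering would pay for."""
        for page in self.dirty:
            self.flash[page] = bytes(self.cache[page])
        self.dirty.clear()

dev = CachedFlash()
dev.write(0, 0, b"persist me")   # lands in RAM only, marked dirty
dev.power_down()                 # dirty pages flushed to the nonvolatile side
print(dev.flash[0][:10])  # b'persist me'
```

The CPU-facing side (`read`/`write`) runs at RAM speed; only `power_down` ever waits on the flash.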

  8. M Gale

    I've mentioned this before.

    Making a computer where there is no concept of "storage" and everything is just "memory"?

    No fucking thank you. That might be nirvana to some people, but to someone who wants "yank the plug" to mean "forget everything and start again", it's awful.

    That said, maybe there's some use for having a flash card in there, and having the option in the OS to have a program "run in RAM" or "run in flash". Just so long as turning it off and on again means a reboot and not just coming back up in the same broken state.

    1. Destroy All Monsters Silver badge

      Re: I've mentioned this before.

      I guess one can add an option to the bootup screen whereby "zero all working memory" can be selected on an as-needed basis. This can even be pretty fast if done correctly: just mark pages as "uninitialized" and have the OS read all zeros on first use.

      I wouldn't count on the "BIOS" manufacturers to get that even approximately right though.

    2. itzman

      Re: I've mentioned this before.

      this is a straw man.

      You could always have a 'go to jail, go directly to jail' option: erase all volatile memory and start from NVRAM/ROM... I mean, my virtual machines are there now.

      It's far, far faster to page an entire Windows installation, complete with running apps, off disk than it is to boot it.

      But I still have the choice..

      1. Nate Amsden Silver badge

        Re: I've mentioned this before.

        Not for me it's not .. at least running in a VM .. for a long time I would just hit suspend in vmware to suspend my windows VM when I was going to shut down my computer for whatever reason. I recently changed to just shutting the VM down, I'd wager it's almost 2x faster than trying to write out and read in the ~2GB worth of data in memory.

        Maybe native hibernate is much more efficient I haven't tried it recently.

    3. Anonymous Coward
      Anonymous Coward

      Re: I've mentioned this before.

      Somebody has not been paying attention whilst studying Computer Architecture 101.

      Even the computer's registers are memory; they are used to move data in and out of the ALU.

      It is simplification that allowed Alan Turing to design and implement the worlds first practical computers.

      Running a program in flash could potentially damage the flash over extended periods of time.

      The CPU doesn’t understand anything other than memory. It has no idea of the concept of the character 'A'.

      All the CPU understands is binary and how to do additions and other calculations.

      It's the video processor that knows how to control the CRT beam to display the character 'A' from its binary form.

      You could try to develop a computer using tri/quad-state electronics, and it will get extremely complicated.

      1. Destroy All Monsters Silver badge
        Paris Hilton

        Computer Architecture 101, huh?

        > It is simplification that allowed Alan Turing to design and implement the worlds first practical computers.

        You are mixing this up with John von Neumann. Actually, Turing was involved in the ACE, but that came even after the ENIAC (not to mention the Atanasoff-Berry Computer).

        > Running a program in flash could potentially damage the flash over extended periods of time

        Quite a lot of the pages are "unmodifiable" program text or even static data, so they can advantageously be left in nonvolatile memory. DB "files" can. In-memory data structures can. And of course the filesystem can. If what you are saying were true, there would not even be a use for "flash memory cunningly disguised as disk drives" today.

        1. Anonymous Coward
          Anonymous Coward

          Re: Computer Architecture 101, huh?

          All I was trying to say is that if you simplify things you can go blazingly fast.

          The Architecture for the computer has to accommodate numerous technologies.

          The controller for the flash has to be mapped to memory, otherwise the CPU wouldn't be able to control it.

          Mapping very large areas of flash memory into main memory will play havoc with the basic architecture, which has to support thousands of technologies. It might require additional electronics.

          What will you do when some better technology comes along?

          Having said all this, you could still achieve what you want by making the interface to your flash exactly like RAM DIMMs.

          1. Charles 9 Silver badge

            Re: Computer Architecture 101, huh?

            Well, memory mapping is nothing new in the modern PC. Ever since the Peripheral Component Interconnect came along, we've been memory mapping on the PC. Video memory is mapped; the 64-bit memory architecture specifically provides for a peripheral memory map (because they figured no one would reach 2^63 bytes of actual RAM in the processor's lifetime - we're hanging around 2^36 at this point, so it's probably a safe bet). Mapping a few gigs of flash memory should be easy enough to do; the trick would be to do it smartly, but the flash controller can probably handle the messy details given a well-defined specification.

      2. M Gale

        Re: I've mentioned this before.

        "The CPU doesn’t understand anything other than memory. It has no idea of the concept of the character 'A'."

        You'll find I know more about the internal architecture of a CPU, whether scalar, superscalar, stream/vector or whatever, than you think. If I could really be bothered and had the cash I could probably design a simple CPU out of logic chips, transistors, or relay switches if you like. Maybe have a go at making a hardware Brainfuck/Turing machine. Slow as hell with ripple-adders, but it'd work.

        There is still a concept of "memory" and "storage", regardless of what the CPU is doing at the gate level.

        Whatever happens with regards stuffing running programs into nonvolatile memory, I just hope I retain the option to say "KILL EVERYTHING AND RESTART YOUR SHIT."

        1. Anonymous Coward
          Thumb Up

          Re: I've mentioned this before.

          "... I just hope I retain the option to say..."

          Why wouldn't you? Keeping that feature under a "RAM only" approach is far more feasible than actually seeing the approach become a reality. I wish it would, though.

          Permanent storage started out as memory; non-RAM storage only came to exist because we couldn't, and still cannot, feasibly have 8TB of RAM in our computers. That is what strikes me as odd about this article: doesn't the author understand this and know it isn't OS-specific? Until someone can say "Here is your 2TB flash ROM you ordered for $99"... this approach just won't happen.

          Ultimately I completely agree with the author though, I think it is what we all want. After all, it is the next logical progression (excluding exotic crystal based, magnetic based, blah blah approaches).

      3. Anonymous Coward
        Anonymous Coward

        Re: I've mentioned this before.

        A computer doesn't understand anything. It's merely a machine on which turning some knobs on or off has a cascade effect of other bits moving to an on or off position.

        1. M Gale

          Re: I've mentioned this before.

          Well. A bunch of logic gates certainly doesn't understand anything aside from its current state.

          That said, you could say that a complex neural network in fact DOES understand what it has been trained to recognise, even if only in a protozoan sense of the word.

          So while a computer might not "understand", software well might do. Your brain doesn't understand a damned thing either, but I'm sure your mind does.

    4. Vic

      Re: I've mentioned this before.

      > to someone who wants "yank the plug" to mean "forget everything and start again", it's awful.

      If you've got enough RAM, just build a bastard great initrd and roll a known image into RAM every time you turn the power on.

      I do embedded boxes like that...


  9. James Chaldecott

    Have I missed the point?

    I think I've missed your point. I'm a Windows developer with very little understanding of how various storage technologies work, but I do know a bit about how Windows works (that's not understatement: I really just mean "a bit").

    Here's my understanding of how Windows works right now:

    "Virtual" address space is backed by both RAM and "disk" files. In the case of "Memory" the disk file is the "page file", in the case of memory mapped files it is the file in question. The most recently used pages of address space are kept in RAM, until you run out. When you access a page of address space that isn't currently in RAM, then that page is loaded from "disk" (this is a "page fault"). When you run out of RAM, read-only pages (e.g. code) are just discarded from RAM. Executable files (i.e. exes and dlls) are used as memory mapped files.

    So I *think* that means it works *almost* like you want already. When you load an exe, the exe file itself and all its DLL files are mapped into RAM as they are accessed. If your disk is fast, then this process will also be fast. Note that Windows also keeps a prefetch cache, which makes it page in the commonly accessed set of pages as soon as you start an app (instead of waiting for the page faults).

    Now I agree that (most) apps still don't start up instantaneously, even from an SSD, so I do wonder where the time goes. Perhaps it is somewhat down to the apps actually doing *work* during startup. I'm thinking JITting (for .Net apps), dll rebasing (where thunks have to be added when dlls collide in address space) and actual application code. That could be got rid of by leaving your apps running and just hibernating the PC when you walk away from it.

    So... I'm not really sure where you're saying the slow-down is. Is it the file-system code? SATA? Do you have empirical evidence? Note that I'm no expert on these things (let alone anything like Fusion-IO), I'd actually love to know.

    1. Ken Hagan Gold badge

      Re: Have I missed the point?

      With apologies to those who aren't Windows developers. You won't get the references, but you can probably guess.

      Having recently used PROCMON.EXE, I may be a little biased (or bitter) but I suspect that reading in the EXE is a pifflingly small fraction (<1%) of the time spent annoying the end-user with an hourglass icon. Before you even get as far as WinMain(), you have loaded and run the DLL entry points of several dozen system DLLs. For every one of those executables, the kernel will have crawled over the registry to see if various app-compat hooks or debugging hooks are required. If your program ever shows a file dialog box, you'll pull in all the shell DLLs, which rejoice in trawling the registry and file system picking up the current user's preference for just about everything that you can configure in Control Panel. Each of those registry accesses checks per-user and per-machine hives. Depending on how old the app is, every single one of those registry and file system lookups might be virtualised, so each registry access also checks the Wow64Bollocks parallel universe in both hives. File system accesses are similarly virtualised because you can never have too many directories pretending to be System32.

      And by the time you've done that and reached the very first instruction of the actual program the end-user wanted, your memory hierarchy is absolutely cache-busted, so everything runs like molasses.

    2. bazza Silver badge
      Thumb Up

      Re: Have I missed the point?

      @James Chaldecott

      "So I *think* that means it works *almost* like you want already. When you load an exe, the exe file itself and all its DLL files are mapped into RAM as they are accessed. If your disk is fast, then this process will also be fast."

      Nearly - just go one step further.

      The act of loading a .exe is to transfer something into memory to be executed. It involves a process not unlike linking, in that all the system call references are finally pointed to the actual system library routines, etc, etc.

      But if the application is already in CPU mapped memory there's not much point in going through this step every time you want to run the application. You might just as well do that when you install it, so 'running' the application becomes merely a matter of branching to the app's first machine op code instruction.

      We've been there before. In the old old days you could buy ROM chips to plug into your BBC Micro. To run whatever they contained you just told the CPU to start reading op codes from the beginning of the ROM, and off it went.

      With memory mapped storage, the idea of a block-based file system is an artificial encumbrance left over from the old days when we didn't have enough memory. The only reason the idea persists in a wholly memory-mapped age is to do with OS architectures. It's a pretty big job to change how the OSes work.

      It might never happen. As other posters have pointed out, there's plenty of end-user benefit to be had in *not* having an app in memory, ready to go at a moment's notice (security, control, etc). All of those would have to be re-established if storage became the modern equivalent of plugging in ROMs, and it probably isn't worth it from a cost-benefit point of view.

      1. James Chaldecott
        Thumb Up

        Re: Have I missed the point?


        Yes, I'd forgotten about all the resolving from function references to actual pointers stuff, but you're right. I did mention the other thing: DLL rebasing, where the pointers to functions *within* the DLL have to be modified if it doesn't load at its preferred address. The developer can help alleviate that by carefully managing their DLLs' preferred base addresses. Not much can be done about the "resolving references to other DLLs" phase, though.

        The main issue with doing that "linking" phase at install time would be that you couldn't patch the system DLLs without updating every "pre-linked" image on the system. It would also fail if any of the DLLs needed rebasing at run-time. I guess it would be possible, though. Perhaps something a bit like how NGEN works in .NET 4.5.

        In .Net 4.5 the system will (under certain conditions) automatically create native images for .Net assemblies if it thinks it would be advantageous. It also manages keeping them up to date if anything on the system changes, and purging the native images if they haven't been used for a while.

        Sounds like a mighty complicated & far-reaching change, though, with interesting application compatibility issues. I doubt it will happen any time soon.

        You can bet anything like that for native code would have the "Look how much disk space Winblow$ uses!" crowd up in arms, as well!

      2. Vic

        Re: Have I missed the point?

        "But if the application is already in CPU mapped memory there's not much point in going through this step every time you want to run the application."

        That's fairly close to what the Unix sticky bit does on executables. It leads to ... interesting execution at times :-)


  10. This post has been deleted by its author

  11. Anonymous Coward
    Anonymous Coward

    Care in the [linux] community

    I think I speak for all of us when I congratulate el-reg on giving Eadon an article all of his own, wasn't it nice of them?

    /Anon - 'cause we were all thinking it :)

  12. Anonymous Coward
    Anonymous Coward

    Different access models

    Mapping the flash devices as used in SSDs and memory sticks into processor address space is not easy, as NAND flash really isn't random access on a word boundary. You would need some form of smart device that could convert the more block oriented access flash wants into something that could drive cache-fills on a modern processor. Even if you did that, the speed of access of flash is much less than the speed of access of RAM. You really would want to be able to page flash into RAM to run programs - so you end up with something that reads blocks of flash and puts them into blocks of RAM. In other words, a flash block device tied into the virtual memory system - which is what we have now. The only real issue is moving from SATA and such to an interface more suited to flash.

    1. bazza Silver badge

      Re: Different access models

      @David D Hagood

      That's certainly true for Flash, but not so for things like memristor memory where every bit is individually read/writable. Ok, memristor isn't here yet, though HP (yes, Hewlett Packard!) are reportedly close to coming to market.

      Memristor is quite interesting. It's fast - 1 GHz - so by the time you apply the same DDR tricks to it, it can be the same speed as today's memory SIMMs. It's not block-erased like flash has to be. It can scale - HP say they *could* do 1 petabit per square centimetre (HDDs manage a few gigabits in the same area). If all that comes to fruition, a PC could have just one memory SIMM and no other storage of any sort whatsoever. And it's non-volatile. The same goes for other technologies like phase-change memory, AFAIK.

      Bit of a game changer.

  13. Infidellic_

    No one else?

    Ok I'll be the first to suggest it (and possibly be shot for it)....

    why not simply mount a ramfs/tmpfs "drive" and, on system startup, copy the necessary apps into it?

    Slower startup time, but if you're an avid suspend/hibernate user, what do you care about that?

    Can I have my £1tn now? :p
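    The tmpfs suggestion above needs no kernel work at all on a stock Linux box: /dev/shm is already a tmpfs mount writable by ordinary users. A sketch (file names are illustrative, and it falls back to a plain temp dir where /dev/shm is absent):

```python
import os
import shutil
import tempfile

# Pick a RAM-backed directory: /dev/shm is tmpfs on most Linux systems.
ram_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
app_in_ram = os.path.join(ram_dir, "myapp-copy")

# "On system startup copy necessary apps into it" - here, just one file.
src = tempfile.NamedTemporaryFile(delete=False, suffix=".bin")
src.write(b"pretend this is an application binary")
src.close()

shutil.copy(src.name, app_in_ram)
# Subsequent reads of app_in_ram are served from RAM, not the disk.
print(os.path.getsize(app_in_ram))  # 37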

  14. PaulH

    Palm Pilots

    Isn't this how Palm Pilots worked? They ran programs straight from the flash memory, instead of loading into RAM. So the memory requirement was much lower, and program startup was almost instant.

    I have time to think about this every time I select an app on a modern device and wait for it to load.

    1. M Gale

      Re: Palm Pilots

      Also how more ancient cart based games consoles worked, where the cart essentially became part of the memory map.

      1. Simon Harris

        Re: Palm Pilots

        I still have my circa 1980 Acorn Atom WordPack word processor ROM floating around somewhere.

        That mapped into and ran directly from the 4K address space from A000 to AFFF.

  15. ilmari

    I imagine a "live usb key"-on-ssd approach might work. Squashfs image of the OS, initrd script to preload all of that into ram at startup, in one sequential chunk and not in random order. Optional "persistence" mode, where saved files, changed settings, etc get added to a ext4/btrfs/f2fs partition.. The latter will of course slow you back to traditional ssd level of runtime performance..

    I'm more annoyed with applications today than operating systems. On your regular SD card or USB flash, the amount of I/O that, for example, clicking back in Firefox causes is about 2 seconds of I/O busy... Why? Because databases. Everything is a database these days, and databases go to great lengths to ensure that data is on disk NOW, RIGHT AWAY, and that data is written so that no corruption or loss occurs if power is lost or the drive yanked in the middle of that write. This is the perfect recipe to defeat every I/O-reducing, optimising, make-go-faster algorithm that operating systems have.
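    That behaviour is visible, and tunable, in SQLite (which is what Firefox's history and cookie stores actually are): `PRAGMA synchronous` controls how hard the library forces each commit onto the platter. A sketch with an invented schema:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "places.db")  # name is illustrative
conn = sqlite3.connect(db_path)

# FULL fsyncs at every commit (durable, slow on flash media);
# OFF hands writes to the OS cache and moves on;
# NORMAL sits in between - one candidate "middle ground".
conn.execute("PRAGMA synchronous = NORMAL")
conn.execute("CREATE TABLE visits (url TEXT)")
for i in range(100):
    conn.execute("INSERT INTO visits VALUES (?)", (f"http://example.com/{i}",))
conn.commit()  # with synchronous=FULL, this is where the drive would grind

count = conn.execute("SELECT COUNT(*) FROM visits").fetchone()[0]
print(count)  # 100
conn.close()
```

Relaxing the pragma trades the "no loss on power yank" guarantee for exactly the reduction in I/O traffic being asked for.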

    Some people might care if a website cookie was lost, or that the visited/not-visited status of a link is committed to disk within 1ms of you clicking the link. Personally I'm not that fussy. Sometimes when running from USB live keys, I just copy and symlink the dot-mozilla directory to tmpfs (ramdisk) before starting Firefox, and copy it back once I exit Firefox. That means I avoid sqlite's disk-abusing tendencies, but with the risk that the entire browser session's history, cookies and saved forms/passwords data is lost to the state it was in before starting Firefox.

    I have a feeling there should be some middle ground between the default extreme, and my hacked together extreme.

    People tell me Windows has largely started to ignore requests made by apps wanting to commit data to disk immediately. I guess they must have a new API for things that /really/ do want/need it. Soon enough every app catches on to the new way of doing things, and we're back at slow I/O, and have to ignore requests through the new API and create a new new API for things that really, really want it. Sigh.

    1. Anonymous Coward
      Anonymous Coward

      What would be the purpose of what you're suggesting? Computers are fast enough already.

      What you're suggesting won't put an end to viruses; it will just make them less persistent.

    2. Anonymous Coward
      Anonymous Coward

      How would you update existing software (downloaded from certificate verified site) or install new apps?

    3. david 12 Silver badge

      ignore requests made by apps wanting to commit data

      If true, not for the first time. When they went from Win3.1 to Win95 they did just that: the write-to-disk-now flag was ignored, to make applications faster. This magically made all database software alkaline.

      The work-around, which you still see in use today, is to require people to shut down Windows before turning off the computer. Nowadays, that has been implemented in hardware: when you press the OFF button on your computer, it sends an emergency shutdown message to Windows, which shuts down as fast as it can, while the computer's power stays up for as long as it can.

      Anyway, so you are supposed to move all your database software off DOS/Windows 3.1 onto WinNT, using

      "FILE_FLAG_WRITE_THROUGH", which writes through to --- the drive controller cache. So now if you want your database to be ACID, you have to have a battery-backed hardware cache.

      But your DISK DRIVES now start caching -- so you need enterprise disk drives if you want to be sure, and your software CANNOT assume that anything smaller than the disk cache is truly written -- so you need to write everything twice: first to the log, then to the database, and on recovery you compare the two copies.

      And the bottom line is, no matter how much you cache, your database writes are limited by the speed at which you can write to the spinning disk.
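      The "write everything twice: first to the log, then to the database" scheme described above can be sketched like this (file names invented; a real engine would also checksum and sequence its log records):

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
log_path = os.path.join(workdir, "journal.log")
db_path = os.path.join(workdir, "data.db")

def durable_write(record: bytes):
    # 1) Append to the log and force it to stable storage first...
    with open(log_path, "ab") as log:
        log.write(record + b"\n")
        log.flush()
        os.fsync(log.fileno())   # the FILE_FLAG_WRITE_THROUGH analogue
    # 2) ...only then update the database proper.
    with open(db_path, "ab") as db:
        db.write(record + b"\n")
        db.flush()
        os.fsync(db.fileno())

durable_write(b"balance=100")
durable_write(b"balance=90")

# On recovery the two copies are compared; after a clean run they match.
print(open(log_path, "rb").read() == open(db_path, "rb").read())  # True
```

If power dies between steps 1 and 2, recovery replays the log; the database is never the only copy of an unconfirmed write.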

  16. Jim O'Reilly

    What you need Chris are NVDIMMs

    I like the plan, but I have to tell you that the answer is nearer than you think.

    Viking Memory, Micron and others are in the process of rolling out non-volatile DRAM solutions. These let you treat DRAM as persistent memory, so that data isn't lost when power is taken off.

    There are still barriers to using them the way you want. Most critical is saving the contents of cache and CPU registers when the power fails or, alternatively, figuring out how to recover from their loss. There's work at MS and in the Linux community on this.

    If/when this all comes together, you'll have a machine that can update stored memory in a CPU cycle or two, without all that multi-layered, creaking file IO structure.

  17. Jon Press

    It's the fault of the applications, not the storage hardware...

    Regardless of the storage technology, your operating system is only going to page in as much code as the application needs in order to execute. The more code your application has in its working set, the longer this is going to take - and the more often that memory is going to have to be paged out to make room for other applications (and you won't be writing dirty pages to SSDs). And a single application today can have more code than an entire computer room full of old, washing-machine-style disk drives could hold - a lot of which may never be executed but is almost inseparably intertwined with code that will be.

    SSDs are not so much faster than magnetic disks that you're going to see dramatic (order of magnitude) improvements in speed just by swapping one for another. In fact, you might do almost as well by simply compressing the executable files and decompressing them on being paged in - which I presume someone would have done by now if the improvement were worthwhile. If you want dramatic improvements you need much more discipline in the way code is written in the first place - and better technology at the compile stage to identify clusters of related code (back in the grim old days before virtual memory we had to handcraft overlay trees) and the optimum set of pages to load at application initialisation.

  18. Paul Crawford Silver badge

    Why a file system...

    The problem with flash memory, which it shares with HDDs in many ways, is that it is block-orientated. Also, the block erase operation is many orders of magnitude slower than a read. The job of organising reads and writes into blocks is one task of a file system.

    The other, of course, is to provide the organisation of said blocks into logical entities as files, and to do so in a manner that is reasonably fault-tolerant of (partial) media failure or unexpected crash/power-off events.

    Until someone has NV memory that can be word-addressed for writing without the erase speed penalty and the limited write numbers of flash, it still makes sense to treat 'storage' and 'memory' as different things. You could have fancy RAM caching over flash with battery back-up so it can be committed to NV storage on an unexpected event, but I guess we already have that with a laptop's suspend feature.
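    The erase penalty is easy to model: NAND can clear bits (1 to 0) in place, but raising a bit back to 1 forces a whole-block erase, and wear accrues per erase. A toy simulator (the one-byte "block" is absurdly small, purely for illustration):

```python
class FlashBlock:
    """Toy NAND erase block: one byte wide, erased state is all ones."""

    def __init__(self):
        self.bits = 0xFF        # erased state: all ones
        self.erase_count = 0    # wear accrues per erase, not per program

    def program(self, value):
        """Programming can only clear bits (AND); erase first if it can't."""
        if value & ~self.bits:  # would need some 0 -> 1 transition
            self.erase()        # slow path: whole block wiped back to 0xFF
        self.bits &= value      # fast path: clear bits down to `value`

    def erase(self):
        self.bits = 0xFF
        self.erase_count += 1

blk = FlashBlock()
blk.program(0b11110000)  # fits within the erased block: no erase needed
blk.program(0b11000000)  # only clears more bits: still no erase
blk.program(0b00001111)  # needs bits raised: forces a slow erase first
print(blk.bits, blk.erase_count)  # 15 1
```

A file system (or flash translation layer) earns its keep by batching writes so that the `erase()` path runs as rarely as possible.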

    1. Paul Crawford Silver badge

      Re: Why a file system...

      Of course there are also other tasks, such as security & auditing, etc, that a file system currently performs. The memory management of most CPUs can also enforce access control, but there is still some need for a structure & metadata to match.

  19. Mikel

    Should be trivial

    Modify the kernel to make a RAM disk, mirror the whole disk into RAM, and sync. You'll be wanting a beastly amount of RAM, but your gear should fly like the wind.

  20. Daniel von Asmuth

    Extended Memory

    PC-DOS to the rescue!

  21. Anonymous Coward
    Anonymous Coward

    Just because a MacBook Air has a slower CPU doesn't mean that SSDs have to run with such a CPU. These are ultraportables, and battery life, size and weight result in a lower-speed CPU.

    The whole reason HDDs are still popular is the capacity and price.

  22. J.G.Harston Silver badge

    Prior art

    RISC OS did this back in 1987 with DeskFS and ResourceFS. A !Run stub would effectively say program_counter=address_of_file_in_ROM, and that was it.

    1. M Gale

      Re: Prior art

      Yep, and so did the BBC Micro that preceded RISC machines.. and as mentioned above, so did earlier Acorn machines that the BBC evolved from. In fact many early micros relied on a ROM to contain their OS code.

      Difference is, these are ROM chips. Okay, maybe some of them might have been EEPROM or UVPROM, but it's not like the machine you plugged them into had write access. The software was also vastly smaller than today's wares.

      Whether a modern machine can be made with one big flat non-volatile memory space that holds everything, without being a potentially unbootable nightmare in the event of a crash, or a potential security risk... well, I think that's what the article is asking Penguinistas to have a go at finding out.

  23. Anonymous Coward
    Anonymous Coward

    This Flash obsession of yours is starting to make you say silly things Chris.

    Replacing DRam (12800MB/s) with Flash (15 MB/s) isn't going to make your computer faster.

    Syncing (I suppose mirroring in effect) DRam (12800MB/s) to Flash (15 MB/s) isn't going to work very well either.

    The reason your computer only takes 30 seconds to boot is because it takes all the code from the slow storage (that's the flash storage) and moves it into fast storage (that's the DRAM) before executing the instructions.

    If it was doing it all ( access to bits and execution of bits) at 15MB/s it might take a little longer than 30 seconds.

    If you want fast you're going to have to stump up the money for a battery backed up DRam storage solution, Flash isn't fast enough.

    1. M Gale


      Y'know, we already have flash-cached spinning disks.

      What about DRAM-cached flash cache on a spinning disk? Add in a few 1-farad capacitors to provide emergency "dump to flash" functionality in the event of a power failure (yes, they can be made smaller than the giant tin cans you get in car audio systems).. this is either a really perverse idea or a really good one.

      Or perhaps both.

      1. bazza Silver badge
        Thumb Up

        Re: Hm.

        @M Gale,

        I like the idea! Your Flash cached hdd will already have a RAM cache too. The only thing missing is the supercap UPS, and one could probably solder one of those on oneself. I might just try it myself.

  24. ilmari

    DRAM cache is what the OS does. Throw in the preload utility, and the kernel will get hints to use idle I/O time to preread in disk contents. I wish it could go further than just reading in executables and libraries though, if it senses you've got ludicrous amounts of ram.

    As for execute-in-place (XIP): due to the block nature of NAND flash, XIP only works on NOR flash, which is horribly expensive and doesn't exist in anything with a faster CPU than your fancy wristwatch.

  25. Pete Wilson

    Extremely Prior Art

    Multics, started in 1964, implemented a 'single level store'. No distinction between 'data in memory' and 'files'.

  26. STrRedWolf

    Haven't been monitoring things, have we?

    Phoronix published results of the Flash Friendly File System (aka F2FS) that's bundled in the Linux 3.8 kernel. Compared on SSD, SD cards, and even USB flash drives against EXT3, EXT4, XFS, and even BTRFS... F2FS improved performance in most cases. In some cases it even beat exFAT and NTFS.

    I'm waiting for it to stabilize on 3.9 though.

This topic is closed for new posts.

Biting the hand that feeds IT © 1998–2022