Up the stack with you: Microsoft's Denali project flashes skinny SSD controllers

Microsoft has lifted the lid on Project Denali, its quest to bring flash drive costs down a peg by extracting costly extra software from SSDs and running it in the host server for cost and performance efficiencies. It and co-developer CNEX Labs presented details about the project, along with its SONiC open switch networking …

  1. pixl97

    > extracting costly extra software from SSDs and running it in the host server to gain cost and performance efficiencies

Oh yeah, I remember this from years ago. WinModems. If you ever had to support dialup you'll remember what kind of crap they were. Oh yeah, WinPrinters! Hmm, WinSSD sounds absolutely terrifying if history is anything to go by at this point.

    1. Voland's right hand Silver badge

> Hmm, WinSSD sounds absolutely terrifying if history is anything to go by at this point.

      Yeah. How is that going to work for a boot disk in the first place? It will not. So we are looking at 2 disks in each machine (at least). I do not see how this can reduce costs.

      1. Levente Szileszky

        Huh? Via UEFI.

    2. katrinab Silver badge

      Alternatively, zfs vs hardware raid controllers. zfs is far superior.

  2. Andy Mac

    Hang on

    Wasn’t Denali the code name for SQL Server 2012? Are they really that short of code names?

    1. Steve Knox

      Re: Hang on

      Yeah, but storage guys never listen to database guys.

  3. Stuart Halliday

    Here we go again...

    Gee do these companies never learn?

    Don't do it.

  4. LeoP
    Paris Hilton

    Next attempt at lock-in

    What's a WinModem? A WinPrinter? They are

    - cheaper than the true thing (aka. the beancounter "wins")

- can easily be made useless by just not creating a driver for the next version of Windows (the OEM "wins")

- (mostly) don't work on anything other than Windows (Microsoft "wins")

    Can anyone remind me, who were the losers in that game?

Paris, because she definitely is a winner.

  5. Nate Amsden Silver badge

    maybe those hyperscalers will make their own SSDs too

Buy the flash direct from the manufacturer and make (or at least design) whatever SSD config you want if the bulk of the logic is higher up in the stack.

    It seems Pure and IBM and HDS have been doing this at least(making their own SSDs), so not too uncommon.

  6. talk_is_cheap

    I guess the Linux world will have fun with this.

By the time that the zfs team and, one day, an improved Btrfs team start developing for arrays of such SSDs, we may/will see some nice advantages from moving parts of the stack to the OS and main memory. As for Microsoft, all I would expect is a lot of work to tie systems to their OS while not making the investment in their file systems to take full advantage.

  7. Steve Chalmers

    Full Circle, and control of tail latency

    Amusing to think that when I started in computing 40ish years ago, disk drives were addressed by cylinder-track-sector and the operating system was responsible for bad blocks, address mapping, and the like. We, ummm, kinda got away from that because when servers went from 2 or 3 disks to 200 or 300 disks in the 1990s the higher level abstraction SCSI provided was a relief, and oh by the way that OS code became a single point of failure (and never handled multiple concurrent disk failures well anyhow).

    Of course, abstracting a lot of that detail into a disk drive got us a tail latency problem (my favorite was thermal recalibration of a drive, where the app was running just fine and then out of the blue one of the drives decided to take the better part of a second for internal housekeeping, and the app can wait thank you, in an era when server OS'es tended to blue screen if disks they depended on went away for mere seconds). And SSD's just magnify the tail latency, in an era where applications are far less able to tolerate it. Hence a server, with a handful of embedded SSDs, wanting to onload anything that causes tail latency back where it can be understood and controlled.

There be dragons here: I spent much of the last couple of years of my career helping out thinking through how Gen-Z would be used by the combination of servers, storage, and networking in the data center. The best and highest use of byte addressable storage class memory, once the write endurance of parts is 10^15ish rather than 10^9ish, is to allow applications to read and write persistent memory directly (through hardware address mapping and protection, in the style of the way server memory has been protected for the last 30 years) rather than through a storage stack. (1) Creating a requirement for a kernel crossing to read or write persistent memory risks making the OS king of the hill while wear leveling is still required, and ossifying an obstacle to a massive performance change once write endurance improves; and (2) I remember all the code in OSes' "SCSI services" layers and the like running elevator algorithms to reorder I/Os to optimize IO/sec by limiting disk seeks... still consuming CPU cycles in the 2000s when disk arrays had had very good caches for decades. What a waste of path length and CPU cycles (which of course was finally optimized out for NVMe).
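[For readers who never met one: the elevator-style reordering the comment above refers to can be sketched in a few lines. This is a toy illustration of the classic SCAN idea, not any actual OS's scheduler; the function name and cylinder numbers are invented.]

```python
def elevator_order(pending, head, direction=1):
    """Toy elevator (SCAN) scheduler: service the requests lying in the
    current direction of head travel in order of distance, then sweep
    back for the rest. 'pending' is a list of target cylinder numbers."""
    ahead = sorted((c for c in pending if (c - head) * direction >= 0),
                   key=lambda c: abs(c - head))
    behind = sorted((c for c in pending if (c - head) * direction < 0),
                    key=lambda c: abs(c - head))
    return ahead + behind

# Head at cylinder 50 moving upward: sweep up through 53, 65, 98,
# then come back down for 37 and 14.
print(elevator_order([98, 37, 14, 65, 53], head=50))  # → [53, 65, 98, 37, 14]
```

Minimising seek distance like this mattered when the bottleneck was a physical arm; on an SSD behind a big cache it is exactly the kind of wasted path length the comment describes.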

  8. sturdy1234

Putting the garbage collection, wear leveling and bad block mgmt in the server OS is a no brainer. Trying to cram that huge complexity into a low powered SSD controller is just an exercise, IMHO, in beating yourself. The vendor argument on "uptake" is all bs, since the software development complexity and testing is mind numbing on those embedded controllers. Going with brain dead hardware for the controller was always going to be adopted by anyone trying to get to market quickly. The cloud vendors do NOT want to have to wait for, say, Samsung to debug their firmware or fix bugs at a glacial pace.
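[The host-side flash management the comment advocates can be sketched as a toy flash translation layer. This is a minimal illustration of the idea only; class and field names, the endurance limit, and the block counts are all invented, and real FTLs add garbage collection of partially valid blocks, page-level mapping, and much more.]

```python
class HostFTL:
    """Toy host-side flash translation layer: each write goes to the
    least-worn healthy physical block (wear leveling), worn-out blocks
    are retired (bad block management), and a logical-to-physical map
    records where the data landed."""

    ENDURANCE_LIMIT = 1000  # invented erase-count ceiling per block

    def __init__(self, nblocks):
        self.erase_count = [0] * nblocks  # wear per physical block
        self.bad = set()                  # retired (bad) blocks
        self.map = {}                     # logical block -> physical block

    def _pick_block(self):
        # Wear leveling: always allocate the least-erased healthy block.
        healthy = (b for b in range(len(self.erase_count)) if b not in self.bad)
        return min(healthy, key=lambda b: self.erase_count[b])

    def write(self, logical):
        phys = self._pick_block()
        self.erase_count[phys] += 1            # erase-before-write cost
        if self.erase_count[phys] > self.ENDURANCE_LIMIT:
            self.bad.add(phys)                 # bad block management
        self.map[logical] = phys
        return phys
```

With four blocks, successive writes rotate round-robin across them because the allocator always picks the least-worn block; that decision, and when to pay the erase cost, is exactly the tail-latency-causing housekeeping Denali wants visible to the host.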
