Wow, still using disk and PCIe storage? You look like a flash-on victim, darling – it isn't 2014

For generations of PowerEdge, ProLiant, UCS and other x86 servers, the future has been fairly simple: more powerful multi-core processors, more memory, more PCIe bandwidth, and shrinking space and electricity needs. For example, a Gen8 ProLiant DL360e server had 1 or 2 Xeon E5-2400/2400 v2 processors, with 2/4/6/8/10 cores, and 12 x DDR3 …

  1. bazza Silver badge

    Old idea, except Flash is far from ideal for this purpose. Old, because this is the ultimate use of something like memristor and has been discussed in that context before. Flash is non-ideal because you still have to do wear levelling, or else it wears out.

    Now if HP ever did finish off memristor, or if any of the other players in that new-memory-tech game got their act together, that would be ideal. Faster than DRAM, great, non-volatile, check, wear-free lifetime, perfect. It'd be just like a SIMM that doesn't forget, ever.

  2. Storage_Person

    > "The disk-based IO stack in an OS takes too much time and is redundant"

    Yeah, who needs all that rubbish like metadata, consistency and replication?

    The problem with persistent memory at the moment is that it requires code-level changes (see http://pmem.io/2016/06/02/cpp-ctree-conversion.html for example, or the sketch below) if you want to have this great performance.

    A thin filesystem-like layer would lose you a bit of speed compared to writing to bare memory but give you an awful lot more back in terms of reliability and management. Persistent memory-native apps will come, but not for a while yet.
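
    A minimal sketch of what such a code-level change looks like, using PMDK's libpmem (the library behind pmem.io); the file path, size and the -lpmem build flag are assumptions for illustration, not anything from the article:

    ```c
    /* Sketch only: a persistent-memory-native write path via libpmem.
     * "/mnt/pmem/example" and the 4 KiB size are assumed values. */
    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        size_t mapped_len;
        int is_pmem;

        /* Map a file on a DAX-capable pmem filesystem straight into the
         * address space: no read()/write(), no block-I/O stack in the path. */
        char *addr = pmem_map_file("/mnt/pmem/example", 4096, PMEM_FILE_CREATE,
                                   0666, &mapped_len, &is_pmem);
        if (addr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        /* Update the data with ordinary CPU stores... */
        strcpy(addr, "hello, persistent world");

        /* ...but the application now owns durability: flush explicitly
         * instead of relying on the filesystem/IO stack to do it. */
        if (is_pmem)
            pmem_persist(addr, mapped_len);
        else
            pmem_msync(addr, mapped_len);

        pmem_unmap(addr, mapped_len);
        return 0;
    }
    ```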

    1. Paul Crawford Silver badge

      Memory mapping?

      "an OS takes too much time"

      For many cases you can memory-map a file and, as you initially access it, it gets paged into RAM by the virtual memory system.

      The downside is the rare occasions when it is flushed back to disk (typically only if you ask for that, or by properly un-mapping/closing the file). So you gain speed but lose consistency/integrity.
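
      A bare-bones sketch of that memory-mapping pattern with plain POSIX calls (file path and size are assumed); the msync() at the end is exactly where the speed versus consistency/integrity trade-off shows up:

      ```c
      /* Sketch: memory-map a file, update it at memory speed, flush on demand. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void) {
          int fd = open("/tmp/mapped.dat", O_RDWR | O_CREAT, 0644);
          if (fd < 0) { perror("open"); return 1; }
          if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

          char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          if (p == MAP_FAILED) { perror("mmap"); return 1; }

          /* The first touch of a page faults it into RAM via the virtual
           * memory system; after that, access runs at memory speed. */
          strcpy(p, "updated in place through the page cache");

          /* Until this point the on-disk copy may still be stale -- hence
           * the consistency/integrity caveat above. */
          msync(p, 4096, MS_SYNC);

          munmap(p, 4096);
          close(fd);
          return 0;
      }
      ```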

    2. TheVogon

      "The problem with persistent memory at the moment is that it requires code-level changes"

      No it doesn't. See https://channel9.msdn.com/events/build/2016/p466

  3. Lusty

    resilience

    This is going to push the need for resilience on DIMMs, and that's likely to be erasure coding (a toy sketch follows below). Unfortunately that'll then stomp on the CPU and we'll need another upgrade cycle :)

    This is like Violin when they emerged - a few niche environments will need it, the rest of us will salivate but ultimately make do with the normal technology, which is still fine for running the mail server, just like the 486 box in the corner was...
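
    As a feel for why the coding work lands on the CPU, here is a toy single-parity (RAID-5-style) example over made-up memory-module-sized blocks; the block count, sizes and contents are assumptions, and real erasure codes (Reed-Solomon and friends) cost considerably more per byte:

    ```c
    /* Toy sketch: XOR parity across four tiny "blocks", plus a rebuild. */
    #include <stdint.h>
    #include <stdio.h>

    #define NDATA 4   /* assumed number of data blocks */
    #define BLOCK 8   /* assumed (tiny) block size, for illustration */

    int main(void) {
        uint8_t data[NDATA][BLOCK] = { "blockA", "blockB", "blockC", "blockD" };
        uint8_t parity[BLOCK] = {0};

        /* Every write has to recompute parity -- CPU cycles spent per byte. */
        for (int i = 0; i < NDATA; i++)
            for (int j = 0; j < BLOCK; j++)
                parity[j] ^= data[i][j];

        /* Pretend block 2 has failed, then rebuild it from the survivors. */
        uint8_t rebuilt[BLOCK] = {0};
        for (int j = 0; j < BLOCK; j++) {
            rebuilt[j] = parity[j];
            for (int i = 0; i < NDATA; i++)
                if (i != 2)
                    rebuilt[j] ^= data[i][j];
        }

        printf("rebuilt block 2: %s\n", (char *)rebuilt);  /* prints "blockC" */
        return 0;
    }
    ```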

    1. Pirate Dave Silver badge

      Re: resilience

      "still fine for running the mail server, just like the 486 box in the corner was..."

      What do you mean, "was"...?

  4. chrismevans

    Don't forget the processing effort

    Don't forget in these calculations that the CPU doesn't spend 100% of its time reading and writing data from external storage. In fact, with a decent amount of DRAM and clever data mapping, the processor might not read/write that often, depending on the application.

    Also, we have to bear in mind that when the processor does 'do work', it may be at the request of an external call (e.g. a website) or some other user interaction that takes time over the network.

    All this means the delay from storage I/O might not be that impactful, if we have enough other work to do. Hurrah for large memory caches, multi-tasking and parallel programming!!
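
    A quick Amdahl's-law back-of-envelope makes the same point; the 10 per cent I/O fraction and the 100x storage speed-up below are purely assumed numbers:

    ```c
    /* Sketch: how much faster does a request get if only its storage-wait
     * portion is accelerated? (Amdahl's law with assumed inputs.) */
    #include <stdio.h>

    int main(void) {
        double io_fraction = 0.10;   /* assumed: 10% of request time waiting on storage */
        double io_speedup  = 100.0;  /* assumed: storage made 100x faster */

        /* Overall speed-up is capped by the part that is not accelerated. */
        double overall = 1.0 / ((1.0 - io_fraction) + io_fraction / io_speedup);
        printf("Overall request speed-up: %.2fx\n", overall);   /* ~1.11x */
        return 0;
    }
    ```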

  5. potatohead

    This is quite a simplification. There are plenty of situations where jobs are using remote storage (a SAN, say) to access large datasets which can't possibly be replicated locally due to the data size. In these situations the local disk is hardly hit, and file access is normally sequential, so the criterion you are quoting (access speed, or basically random access latency) is totally irrelevant.

    I'm sure there are applications that this sort of architecture would help, but there is plenty of stuff where this is not the case.

  6. Nate Amsden Silver badge

    tiny niche

    The stuff that needs this kind of tech is a tiny niche. Been supporting ops/dev type shit for 15 years now and at really no time has something faster than SSD been even thought of as a "nice to have". Current org is a typical online e-commerce business doing well over $200M/year in revenue running standard web stacks (with mysql as the DB of choice). In excess of 90% of all I/O for OLTP occurs within mysql's buffer cache. Disk I/O to the LUNs mysql lives on is tiny; it really only happens for very infrequent big, expensive, stupid queries and, far more often, for log writing (binary logs, relay logs etc). Storage array I/O workload is over 90% write, under 10% read. We probably peak at 15-20% of the array's I/O capacity (meaning it will never be a bottleneck in the lifetime we will be using it for).

    I was at another company several years ago which, many years before that, had written a proprietary in-memory database for behavioral ad targeting. We had dozens of servers with massive amounts of memory (for the time) and clunky processes for loading data into these in-memory instances. While I was there they eventually decided that in-memory was too expensive, so they re-wrote the stack so it did some layer of caching in memory (reducing memory usage probably by 80-90%) while the rest of the data sat on a NAS platform connected to a 3PAR storage array (with SATA disks, no less). So they went from in-memory to NAS on SATA, and the performance went UP, not down.

    I still believed it was a flawed architecture; I would have preferred they use local SSD storage (at the time I wanted them to use Fusion IO). But they architected it so it HAD to reside on a shared-storage NAS pool; it wasn't going to work on local storage. Too bad, missed opportunity.

    So I've seen and managed traditional mysql (and Oracle) and vmware etc, as well as in-memory systems, across thousands of linux systems at many companies over the past 15 years, and solutions like those presented in the article are, I believe, very, very niche (relative to the size of the market as a whole). I could see things changing if the costs get low enough that it makes no difference in cost/complexity whether you are using DIMM SSD or SAS SSD (or even PCIe SSD), but I don't see that happening anytime soon.

    I do believe there is a market for this kind of stuff, just a much smaller one than articles like this seem to portray.

  7. Anonymous Coward

    It's Time

    Periodically in this industry we get a chance to combine some new hardware and software advances. Then we jump up and down hard on the software stack to remove many of the abstractions and get closer to the (new) metal. Then the layers begin accumulating anew. It has been ever thus.

  8. This post has been deleted by its author

  9. Anonymous Coward

    Nambiar's Law of IO

    See http://www.odbms.org/2016/06/nambiars-law-of-io/

    Nambiar’s Law of IO

    By Raghunath Nambiar (posted by Roberto Zicari, June 29, 2016)

    Two of the most popular questions asked in the storage performance world are:

    – How do you estimate the IO bandwidth requirements?

    – How do you estimate the IOPS requirements?

    In other words, how much IO can be processed by today’s data management and datastore platforms?

    Here is my simple rule of thumb; let me call it “Nambiar’s Law of IO”, #NambiarsLawofIO:

    100 Mbytes/sec per processor core for every 7 years of software maturity for IO-bandwidth-intensive workloads. Examples: enterprise data warehouse systems, decision support systems, distributed-file-system-based data processing systems, etc.

    10,000 IOPS per core for every 7 years of software maturity for IOPS-intensive workloads. Examples: transaction processing, key-value store, transactional NoSQL, etc.

    This holds for up to 21 years, beyond which there is no significant improvement; that’s what we have seen historically.

    Examples

    Platforms in the >21 years category:

    Traditional relational database management systems (Oracle Database, Microsoft SQL Server, etc):

    – Data warehouse: 24 (number of cores in today’s popular 2-socket x86 servers) x 100 Mbytes/sec (IO bandwidth that can be processed by the software) x 3 (>21 years) = 7.2 GBytes/sec

    – Transactional: 24 (number of cores in today’s popular 2-socket x86 servers) x 10,000 (IOPS that can be processed by the software) x 3 (>21 years) = 720K IOPS (small-block random r/w)

    Platforms in the <7 years category:

    New generation distributed datastore systems:

    – Hadoop: 24 (number of cores in today’s popular 2-socket x86 servers) x 100 Mbytes/sec (IO bandwidth that can be processed by the software) x 1 (<7 years) = 2.4 GBytes/sec

    – Cassandra: 24 (number of cores in today’s popular 2-socket x86 servers) x 10,000 (IOPS that can be processed by the software) x 1 (<7 years) = 240K IOPS

    https://www.linkedin.com/in/raghunambiar
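
    Coding the rule of thumb up literally reproduces the figures quoted above; the maturity multiplier here is read off the worked examples as 1x below 7 years, one step per 7 years, capped at 3x:

    ```c
    /* Sketch of the rule of thumb above, with the core count and maturity
     * values taken from the worked examples. */
    #include <stdio.h>

    static double maturity_factor(int years) {
        int f = years / 7;    /* one unit per 7 years of software maturity */
        if (f < 1) f = 1;     /* floor: even young platforms get 1x */
        if (f > 3) f = 3;     /* cap: no further gain beyond 21 years */
        return (double)f;
    }

    int main(void) {
        int cores = 24;  /* cores in "today's popular 2-socket x86 servers" */

        /* Traditional RDBMS: >21 years of software maturity */
        printf("Data warehouse: %.1f GBytes/sec\n", cores * 0.1 * maturity_factor(25));
        printf("Transactional : %.0fK IOPS\n",      cores * 10.0 * maturity_factor(25));

        /* New-generation distributed datastores: <7 years */
        printf("Hadoop        : %.1f GBytes/sec\n", cores * 0.1 * maturity_factor(5));
        printf("Cassandra     : %.0fK IOPS\n",      cores * 10.0 * maturity_factor(5));
        return 0;
    }
    ```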

