Regular or premium? Intel pumps out Optane memory at CES

Intel has announced Optane memory products in M.2 format to ship in the second quarter of 2017. Optane is Intel’s brand for its 3D XPoint memory, jointly developed with Micron: a non-volatile memory that is faster than NAND but slower than DRAM. Intel says (PDF) “a hard disk drive coupled with Intel Optane memory affordably …

  1. Anonymous Coward
    Anonymous Coward

    I must be missing something

    Using Optane as cache for spinning rust? Talk about hype exceeding reality. Remember when Xpoint was supposed to be the SSD killer?

    1. Brandon 2

      Re: I must be missing something

      Agreed. Why would I stick 16GB (or 32GB) of anything on an M.2 slot, when I can just put a 500GB SSD on there for 150 'merican pesos?

      1. Anonymous Coward
        Anonymous Coward

        Re: I must be missing something

        "Agreed. Why would i stick 16GB (or 32GB) of anything on an M.2 slot, when I can just put a 500GB SSD on there for 150 'merican pesos?"

        Did ANYONE read the article?

        It acts as a cache between disk and main memory / processor.

        So yes, you could use your 500GB SSD for $150, but how much is that 4TB SSD again?
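
        For what it's worth, here's a rough sketch of the idea -- purely illustrative, not Intel's actual caching logic, and the sizes are placeholders: a small fast tier sits in front of a big, cheap, slow one, and hot blocks get served from the fast tier.

          # Minimal read-through LRU block cache sketch (illustrative only).
          # The cache stands in for the small Optane module; backing_read is the big slow HDD.
          from collections import OrderedDict

          class BlockCache:
              def __init__(self, backing_read, capacity_blocks):
                  self.backing_read = backing_read   # function: block number -> data (slow HDD)
                  self.capacity = capacity_blocks    # how many blocks fit in the fast tier
                  self.cache = OrderedDict()         # LRU order, oldest first

              def read(self, block):
                  if block in self.cache:            # hit: served at cache speed
                      self.cache.move_to_end(block)
                      return self.cache[block]
                  data = self.backing_read(block)    # miss: pay the spinning-rust penalty
                  self.cache[block] = data
                  if len(self.cache) > self.capacity:
                      self.cache.popitem(last=False) # evict the least recently used block
                  return data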

        1. Tom 38

          Re: I must be missing something

          Big fucking whoop, you can just use DRAM or an SSD for that. For this to be good, it's got to be either cheaper than SSD and available in higher capacities, or vastly faster. This is neither.

          What is revolutionary/interesting about a 32GB cache module that performs at the same level as an SSD?

    2. bldrco

      Re: I must be missing something

      Perhaps it's a cache for NAND SSDs?

    3. kodykantor

      Re: I must be missing something

      This really frightens me. Is this why Intel hasn't been publishing hard numbers (latency, IOPS, etc.)? Do we know the price of this? It makes me worried that Xpoint may not live up to the hype... Hopefully things will be clearer when we see the datacenter version of Optane.

      1. BillG
        Alert

        Re: I must be missing something

        I had to research this for a job assignment. According to Intel's numbers, for a copy from an internal drive to an external drive, the NAND SSD to NAND SSD copy speed was 284MBytes/sec while the Optane SSD to Optane SSD copy speed was 1.93GBytes/sec.

        This was hard drive storage.
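
        Just putting those two figures side by side (the file size below is an arbitrary example; the only real numbers are the two speeds quoted above):

          file_gb = 50                       # arbitrary example file size
          nand_mb_s = 284                    # NAND SSD -> NAND SSD figure above
          optane_gb_s = 1.93                 # Optane SSD -> Optane SSD figure above

          nand_secs = file_gb * 1000 / nand_mb_s   # ~176 s
          optane_secs = file_gb / optane_gb_s      # ~26 s
          print(f"NAND copy:   {nand_secs:.0f} s")
          print(f"Optane copy: {optane_secs:.0f} s (~{nand_secs / optane_secs:.1f}x faster)")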

        1. Tom 38

          Re: I must be missing something

          NAND SSD to NAND SSD copy speed was 284MBytes/sec while the OPTANE SSD to OPTANE SSD copy speed was 1.93GBytes/sec.

          I suspect manufactured numbers there (by Intel), because that would be the slowest SSD I've ever seen. Good PCI-E/M.2 SSDs (as they are basically the same thing) have read speeds >2 GB/s and write speeds of 1.5 GB/s.

          It would be interesting (and I guess we will see soon enough when they actually start shipping them) to see comparisons with equivalent devices; if they're saying an Optane M2 with PCI-E is faster than SSD, it should be compared against an SSD connected over M2/PCI-E.

  2. Anonymous Coward
    Anonymous Coward

    Microsoft Surface Studio needs to drop the spinning rust too.

    Microsoft's Surface Studio is a case in point: it uses a combination of 5400rpm spinning rust and an M.2 cache. Replacing the 5400rpm HDD with an SSD has been shown to increase the desktop's performance massively. At least we should be thankful you can replace the HDD, given the way Apple are taking things of late.

    Intel must think it's still 2012.

  3. rgriffith

    Is it way below the hype?

    The density is low. The glossy paper only compares its speed favourably against spinning disk. This does not appear in any way to be a flash killer.

  4. Nimby
    WTF?

    M.2 cache?

    I have to admit, I really wanted to be excited ... but the more Intel does (and does not) reveal about Xpoint, the more I think someone there really missed the boat. And the dock. And the coast.

    An M.2 card that small? As just a cache? A cache to ... what? It's not like M.2 slots grow on trees. Is this new Xpoint cache to a RAID5 of plain spinning rust somehow going to be significantly better than the M.2 NAND and a RAID5 of hybrid SSHDs that I already have?

    Personally, I'd rather use high density DIMMs and give up a pair of RAM slots for said cache. At least then I'm not losing PCIe lanes that my SLI graphics need. (Or am I?) Is Intel finally going to give us a proper number of PCIe lanes for today's hardware needs?

    If Intel has a point to this product, I guess I am clearly not getting it.

    1. Mikel

      Re: M.2 cache?

      We've covered the small capacity. Now let's talk about the large chips.

      Look at those beasts. They're huge! If this is what it takes for 32GB capacity, they're going to need 6 or 7 die shrinks before this can fit a marketable amount of capacity on that board. Might as well back RAM chips with capacitors or something.

  5. Peter X

    Useful for?

    Best case I can think of is for storing a DB transaction log. Any better ideas?

    I did wonder, with yesterday's Lenovo laptop article, how a laptop was a sensible place to put this right now (assuming its price per GB will reduce over time). I'm guessing they're trying to sell to early adopters / CEOs / PHBs?

    1. Nate Amsden Silver badge

      Re: Useful for?

      My Lenovo P50 has dual 512GB Samsung 950 Pros (PCIe), and a SATA Samsung 850 Pro 1TB.

      These are so small that Intel is really having to scrape the bottom of the ocean to find a use case.

      Most orgs will want hot-swap, high-availability storage for their databases.

  6. redpawn Silver badge

    Thank you for your order

    Would you like a shiny new data thimble or teaspoon with that new computer?

  7. Anonymous Coward
    Anonymous Coward

    So, you're taking up a DIMM slot with something 1/5 the size and 1/10 the speed of DRAM, or you're taking up an M.2 slot with something 1/30th the size and twice (or more) the cost of an SSD.

    I know I shouldn't have got my hopes up for 1TB class DIMMs at only 2-10x slower than DRAM, but one can dream.

  8. Mikel

    Add me to the chorus of WTF?

    What is this, flash storage for ants?

  9. Mark Hahn

    Intel's promise with Optane has been that it's NV and doesn't wear like flash (that is, it doesn't require a block erase whose endurance is a few hundred cycles).

    This product is pointlessly small, and certainly no faster than the many NVMe flash products on the market. But if its write endurance is extremely high, I guess that's a good sign, in the sense that, assuming Intel manages to make it 100x more dense, it would have a write-endurance advantage, if no other, versus flash.
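
    As a back-of-envelope illustration of why endurance could be the one selling point -- every number below is an assumption, not a published spec:

      capacity_gb = 32            # the module size Intel announced
      pe_cycles_nand = 300        # "a few hundred" block-erase cycles, as above
      write_amp = 2.0             # assumed write amplification; varies wildly by workload

      nand_lifetime_tb = capacity_gb * pe_cycles_nand / write_amp / 1000
      print(f"{capacity_gb}GB of NAND at {pe_cycles_nand} P/E cycles: "
            f"~{nand_lifetime_tb:.1f} TB of host writes before wear-out")
      # A cache absorbs constant rewrites, so if XPoint endurance really is orders of
      # magnitude higher, the same 32GB survives where cheap NAND would burn out.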

    Pretty scummy of them to provide no real info, though. For instance, does it provide standard NVMe, or is it some other one-off interface? Obviously, being M.2 it's just a PCIe device, but perhaps only the Intel chipset recognizes it, and only uses it for caching.

    1. Anonymous Coward
      Anonymous Coward

      Disappointed, but there are still possible use cases

      I get all the disappointment, which I share.

      However, I am starting to see some potential value-add they're providing in this first-gen tech. Despite lower or comparable IOPS (to SSD), vastly lower storage density, and lower or comparable sequential read/write (to SSD), it does have some characteristics that are easy to gloss over:

      * Latencies are much lower -- I can see this potentially being very beneficial for types of workloads that do need very high bandwidth. Think of database logs, perhaps swap files.

      * High IOPS @ low queue depths -- This is pretty significant. We're so brainwashed into seeing the latest SSDs with ~300K random IOPS. But if you look at most real-world workloads, queue depths are very low, typically less than 4. If you compare Optane at low queue depths (>100K IOPS), the best-in-class SSDs are only performing in the ~10K or so range (see the sketch after this list).

      * Endurance -- Someone else mentioned this, which is also valuable to know, especially for server workloads.
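
      A quick sketch of why queue depth matters so much here (the latency numbers are assumptions for illustration, not measured figures):

        def iops_at(queue_depth, latency_us):
            # Little's law upper bound: in-flight I/Os divided by per-I/O latency.
            return queue_depth / (latency_us * 1e-6)

        nand_latency_us = 90     # assumed typical NAND SSD 4K read latency
        xpoint_latency_us = 9    # assumed ~10x lower latency for 3D XPoint

        for qd in (1, 2, 4):
            print(f"QD{qd}: NAND ~{iops_at(qd, nand_latency_us):,.0f} IOPS, "
                  f"XPoint ~{iops_at(qd, xpoint_latency_us):,.0f} IOPS")
        # At the QD1-4 most real workloads actually run, the low-latency part dominates;
        # the headline ~300K IOPS figures only appear at queue depths users rarely hit.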

      I know I held out hope that gen 1 (while not perfect) would blow away SSDs, with a price premium for early adoption. The reality is that gen 1 will probably only optimize certain workloads, and while those won't exclusively be server-oriented workloads, the benefits to the mainstream may not be significant. But I fully expect that future generations that break the bus barrier (moving to a DRAM interface, or something new), combined with vast cost reductions and thus storage densities closer to par with SSDs, will leave us in great shape.

      Gamers are (some of) the most demanding users. But look at every SSD review in existence, including the highest-end server-grade models. Wow, those amazing sequential speeds. Now look at how much they're improving your game performance. Zilch. Nada. Nothing. Not even load times are better. SSDs look amazing on paper, but in real-world workloads, the results vary wildly.

      1. Anonymous Coward
        Anonymous Coward

        Re: Disappointed, but there are still possible use cases

        "types of workloads that do NOT need very high bandwidth"
