I must be missing something
Using Optane as cache for spinning rust? Talk about hype exceeding reality. Remember when Xpoint was supposed to be the SSD killer?
Intel has announced Optane memory products in M.2 format to ship in the second quarter of 2017. Optane is Intel's brand for its 3D XPoint memory, jointly developed with Micron: a non-volatile memory that is faster than NAND but slower than DRAM. Intel says (PDF) "a hard disk drive coupled with Intel Optane memory affordably …
"Agreed. Why would i stick 16GB (or 32GB) of anything on an M.2 slot, when I can just put a 500GB SSD on there for 150 'merican pesos?"
Did ANYONE read the article?
It acts as a cache between disk and main memory / processor.
So yes, you could use your 500GB SSD for $150, but how much is that 4TB SSD again?
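To make the caching point concrete, here is a minimal sketch of the idea: a small fast tier (say, 32GB of Optane) fronting a large slow tier (a multi-TB HDD), modelled as an LRU cache keyed by block number. This is purely illustrative; real caching drivers such as Intel's RST do far more (write-back policies, pinning, prefetch).

```python
# Minimal LRU block-cache sketch: small fast tier in front of a big slow tier.
# Illustrative only -- real caching drivers are much smarter than this.
from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()          # block_id -> data, in LRU order

    def read(self, block_id, read_from_disk):
        if block_id in self.cache:          # hit: served at cache-tier speed
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = read_from_disk(block_id)     # miss: pay the HDD's latency
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least-recently-used block
        return data

cache = BlockCache(capacity_blocks=2)
disk_reads = []
backing = lambda b: disk_reads.append(b) or f"data{b}"
for b in (1, 2, 1, 3, 1):
    cache.read(b, backing)
print("disk reads:", disk_reads)   # [1, 2, 3] -- block 1 hit the cache twice
```

The payoff is exactly the trade-off being argued about here: hot blocks are served at Optane speed while cold data stays on cheap spinning rust.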
Big fucking whoop, you can just use DRAM or an SSD for that. For this to be good, it's got to be either cheaper than SSD and available in higher capacities, or vastly faster. This is neither.
What is revolutionary/interesting about a 32GB cache module that performs at the same level as an SSD?
This really frightens me. Is this why Intel hasn't been publishing hard numbers (latency, IOPS, etc.)? Do we know the price of this? It makes me worried that Xpoint may not live up to the hype... Hopefully things will be more clear when we see the datacenter version of Optane.
I had to research this for a job assignment. According to Intel's numbers, for a copy from an internal drive to an external drive, the NAND SSD to NAND SSD copy speed was 284MBytes/sec while the Optane SSD to Optane SSD copy speed was 1.93GBytes/sec.
This was hard drive storage.
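A quick back-of-the-envelope check of those quoted copy figures (taking them at face value, decimal units as vendors quote them):

```python
# Speedup implied by Intel's quoted drive-to-drive copy figures (from above).
nand_mb_s = 284            # NAND SSD -> NAND SSD copy, MB/s
optane_gb_s = 1.93         # Optane SSD -> Optane SSD copy, GB/s

optane_mb_s = optane_gb_s * 1000       # decimal GB -> MB
speedup = optane_mb_s / nand_mb_s
print(f"Optane copy is ~{speedup:.1f}x faster")          # ~6.8x

size_mb = 100_000                      # time to copy a 100 GB file
print(f"NAND:   {size_mb / nand_mb_s / 60:.1f} min")     # ~5.9 min
print(f"Optane: {size_mb / optane_mb_s / 60:.1f} min")   # ~0.9 min
```

So roughly a 7x advantage on Intel's own numbers, though as the next comment notes, 284MB/s is suspiciously slow for a modern NVMe SSD.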
I suspect manufactured numbers there (by Intel), because that would be the slowest SSD I've ever seen. Good PCI-E/M2 SSDs (as they are basically the same thing) have read speeds >2 GB/s and write of 1.5 GB/s.
It would be interesting (and I guess we will see soon enough when they actually start shipping them) to see comparisons with equivalent devices; if they're saying an Optane M2 with PCI-E is faster than SSD, it should be compared against an SSD connected over M2/PCI-E.
Microsoft's Surface Studio is a case in point: it uses a combination of 5400rpm spinning rust and an M.2 cache. A replacement SSD for the 5400rpm HDD has been shown to increase the desktop's performance massively. At least we should be thankful you can replace the HDD, given the way Apple are taking things of late.
Intel must think it's 2012 still.
I have to admit, I really wanted to be excited ... but the more Intel does (and does not) reveal about Xpoint, the more I think someone there really missed the boat. And the dock. And the coast.
An M.2 card that small? As just a cache? A cache to ... what? It's not like M.2 slots grow on trees. Is this new Xpoint cache to a RAID5 of plain spinning rust somehow going to be significantly better than the M.2 NAND and a RAID5 of hybrid SSHDs that I already have?
Personally, I'd rather use high density DIMMs and give up a pair of RAM slots for said cache. At least then I'm not losing PCIe lanes that my SLI graphics need. (Or am I?) Is Intel finally going to give us a proper number of PCIe lanes for today's hardware needs?
If Intel has a point to this product, I guess I am clearly not getting it.
We've covered the small capacity. Now let's talk about the large chips.
Look at those beasts. They're huge! If this is what it takes for 32GB capacity, they're going to need 6 or 7 die shrinks before this can fit a marketable amount of capacity on that board. Might as well back RAM chips with capacitors or something.
Best case I can think of is for storing a DB transaction log. Any better ideas?
I did wonder with yesterday's Lenovo laptop article how a laptop was a sensible place to put this right now (assuming its price per GB will reduce over time). I'm guessing they're trying to sell to early adopters / CEOs / PHBs?
So, you're taking up a DIMM slot with something 1/5 the size and 1/10 the speed of DRAM, or you're taking up an m2 slot with something 1/30th the size and twice (or more) the cost of an SSD.
I know I shouldn't have got my hopes up for 1TB class DIMMs at only 2-10x slower than DRAM, but one can dream.
Intel's promise with Optane has been that it's NV and doesn't wear like flash (that is, it doesn't require a block erase whose endurance is a few hundred cycles.)
This product is pointlessly small, and certainly no faster than the many NVMe flash products on the market. But if its write endurance is extremely high, I guess that's a good sign. In the sense that, assuming Intel manages to make it 100x more dense, it would have a write-endurance advantage, if no other, versus flash.
Pretty scummy of them to provide no real info, though. For instance, does it provide standard NVMe, or is it some other one-off interface? Obviously, being M.2 it's just a PCIe device, but perhaps only the Intel chipset recognizes it, and only uses it for caching.
I get all the disappointment, which I share.
However, I am starting to see some potential value-add they're providing in this first-gen tech. Despite lower or comparable IOPS (to SSD), vastly lower storage density, and lower or comparable sequential read/write (to SSD), it does have some characteristics that are easy to gloss over:
* Latencies are much lower -- I can see this potentially being very beneficial for workloads that need very low latency. Think of database logs, perhaps swap files.
* High IOPS @ low queue depths -- This is pretty significant. We're so brainwashed into seeing the latest SSDs with ~300K random IOPS. But if you look at most real-world workloads, queue depths are very low, typically less than 4. At low queue depths Optane delivers >100K IOPS, while the best-class SSDs are only performing in the ~10K or so range.
* Endurance -- Someone else mentioned this, which is also valuable to know, especially for server workloads.
I know I held out hope that gen 1, while not perfect, would blow away SSDs, albeit at an early-adopter price premium. The reality is that gen 1 will probably only optimize certain workloads, and while those won't exclusively be server-oriented workloads, the benefits to the mainstream may not be significant. But I fully expect that with future generations that break the bus barrier (moving to a DRAM interface, or something new), combined with vast cost reductions and thus storage densities closer to par with SSDs, we'll be in great shape.
Gamers are (some of) the most demanding users. But look at every SSD review in existence, including the highest-end server-grade models. Wow, those amazing sequential speeds. Now look at how much they're improving your game performance. Zilch. Nada. Nothing. Not even load times are better. SSDs look amazing on paper, but in real-world workloads, the results vary wildly.
Analysis Supermicro launched a wave of edge appliances using Intel's newly refreshed Xeon-D processors last week. The launch itself was nothing to write home about, but a thought occurred: with all the hype surrounding the outer reaches of computing that we call the edge, you'd think there would be more competition from chipmakers in this arena.
So where are all the AMD and Arm-based edge appliances?
A glance through the catalogs of the major OEMs – Dell, HPE, Lenovo, Inspur, Supermicro – returned plenty of results for AMD servers, but few, if any, validated for edge deployments. In fact, Supermicro was the only one of the five vendors that even offered an AMD-based edge appliance – which used an ageing Epyc processor. Hardly a great showing from AMD. Meanwhile, just one appliance from Inspur used an Arm-based chip from Nvidia.
In yet another sign of how fortunes have changed in the semiconductor industry, Taiwanese foundry giant TSMC is expected to surpass Intel in quarterly revenue for the first time.
Wall Street analysts estimate TSMC will grow second-quarter revenue 43 percent year-on-year to $18.1 billion. Intel, on the other hand, is expected to see sales decline 2 percent sequentially to $17.98 billion in the same period, according to estimates collected by Yahoo Finance.
The potential for TSMC to surpass Intel in quarterly revenue is indicative of how demand has grown for contract chip manufacturing, fueled by companies like Qualcomm, Nvidia, AMD, and Apple who design their own chips and outsource manufacturing to foundries like TSMC.
Intel has found a new way to voice its displeasure over Congress' inability to pass $52 billion in subsidies to expand US semiconductor manufacturing: withholding a planned groundbreaking ceremony for its $20 billion fab mega-site in Ohio that stands to benefit from the federal funding.
The Wall Street Journal reported that Intel was tentatively scheduled to hold a groundbreaking ceremony for the Ohio manufacturing site with state and federal bigwigs on July 22. But, in an email seen by the newspaper, the x86 giant told officials Wednesday it was indefinitely delaying the festivities "due in part to uncertainty around" the stalled Creating Helpful Incentives to Produce Semiconductors (CHIPS) for America Act.
That proposed law authorizes the aforementioned subsidies for Intel and others, and so its delay is holding back funding for the chipmakers.
Having successfully appealed Europe's €1.06bn ($1.2bn) antitrust fine, Intel now wants €593m ($623.5m) in interest charges.
In January, after years of contesting the fine, the x86 chip giant finally overturned the penalty, and was told it didn't have to pay up after all. The US tech titan isn't stopping there, however, and now says it is effectively seeking damages for being screwed around by Brussels.
According to official documents [PDF] published on Monday, Intel has gone to the EU General Court for "payment of compensation and consequential interest for the damage sustained because of the European Commission's refusal to pay Intel default interest."
By now, you likely know the story: Intel made major manufacturing missteps over the past several years, giving rivals like AMD a major advantage, and now the x86 giant is in the midst of an ambitious five-year plan to regain its chip-making mojo.
This week, Intel is expected to detail just how it's going to make chips in the near future that are faster, less costly and more reliable from a manufacturing standpoint at the 2022 IEEE Symposium on VLSI Technology and Circuits, which begins on Monday. The Register and other media outlets were given a sneak peek in a briefing last week.
The details surround Intel 4, the manufacturing node previously known as the chipmaker's 7nm process. Intel plans to use the node for products entering the market next year, which includes the compute tiles for the Meteor Lake CPUs for PCs and the Granite Rapids server chips.
The Linux Foundation wants to make data processing units (DPUs) easier to deploy, with the launch of the Open Programmable Infrastructure (OPI) project this week.
The program has already garnered support from several leading chipmakers, systems builders, and software vendors – Nvidia, Intel, Marvell, F5, Keysight, Dell Tech, and Red Hat to name a few – and promises to build an open ecosystem of common software frameworks that can run on any DPU or smartNIC.
SmartNICs, DPUs, IPUs – whatever you prefer to call them – have been used in cloud and hyperscale datacenters for years now. The devices typically feature onboard networking in a PCIe card form factor and are designed to offload and accelerate I/O-intensive processes and virtualization functions that would otherwise consume valuable host CPU resources.
A drought of AMD's latest Threadripper workstation processors is finally coming to an end for PC makers who faced shortages earlier this year all while Hong Kong giant Lenovo enjoyed an exclusive supply of the chips.
AMD announced on Monday it will expand availability of its Ryzen Threadripper Pro 5000 CPUs to "leading" system integrators in July and to DIY builders through retailers later this year. This announcement came nearly two weeks after Dell announced it would release a workstation with Threadripper Pro 5000 in the summer.
The coming wave of Threadripper Pro 5000 workstations will mark an end to the exclusivity window Lenovo had with the high-performance chips since they launched in April.
Updated Intel has said its first discrete Arc desktop GPUs will, as planned, go on sale this month. But only in China.
The x86 giant's foray into discrete graphics processors has been difficult. Intel has baked 2D and 3D acceleration into its chipsets for years but watched as AMD and Nvidia swept the market with more powerful discrete GPU cards.
Intel announced it would offer discrete GPUs of its own in 2018 and promised shipments would start in 2020. But it was not until 2021 that Intel launched the Arc brand for its GPU efforts and promised discrete graphics silicon for desktops and laptops would appear in Q1 2022.
Lenovo has unveiled a small desktop workstation in a new physical format that's smaller than previous compact designs, but which it claims still has the type of performance professional users require.
Available from the end of this month, the ThinkStation P360 Ultra comes in a chassis that is less than 4 liters in total volume, but packs in 12th Gen Intel Core processors – that's the latest Alder Lake generation with up to 16 cores, but not the Xeon chips that we would expect to see in a workstation – and an Nvidia RTX A5000 GPU.
Other specifications include up to 128GB of DDR5 memory, two PCIe 4.0 slots, up to 8TB of storage using plug-in M.2 cards, plus dual Ethernet and Thunderbolt 4 ports, and support for up to eight displays, the latter of which will please many professional users. Pricing is expected to start at $1,299 in the US.
AMD's processors have come out on top in terms of cloud CPU performance across AWS, Microsoft Azure, and Google Cloud Platform, according to a recently published study.
The multi-core x86-64 microprocessors Milan and Rome beat Intel's Cascade Lake and Ice Lake instances in performance tests on the three most popular cloud providers, research from database company CockroachDB found.
Using the CoreMark version 1.0 benchmark – which can be limited to run on a single vCPU or execute workloads on multiple vCPUs – the researchers showed AMD's Milan processors outperformed those of Intel in many cases, and at worst statistically tied with Intel's latest-gen Ice Lake processors across both the OLTP and CPU benchmarks.