* Posts by flyguy959

8 posts • joined 12 Apr 2017

Storage mad lads VAST Data tell world+dog: We've just inhaled $80m. Oh, and here's how we do that 'no more tiers' thing


$/GB is key

This sounds to me most like Kaminario K2.N with the addition of PMEM and QLC in a denser box. I can see the similarities to FlashBlade, except this is entirely disaggregated; each FlashBlade node is basically an HCI node with compute and storage, and it can only scale to 75 nodes vs 1000.

But THE MOST IMPORTANT thing imo is that they can do all of this for the same $/GB as spinning rust, thanks to how they handle the QLC. If so, HPC, big data, even backups have no reason not to use this.

And I have a sneaking suspicion that block access is coming, either in new types of C-nodes or in the same container.

Pure Storage's would-be Data Domain killer out in March – but it's still shy about the internals


Re: Buh Bye Rubrik and Cohesity

Pure specifically mentioned this works with Veeam, Commvault, and NetBackup: all software-only products where you supply the backup targets. Not a direct threat to HCI backups.

Freshly baked storage: Take a pinch of Intel Skylake silicon, some flash powder, sprinkle into IBM's FlashSystem


I think it has SVC code running in it, but unlike the V9000 there are no actual SVC nodes plumbed into a FlashSystem 900 and shoved behind a giant bezel, and it has added dedupe. Less wiring and complexity. This looks to me like IBM's first real purpose-built dual-controller AFA, not a virtualized Storwize with flash.

Dell EMC's PowerMax migration: Let's just swaaap out this jet engine mid-flight


Re: well it's about time

Data-in-place for //m series controller upgrades, or to //x (which is basically a controller upgrade). Going from the old Dell-based controllers on the FA-4xx series to the //m appliances was an NDM across arrays: you had to install HBAs into a reserved PCIe slot on the new array's controllers to handle the cross-array data movement. There's a video series on YouTube somewhere on taking an FA-4xx to an //m20 to an //m50.

Should SANs be patched to fix the Spectre and Meltdown bugs? Er ... yes and no


Re: Safe enough - IF no third party code

Purity Run, which runs a Windows file server VM on your controllers, is now pretty suspect. Pure has said that if you don't have that feature enabled then there will be no need to patch.

Pure suggests dishing out intelligence to dumb storage shelves


Re: Since When Is This New?

Who's asking for this capability? Anyone asking for a scale-out architecture, aka current EMC, Nimble, and IBM customers. They have to have a CPU down in the shelf anyway, as a target for the NVMe protocol between the controller heads and the shelf, so why wouldn't you use it? Offloading DirectFlash management to the shelf opens up more possibilities for Purity Run now and scale-out in the future.

vCenter's phone-home 'customer improvement' feature opened remote code execution hole

This post has been deleted by a moderator

Proprietary: Pure sticks to flash module design, becomes a direct flasher


This is the thinking they used. See today's blog post on the Pure site about DirectFlash enabling software and flash to speak directly: it works an example with the Purity/RAID-3D overhead factored in on a 9.1 TB DirectFlash Module, from which usable is 5.23 TB. So from an 18.3 TB DFM you get over 10 TB usable. Their global average dedupe rate, which they publish live on the site under the Products > Purity section, is ~5:1. There are 20 DFMs. 10 x 20 x 5 = 1000.
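A quick sanity check of that arithmetic as a sketch. The raw and usable figures come from the comment above (Pure's published 9.1 TB / 5.23 TB example and the ~5:1 dedupe average); the constant names are mine, and the "10 TB" per large DFM is the comment's rounded-down figure, not an exact number:

```python
# Back-of-the-envelope effective-capacity math from the comment above.
RAW_TB_SMALL = 9.1      # DirectFlash Module raw capacity (Pure's blog example)
USABLE_TB_SMALL = 5.23  # usable after Purity/RAID-3D overhead, per the same example

# Usable fraction implied by the published example (~57%).
overhead_ratio = USABLE_TB_SMALL / RAW_TB_SMALL

RAW_TB_LARGE = 18.3                           # the larger DFM
usable_large = RAW_TB_LARGE * overhead_ratio  # ~10.5 TB, i.e. "over 10 TB usable"

DFM_COUNT = 20
DEDUPE = 5  # ~5:1 global average data reduction published by Pure

# The comment's round figure: 10 TB usable per DFM x 20 DFMs x 5:1 dedupe.
effective_tb = 10 * DFM_COUNT * DEDUPE

print(f"usable per 18.3 TB DFM: {usable_large:.2f} TB")
print(f"effective capacity: {effective_tb} TB (~1 PB)")
```

So the same ~43% flash overhead applied to the bigger module, times 20 modules and the average dedupe rate, is where the ~1 PB effective figure comes from.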


Biting the hand that feeds IT © 1998–2021