Re: There was always a near monopoly on encyclopedic knowledge
He said his Britannica was unread so it looked nicer, didn't he?
56 posts • joined 27 Feb 2008
Okay, explain to me how you do replication with SEDs while maintaining encryption end to end, which this article claims the technology supports.
Answer - you cannot.
This sounds more like Nimble's encryption, which eschews SEDs in favor of the AES offload engine in the newer Intel chipsets, has an actual key management system, and can maintain encryption end to end to a replication target.
SEDs have other drawbacks but they do allow a vendor to check the box for "Encryption at rest" and for many, that's enough.
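A toy sketch of the distinction in Python: an SED decrypts on every read, so a replication stream begins life as plaintext, while a controller that encrypts above the media layer can ship the ciphertext itself to the target. The keystream below is a hashlib stand-in for a real cipher such as AES-CTR (NOT secure, illustration only), and the key and volume names are invented.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream - a stand-in for AES-CTR. NOT secure."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"array-level key from key manager"   # invented example key
nonce = b"vol1/block/0007"                  # invented per-block nonce
block = b"customer database page contents..."

# Controller-side encryption: the block is encrypted once, above the media.
ciphertext = xor(block, keystream(key, nonce, len(block)))

# Replication ships the ciphertext unchanged - the wire never sees plaintext.
replica = ciphertext

# The target, sharing key material via the key management system, decrypts.
recovered = xor(replica, keystream(key, nonce, len(replica)))
assert recovered == block
```

With SEDs there is no equivalent move: the media key never leaves the drive, so the array must read decrypted data back out before it can replicate it.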
While you should never, ever use an absolute when making a statement, I can think of many businesses that would better serve their customers if they let some of their employees go.
But seriously, the VC world is all about risk and reward. If you work at a tech startup, you risk this sort of thing happening to you. You take the risk because you believe that the company or technology will have some sort of successful exit. Most won't.
Slaves have job security.
The Cap'n Crunch is a dead giveaway - those are some of San Jose's best doughnuts. Obviously the red carpet was rolled out for El Reg.
Re: the rest of the platform - I read the article twice and still cannot find the disruption here. "Job based" vs. "Policy based" is marketing fluff, as both approaches stem from the same workflow - you build your jobs around a defined policy or SLA. The only difference is the order you do them in.
Data protection mechanisms can be evaluated largely on the basis of data integrity and speed of recovery. Reading the article a third time, I still cannot find much information showing either of those things dramatically improved by this solution over current solutions on the market. This is not intended to be a knock on the product or company, but maybe marketing needs to figure out how to communicate their differentiation... or fix the press kit.
A lot of smart engineers can come up with a lot of great solutions. The thing is, not all of the solutions have problems associated with them.
SolidFire is an interesting product and technology. They entered the AFA market early with a differentiated approach - it's a rough market, and a couple of others have sucked all the air out of the room for an IPO. So they go the acquisition route, to a company with a tremendously poor track record for acquired technology and a horribly fragmented sales and marketing approach to anything that is not ONTAP.
In my opinion, this sounds more like a pump-and-dump rumor to try to give NTAP stock a boost (like the one a couple months ago about Cisco acquiring Nimble), since their sales performance sure won't be doing it. It would be a shame for the guys who have gotten SolidFire this far to have to exit with a NTAP acquisition.
Oh, BS. You can use a .22 for self defense....and it is a smarter weapon. True - it makes tiny holes and probably won't kill anyone (is that a bad thing?). But the report for a .38, 9mm, or whatever is going to leave you deaf at the moment you first pull the trigger - and if you missed the first shot in a darkened room - you are both blind and now deaf other than that ringing in your ears.
If I open up with a .22, I can still hear (a little) and I am pretty sure that whoever I am unloading on is at least going to have to step back and regroup a bit, and probably consider alternatives to hanging around while I plink at their knees and elbows. Minimal recoil means I am very accurate, one handed if need be.
You should not be looking for deadly force in a self defense weapon. You look to defend yourself.
Dell or anyone touting raw capacity is helping no one. Usable or effective capacity is what should be focused on. Those numbers can get a bit soft based on the nature of a particular data set. For example - if you have a database, I hope you don't expect big de-dupe numbers, because if your database de-dupes well, you probably just have a crappy database.
By factoring typical numbers for de-dupe and compression, as well as file system and RAID overhead, you should be able to come up with an effective capacity for a particular use-case - but it may be a fantastical fiction for another shop next door.
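A back-of-the-envelope sketch of that calculation in Python - every ratio here is an assumed illustrative figure, not a quote for any vendor's product:

```python
def effective_capacity_tib(raw_tib: float, raid_overhead: float = 0.20,
                           fs_overhead: float = 0.05,
                           dedupe: float = 1.0,
                           compression: float = 1.0) -> float:
    """Effective capacity = usable (after RAID + filesystem overhead)
    multiplied by assumed data-reduction ratios."""
    usable = raw_tib * (1 - raid_overhead) * (1 - fs_overhead)
    return usable * dedupe * compression

# VDI-ish workload: both de-dupe and compression help a lot.
vdi = effective_capacity_tib(100, dedupe=1.5, compression=2.0)
# Database workload: essentially no de-dupe, decent compression.
db = effective_capacity_tib(100, dedupe=1.0, compression=1.8)
print(f"VDI: ~{vdi:.0f} TiB effective, database: ~{db:.0f} TiB effective")
```

Same 100 TiB raw, wildly different effective numbers - exactly the "fantastical fiction for the shop next door" problem.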
Great. Someone knocks on the door. Your plan goes into action... "sorry guys, it's encrypted and I destroyed the key".
Then an extended stay in a windowless room, because they don't believe you.
I like the methods listed because even a jackbooted thug could look at a hard drive punched through with nailholes or melted with thermite and figure out that you really cannot get the data off it.
This is an interesting investment strategy. According to Crunchbase, Palantir has taken a total of 1.6 BEELLION dollars from 11 investors. In-Q-Tel's share is undisclosed. Share dilution is crazy at that point - I can't imagine what an IPO would need to look like, but it would be extremely volatile at that valuation.
The investors must have some other way of getting a return on this investment...
I think what you meant to say above was that Nimble does not have a FLASH ONLY device -which is true. We think keeping snapshots on the most expensive storage possible is not an awesome idea when they can be kept more efficiently on big, fat, cheap 7200 RPM platters. To a large degree, the same should be said of any data not currently "hot for the application"
Most of the rest of the comments are vitriolic and maybe a bit confusing - ONTAP and WAFL handle data very well, but the basic file system underpinnings are built for spinning media. Changing those parameters would not be trivial, and migrating to a flash-optimized architecture could easily create tremendous potential for data loss. After all the 7-mode/C-mode pain, do you think your customers would welcome another "All we have to do is reinitialize the array" story?
Then again, you may be right. EMC still seems to be selling XtremIO, after all...
But if you think that E-series and flash only is the way to go, you might want to Google "Nimble Adaptive Flash Challenge". Nimble will give your customers and prospects an Apple Watch just to evaluate our Adaptive Flash approach side by side against ANY Flash Only player.
And in closing, I never meant to imply that you or anyone at NTAP were scared - I just said that I would not want to be in your position. Offer is open if you're in the Bay Area and want to learn more about Nimble. I'll even buy lunch - I have been eating yours quite a bit lately.
Disclosure: Nimble Storage guy here.
My grandmother is also "Highly Mature" - that doesn't mean I want her driving my Porsche. I don't envy your position, Dmitri. ONTAP is a good technology with a very good story, but it was designed to work with spinning media - truly integrating and optimizing flash into that structure is not feasible because the internal nature of the media is so different. If it took ten years to figure out how to make C-mode work, re-architecting WAFL to accept a flash-style page size will simply never happen, so you need a different platform. So then you have these other two things - one a future that might appear late to market, the other a spindle-bound dinosaur with an extremely limited feature set.
Lots of good folks from Sunnyvale are finding a nice new place to work in San Jose. Let me know if you're interested in learning more.
Nimble employee here...
I sort of have to agree with the AC above calling this marketing tripe - sort of. The NV layer only becomes a bottleneck at extremely low latencies. The change from the PCIe-based NV layer to the NVDIMM saw write latency on Nimble drop from ~0.67 ms to ~0.33 ms. The previous generation was already great for performance, and a ~0.34 ms reduction is not going to have much impact for most applications - if you have an application that can benefit from a write latency improvement measured in microseconds, well, you're an interesting bloke, or at least have an interesting app.
The switch to NVDIMM did free up a slot on the PCI bus, allowing up to 6 ports per controller - 10 GbE or 16 Gb FC. That was more impactful. But the switch came at the same time the new platform launched, taking Nimble from Nehalem-family processors all the way to Ivy Bridge. Other internal changes enabled by the new chipset contributed to the latency reduction as well.
In answer to the question - "Is it enough?" - well, no, it isn't. But couple the NVDIMM with new microarchitecture and chipset features, more cores and RAM, and Nimble did manage to reduce latency and increase IOPS to over 120,000, where the previous generation topped out at about 70,000. So, yes, the controller is the bottleneck, not the NV layer.
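Little's law gives a rough feel for why the write-latency drop raises the NV layer's ceiling well past what the controller can actually drive. The queue depth below is an assumed figure; the latencies are the ones quoted above:

```python
# Little's law: outstanding I/Os = throughput x latency,
# so the IOPS ceiling at a fixed queue depth is qd / latency.
def iops_ceiling(queue_depth: int, latency_s: float) -> float:
    return queue_depth / latency_s

qd = 64                                   # assumed outstanding writes
pcie_nvram = iops_ceiling(qd, 0.00067)    # ~0.67 ms, previous generation
nvdimm = iops_ceiling(qd, 0.00033)        # ~0.33 ms, NVDIMM generation
print(f"PCIe NVRAM ceiling ~{pcie_nvram:,.0f} IOPS, "
      f"NVDIMM ceiling ~{nvdimm:,.0f} IOPS")
```

Both ceilings sit above the ~70,000 and ~120,000 IOPS the respective generations actually delivered, which is the point: the controller, not the NV layer, is the limit.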
Nimble never bothered doing a special press release about the change to NVDIMM in the second generation - it's just one component. A competitor saw fit to announce that they were planning to make a change to NVDIMM and tried to make a big splash with it. Nimble employees like myself felt compelled to point out that this was not a new innovation on the competitor's part.
Nimble uses NVDIMM for the NV Layer in all the latest Generation product, which has been shipping for several months now. Write latency was reduced with the change, but even the NVRAM card on the PCI bus that the previous generation used (very similar to the NTAP architecture mentioned above) delivered sub millisecond latency.
The switch to NVDIMM frees up a PCI slot that can be used to add additional front end ports and also so far has been easily as rock-solid as the legacy PCI Flash solution it replaced.
OK, Yeah, I get it. The idea of referring to it as Adaptive Cache really applies more to the ability to increase the ratio of the Cache layer relative to the Storage layer. With the AFS, you can change the ratio dramatically if needed, and because of the Infosite telemetry data, any recommendation or decision to increase the cache ratio is based on hard numbers, not any sort of guess work. We had some ability to do that by changing the four SSDs in the head unit to larger capacity units, but that only scales so far.
I think "Adaptive Flash" isn't a bad way to describe that new scaling dimension at all, as marketing stuff goes...I have heard worse in this biz..
Oh, and apologies to El Reg if that eye chart line card was supplied by Nimble - a CS-700 supports SIX shelves + 1 AFS in Max configuration, so a 4xCS-700 scale out config would support a potential of 24 expansion shelves and 4 additional AFS devices.
*Disclaimer - Proud Nimble Employee - but not in Marketing
Virtual arrays aside, and back on topic. Storage management tools don't need to SUCK. The problem is the insistence of an Enormous Margin Corporation on putting a line item on every single thing. Then again, if a customer's willing to pay $10,000 for an SSD, they probably will cough up a few grand to support it....
Nimble Storage Infosite:
Cloud-based telemetry of your entire storage ecosystem. Proactive modeling of the telemetry data so that problems or potential problems are identified weeks or months before they impact production. Performance, capacity, cache and CPU utilization, and latency statistics delivered on a per-volume basis. Even difficult-to-identify problems such as block misalignment can be spotted.
Exportable results and even an executive summary area to help quickly justify when you do need to grow the environment. And more - and no agents needed for any of the telemetry.
All included as what we consider "Basic support"
Not all that ironic. Nimble was founded in part by Varun Mehta - who was employee 11 at NetApp. WAFL was very good stuff when Varun and his team built it - but it was constructed in a different age. It is not perfect but it is very good. CASL was written for the state of the world today - multi-core CPUs, large geometry disks, and flash media. WAFL had none of those advantages, so it is the superior file system if you rely on mid 90's hardware architecture.
Thank you so much for your opinion on the direction Nimble should take. Many large shops are on FC and have no desire to change. FC vs. iSCSI isn't as much a performance discussion as it once was - Nimble delivers <1 ms latency on iSCSI now - but sometimes we do need to address the top three layers (layers 8-10) of the OSI model.
For those that only are familiar with the classic seven layer model:
Layer 1: physical layer
Layer 2: data link layer
Layer 3: network layer
Layer 4: transport layer
Layer 5: session layer
Layer 6: presentation layer
Layer 7: application layer
The full model adds the business environment that solutions must exist in:
Layer 8: political layer
Layer 9: financial layer
Layer 10: religious layer
As a company gets into bigger and bigger shops and opportunities, Layers 8-10 can become barriers to entry. A classic example is the storage admin who will not allow iSCSI because IP solutions mean engaging the network team, and the storage guy thinks the network guy is a tool (Layer 8). Or the company has an existing FC implementation that it wants to leverage (Layer 9). Or the DBA insists he must have FC for mystical reasons (Layer 10).
I agree, at <1 ms latency, FC is irrelevant for most shops and iSCSI will work just fine. Adding file level protocols, on the other hand, opens a new can of worms. We have many customers deploying Nimble for file services, and the choice has usually been a native gateway server (physical or virtual) for the protocols needed. In particular, Windows 2012R2 has got REAL potential for this usage.
Call me crazy, but not until you've tested it...With well deployed multipathing on a 10 GbE iSCSI network and a Nimble CS-400 device behind it - I've seen performance that tripled the incumbent "Multi-protocol" solution that cost 5x as much as the new solution...and that solution was from one of the major players well known for their fantastic NFS product.
That customer runs on Nimble now.
Nimble Employee here. NOT anonymous.
I will give you the benefit of the doubt. If you are NOT a Troll from some other storage vendor, you are reading off of a competitive placard that NetApp provided to their partners recently. I know that because one of those same partners just could not wait to share it with me. NetApp has played favorites between a few of their excellent resellers, and this excellent partner had several deal registrations denied. They went in with Nimble instead. We won - every time.
If you are not familiar with the term FUD, I will enlighten you. It is an industry acronym for "Fear, Uncertainty, and Doubt", and the stuff you are citing falls clearly into that category. Focus on the SuperMicro chassis, SATA drives, the two-drive shutdown stuff... la la la. It's sad to see a terrific company like NetApp stoop to EMC-style FUD-slinging - but I guess it is to be expected with the migration of so many excellent people from NTAP, and the hiring of so many ex-EMC sales people and, moreover, sales management.
The FUD you are spreading is largely outdated and/or irrelevant. Jabs at the hardware layer on Nimble, and the implication that the device is therefore unreliable, ignore the fact that we have thoroughly documented >99.999% availability across our customer base.
The mechanics of CASL may be a bit past you, but with a bit of research you could figure out for yourself why data services do not need to be paused against an SSD failure (note that we have seen exactly 2 SSD failures in our company history). SSDs are flash media in a hard-drive form factor - we do not treat them as hard drives, because they are not.
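For scale, here is what >99.999% availability means as a downtime budget - a quick arithmetic sketch:

```python
minutes_per_year = 365.25 * 24 * 60

# Downtime budget shrinks by 10x with each added "nine" of availability.
for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    budget = minutes_per_year * (1 - availability)
    print(f"{label}: {budget:.1f} minutes of downtime per year")
```

Five nines leaves a little over five minutes a year - the kind of number you can only claim with fleet-wide telemetry behind you.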
Erm - because two simultaneous drive failures are statistically *extremely* improbable? Two drives failing from internal defects within RAID rebuild times has not *EVER* happened in the field on any Nimble array, and we are talking about *thousands* of years of combined soak time between the arrays out in the field. And since we have the InfoSight™ telemetry data, we can state that authoritatively.
As was stated before - Two drives failing within RAID rebuild window indicates some external force acting on the array - water rising in the data center, crazed sysadmin with a sledgehammer, etc. In those cases, if two drives HAVE failed within RAID rebuild window, Nimble's view is that the third failure is imminent.
Rather than run with no parity, Nimble has made the design choice to protect the data on the SAN, which will ensure fast recovery whenever the external force is mitigated.
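A rough model of why an independent double failure inside one rebuild window is so unlikely - the annual failure rate, rebuild time, and group size below are assumed figures for illustration, not Nimble specifics:

```python
afr = 0.02             # assumed 2% annualized failure rate per drive
rebuild_hours = 12.0   # assumed RAID rebuild window
survivors = 11         # e.g. a 12-drive group after the first failure

hours_per_year = 365.25 * 24
# Probability one given drive fails during the rebuild window.
p_one = afr * (rebuild_hours / hours_per_year)
# Probability that at least one of the surviving drives does.
p_second = 1 - (1 - p_one) ** survivors
print(f"~{p_second:.4%} chance of a second independent failure per rebuild")
```

At roughly 3 in 10,000 per rebuild under these assumptions, a genuine double failure points to a common external cause - water, fire, the sledgehammer - rather than bad luck, which is the design assumption described above.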
-Disclaimer : Proud Nimble Employee
If you look at an EqualLogic controller, it isn't exactly an x86 processor in there. The OS would need to be ported over to a different processor architecture. Why bother? EQL storage layer is really primitive - the network stack is sophisticated and elegant. To take it to x86 or x64 would mean re-writing all the good stuff for the new processor - and at the end of all the effort you have a mediocre to poor storage layer running on an unproven platform.
Again, why bother? Take what's left of EQL down to the basement of Round Rock 1 and put them out of their misery. It was a good product whose time has passed.
If you're stuck with one of these things, it's probably a good idea to time those reboots for when the SPs are running < 50% load. Only high school football teams can perform at 110%... storage arrays cannot. And I agree with the AC above - is the problem the zillions of lines of the 24-year-old Clariion code, or the 64-bit parts that were strapped onto it so that marketing could call it 'VNX2'?
Focusing on the myriad shortcomings of Dell is missing the point. Michael Dell is a true innovator - but not in technology; rather, in financial models. He pioneered the "negative cash conversion cycle" and made financial magic possible - manufacturing products in such a way that it was possible to sell below cost and still make money - not an easy trick. Several business schools teach courses on the Dell model.
It so happened that for his time in history, the PC was the best platform to exploit that model, and so he ended up in the PC business. In some other age, it might have been buggy whips. Between the negative cash conversion cycle and the direct model of Sales, he built a very successful company.
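The cash conversion cycle arithmetic, with invented illustrative numbers (not Dell's actual figures):

```python
def ccc_days(dio: float, dso: float, dpo: float) -> float:
    """Cash conversion cycle = days inventory outstanding
    + days sales outstanding - days payables outstanding."""
    return dio + dso - dpo

traditional = ccc_days(dio=60, dso=45, dpo=30)  # +75: cash tied up for weeks
direct = ccc_days(dio=5, dso=30, dpo=45)        # -10: customers pay first
print(f"traditional model: {traditional:+.0f} days, "
      f"direct model: {direct:+.0f} days")
```

A negative cycle means customers' cash arrives before suppliers' bills come due - the suppliers effectively finance growth, which is how selling near or below cost can still generate cash.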
However, almost point for point, Dell in latter years has abandoned all of the practices that made them a multi-billion dollar company. First off, they attempted to embrace the channel, which is absolutely counter to the culture of the company in the first place (Please read Michael's book "Direct from Dell" for more on that) . Simultaneously, they stopped being a manufacturer and instead have shifted to an ODM model for the core products of servers and PCs.
I have no doubt Michael has a plan. I also have no doubt that he believes his plan would not be workable if he had to show quarter-on-quarter growth to the street. I suspect the plan will not bode well for current resellers, and, if I hadn't already left the company, I would be taking the golden handshake now to get out. I am anticipating a big round of arbitrage where Dell spins off a bunch of the underperforming, questionable acquisitions it made over the last few years - and there will be collateral damage across the company when that show starts. I also anticipate the channel teams will be cut further than they already have been. We're already seeing the Dell direct teams taking deals away from the channel daily - Sales upper management never really saw the value of channels; they have drunk the "Dell Direct" Kool-Aid too long.
The spread between #1 and #2 in x86 has been razor thin for years, and a few big deals will sway that claim easily. Dell does build gorgeous hardware - and those big deals are built on the DCS hardware that you can't get at the website. Couple that with a willingness to take a large deal direct in a heartbeat in order to recover the incredibly thin or negative margins, plus the special thigh grease (to help the sales rep lower his trousers quicker) and special socks with ankle handles for a better grip when pricing to "take share", and I am only surprised it took this long...
Good luck with that services play, though. To err is human, but to really screw things up you should call Dell Professional Services..