Wow, just wow!
Someone still complaining about seat belts. Wow. Just wow!
54 publicly visible posts • joined 24 Dec 2013
"taking selfies. Aside from that, I genuinely can't think of a routine use case for this feature, cool though it is."
The only use case you could identify would be better served with a higher grade sensor on the front facing camera.
Not cool in any way. It's a decent Droid phone spoilt by the 2nd screen, which just advertises that the buyer is either extremely vain or buys things without thinking them through.
John from Nutanix here
There is a difference between how Nutanix protects metadata and the actual blocks written. We don't RF5 data. I'm not trying to duck anything, but it is covered much better than I have space for at nutanixbible.com. I'm not going to point to a third party's completely wrong account of how RF2 and RF3 work (the bad maths); repeating it would just be me spreading BS.
1 node clusters mirror data and metadata locally, 2 node clusters mirror between nodes. You don't get non-disruptive upgrades on a 1 node cluster, which is one reason they aren't, by some definitions, a true cluster. Nutanix is completely open about this - no smoke and mirrors here.
I got the impression with your 'first hand experience' you had been involved in the install. My suggestion still stands: whoever was doing the install should contact their local Nutanix Channel SE. We train our partners, free of charge, how to do these things correctly. Occasionally someone (not blaming any specific person, certainly not the customer) drops the ball. When this happens we try and make sure it is not repeated. Please do pass on the suggestion, Nutanix will act on the request.
I think the big storage node thing is a bit of a straw man. It doesn't make sense to add a single massive storage node to a cluster made up of small general purpose nodes. Equally 3 nodes of 80TB is probably a sub-optimal design; five nodes of 40TB would do a better job (and provide more usable capacity because of the smaller n+1). File storage is the most common place big nodes make sense, and to get a better price point for unstructured data we now have an unstructured-only license (Files now, Object (Buckets) very soon), which makes unstructured-only clusters much more cost effective. I accept we are not always the lowest price option, and for file workloads we've tried to address this. It is a new thing so we are adjusting aspects as we go in response to customer feedback.
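To put some rough numbers on the n+1 point, here's a back-of-envelope sketch. This is my own toy model (assuming RF2 replication and one spare node's worth of rebuild headroom), not Nutanix's sizing tool, but it shows why the same raw capacity spread over more nodes yields more usable space:

```python
def usable_tb(nodes, raw_total_tb, rf=2, spare_nodes=1):
    """Toy sizing model: reserve spare_nodes' worth of capacity as n+1
    rebuild headroom, then divide the remainder by the replication factor."""
    per_node = raw_total_tb / nodes
    return (raw_total_tb - spare_nodes * per_node) / rf

print(usable_tb(3, 240))  # 80.0 TB usable from 3 x 80TB nodes
print(usable_tb(5, 240))  # 96.0 TB usable from 5 x 48TB nodes
```

Same 240TB raw in both cases; the five-node layout gives back the capacity that a bigger spare node would otherwise eat.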
I do not believe I have stated anything is incorrect without providing evidence/an explanation. You say sales are down - that is absolutely and publicly demonstrably not true. Software and support revenue was up 42% compared to the same quarter last year and 47% (to $1.1bn) for a rolling 12 months.
John from Nutanix here
I referenced the nutanixbible.com site because it details, for anyone interested, how we work. It is a judgement call as to whether RF3 is needed. There is some bad maths out on the internet on that topic.
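For the basic intuition behind that judgement call, here's a deliberately crude sketch. This is my own toy model with independent failures, not Nutanix's actual availability maths: RF keeps that many copies of each extent, so RF2 rides out one concurrent node failure and RF3 rides out two.

```python
def failures_tolerated(rf):
    """RF keeps `rf` copies of each extent on different nodes,
    so data survives rf - 1 concurrent node failures."""
    return rf - 1

def p_extent_loss(rf, p_node):
    """Crude model: probability that every node holding a copy fails,
    assuming independent failures with probability p_node each."""
    return p_node ** rf

print(failures_tolerated(2), failures_tolerated(3))        # 1 2
print(p_extent_loss(2, 0.01), p_extent_loss(3, 0.01))      # RF3 ~100x less likely here
```

Real clusters re-protect data after a failure, so the independent-failure assumption overstates risk, but the shape of the trade-off holds.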
I am sorry someone gave you bad advice about an 8 node limit, it really doesn’t exist. We have a lot of reference customers running >8 node clusters. Until today I had never encountered this mistake. We don’t have a problem with interconnect bandwidth as we scale. I appreciate some architectures do. Mostly we do it all on two 10GbE per node, but we can have more ports and/or 25/40GbE for really bandwidth heavy environments. That is to do with the application bandwidth needs, not scaling the cluster.
Storage heavy nodes are to allow clusters that just need more storage capacity to scale in that dimension (when more CPU and RAM are not needed). It’s quite a nice answer to one criticism of HCI solutions - that you have to add more CPU/RAM if you only need more storage - and these nodes don’t need an ESX license, a nice saving for ESX customers.
Again, I am sorry you were trying to install our software on something not on the HCL. We provide free training to our partners on these topics. The HCL is there to avoid such mishaps. I was double checking an HPE build today to be sure it was all on the list. I suggest whoever had the problem contacts their Nutanix Channel SE.
1 node (or even 2) is not, by some definitions, a cluster. That is true. It is not true to say they have no resilience. They have local redundancy and can replicate to another cluster. They do have their use cases, though they lack the elegant flexibility of 3+ node Nutanix clusters.
John from Nutanix here
AC - I am sorry your tender experience left you feeling Nutanix wasn't the right choice for you. It may have been that for your requirements Nutanix was not the right answer. It sounds like the reseller and Nutanix did not do a good enough job for you. Without details it is hard to say more (and I wouldn't in a public forum). Even when we don't win a prospect's business, our aim is always to give a good account of why we are proposing what we are and, if there is a price premium, to justify it. I hope if you deal with us at some point in the future you demand that from us, it is the least we should be doing.
John from Nutanix here
Our larger clusters are not a patchwork of smaller clusters. We scale from 3 to the number of nodes that you need in a single cluster. Nodes can be retired/repurposed as and when it suits. (There is another HCI vendor who 'federates' to get to higher node counts, we don't need to do this to scale.)
Our Prism Central management tool handles 1 or multiple clusters.
People tend to have more than 1 cluster because they have two sites and/or want DR.
At a single site people may want to use ESX for some workloads and AHV for others (for example Citrix VDI). Prism Central makes managing clusters with different hypervisors very easy. As a company we believe choice/openness is a good thing.
At a certain scale it can make sense to have more than one cluster at a specific site. For some customers that might be in the 16-20 node range, for others it might be at 50 nodes. I've had a customer who for compliance reasons had two 3 node clusters per site - without the compliance needs they could have deployed a 4 or 5 node cluster per site for all their workloads. Like so much in IT the answer is 'it depends'.
John from Nutanix again.
Nutanix is very public about its ecosystem partnerships. They can be checked out here: https://www.nutanix.com/partners/technology-alliance-program/
Definitely not trying to go it alone.
Flawed architecture? I joined Nutanix precisely because of its superior architecture - perhaps a debate for another time but we are extremely open about how we have built our product. Details available here: https://nutanixbible.com/
"Locked in hardware" - this is so 3 years ago. We have just announced yet more hardware platforms certified and customers can separate their hardware and software purchases. They include HPE, Cisco, Dell, Intel, Lenovo. As a company we believe in choice, not just for hardware but also for hypervisor.
Our cloud efforts work with Amazon, Azure and GCP as well as our own cloud services. We are building the tools to help customers avoid cloud lock-in. Worth checking out our Beam solution, which can help with cost and security optimisation of public cloud use - entirely independent of Nutanix HCI.
John from Nutanix here.
There is no 8 node limit or 5-8 node sweet spot. We have a lot of clusters larger than 8 nodes. We can mix different workloads in the same cluster, for example VDI and databases. We can also mix hybrid and all flash nodes, and different CPU generations in the same cluster if that suits. When it makes sense to have two clusters rather than one depends on a range of factors, many not to do with Nutanix. The right number of nodes depends on the particular customer's requirements. People really can start a cluster at 3 nodes and grow it a node at a time to the size they need. We have options for 1 and 2 node clusters if that better fits. We are genuinely a very flexible platform, capable of handling the vast majority of organisations' computing infrastructure needs now and evolving non-disruptively as the requirements evolve. Anyone doubting Nutanix can scale beyond 8 nodes should speak to some Nutanix customers who have done so.
We are introducing a set of additional capabilities beyond our core HCI platform, for multi-cloud environments and for workloads such as file sharing and object storage. IMO it's a very interesting set of ambitions building on a really solid core HCI product. (I have declared my own interest.)
I think the Khaptain speaks a lot of sense. I know a little bit about French wine and less about all the rest. My solution is to get help.
There are non-national operators like Cambridge Wine Merchants, I use Naked Wine and the folk at Majestic have made me some good recommendations when I've popped in there. Other outlets are available.
Don't forget Aldi and Lidl. If you know what you like they have some fantastic deals, but they won't help you find your new favourite wine variety like a good specialist.
With the right advice you can get a great bottle of wine for around £10. A poor choice at any price will disappoint.
Some people look at HA as protecting against single points of failure and stop there. You don't hear about all the times one thing happened and, some time later, normal protected running was re-established. You have to plan for more than one event. Three tips to start with.
1 - Don't turn checksums off. If a supplier suggests you turn checksums off on a production system, work out how quickly you can stop using that supplier. Check if any benchmarks or certifications cited involved turning checksums off - and if they did, demand the numbers with checksums enabled.
2 - A snapshot is not a backup.
3 - A replicated snapshot is not a backup.
As others have mentioned if you are going to run services such as compression and deduplication then these will consume both RAM and CPU cycles (nothing is free).
As well as the volume of data (TB) mentioned the number of files/objects also matters as this generates additional metadata (nothing is free).
The other factor not mentioned is the number of concurrent users. 8GB of RAM may be fine for a home NAS serving music to Sonos etc. for a few users, 5-10 active connections tops? Plenty of small businesses might not stress 8GB, and things like OneDrive may make more sense than a Cloud gateway for these users anyway. When the number of connections may run into 1,000s then more resource is needed (nothing is free).
Obviously some things are free, like open source software, which often turns out to be free like a puppy. (Even free stuff isn't free.)
I find this old 'big iron' very interesting, having begun my working life with mainframes, but its niche is narrowing as standard platforms become more and more capable. A two socket unit from loads of vendors can have >50 cores and 1.5TB of RAM. There are also quad socket boxes with >100 cores and 3TB of RAM. I would suggest if the application has been written in such a way that it doesn't fit on a number of these servers, it's probably very badly put together. I know the 'big iron' folk speak about ease of management, but Nutanix has taken that to a whole new level and scales to 1,000s of cores and many TB of RAM in a single cluster, and makes managing multiple clusters a (relatively) trivial task. The future is software driven; hardware does matter, but it no longer needs to be fancy. SuperMicro, Lenovo, Fujitsu, Dell, HPE and others all produce decent kit, what matters is the software. I still wish my friends in the 'big iron' world well, it will be with us for many years, but IMO for a decreasing set of applications.
I’m also ex-Nimble, and without hiding behind AC I’d like to say that I disagree with the two former workmates on this thread.
To the person still in HPE - I left in January taking a significant pay cut. I am lucky my kids are grown up, and this means I am much freer to make choices than most, but job changes are not always about money.
To the ex-HPE person I would suggest listening to Meat Loaf’s ‘Objects in the Rear View Mirror’. I am not disputing your description, but look forward not backward. Enjoy the choices you have made, hopefully for some positive reasons, and focus on what you are now doing; let HPE worry about themselves.
To all the ex-Nimble people, still in HPE and the many now in other places, I hope life is treating you well.
The reason you might want to create the recovery thumb drive is if it is the only computer you will have access to. Computer down ... internet down ... creek without a paddle.
I did it for the laptop we bought my Dad. Fingers crossed I don’t have to ever talk him through using it.
@Charles 9 - 30,000 deaths pa do not all come from people determined to kill (they will be hard to stop if determined). A chunk are people with mental health problems killing themselves. The first check, everywhere (including private sales), should be whether the buyer has a clean mental health (and criminal) record. Another chunk are escalation killings, where it has 'kicked off' and, because those involved have free access to firearms, the triggers are pulled before the brains are engaged. Right to carry is very dangerous. The saddest ones are the children who shoot themselves or their siblings/friends by accident. Unusually for a Brit I do believe in the right to bear arms, but IMO the system as it is in the US is badly broken. If you want to understand how broken, look up what happened to Charles Vacca (clue: 9 year old with a machine gun).
Charles 9. What a truly weird comment. We already know the death count is higher than Bath Township. The fact that there are more dangerous things than guns is utterly irrelevant to the question of gun law. There are 30,000 deaths a year in the USA due to firearms. To the rest of the world, and plenty of Americans, this is an extraordinary number. The NRA say this is the price of freedom; it's like the alcoholic who says their breakfast quart of vodka is the price of freedom - it simply denies the problem. The horrific events in Vegas also demonstrated one of the many ways the idea that the answer to a bad man with a gun is a good man with a gun is a fallacy.
"even if you as a parent work diligently and carefully to put your child on a hetrosexual path."
I think this may be very illuminating. NoRottenPi is convinced that if you want hetero kids you have to 'work diligently and carefully'. This suggests to me some deep inner conflict, probably based on the person being gay but having a warped view of sex imposed on them by narrow-minded parents/guardians/teachers/priests.
I feel sorry for NoRottenPi, they are clearly very messed up.
John from Nimble back
I used 4K numbers because it is the one the storage industry tends to publish. One of the many brilliant things InfoSight does for Nimble customers is let them understand (if they are interested) exactly what every application and volume is doing in terms of IOPS. I do see a lot of small IOPS out in the field (8K and below). InfoSight lets the whole customer base benefit from the hundreds of billions of data points it ingests every day.
Dr Adamson's blog here gives an idea of the type of data science Nimble does about IO sizes. He definitely understands it better than I do.
I would say if a 'higher end' array is handling IOPS less well than a 'lower end' array then there is something wrong with the 'high end' array. I've worked with XP and 8 node 3PARs and particularly in all flash configuration they can support numbers of IOPS well above the million+ IOPS of a Nimble cluster. That said, most storage users don't need >1m IOPS.
"Nimble is relatively slow"
John from Nimble here.
An AF9000 will deliver 300K random 4K IOPS on a 70/30 read/write split with all data services enabled, at a consistent <1ms latency. IMO this is fast enough for most workloads. If you need more, up to four may be clustered together and administered like a single array.
My thoughts on the acquisition are posted here:
JohnW from Nimble here
Not K34, it was J49 (I think) on 3PAR and J37 and J47 for the XPs. J49 suffered a little from everyone still trying to get their heads around 3PAR. Not a bad exam as such, but probably too much simple/dull material in the syllabus, so it lacked focus. J37 & J47 were tough for different reasons. They were aimed at admins, service staff and pre-sales, making them tough for any of the target groups. J37 was better than the exam that went before and J47 improved again, but they were still unfriendly exams.
JohnW from Nimble here
To Zerolab, you are right, every system can run at 100%; one nice thing about Nimble is it won't choke if that happens to be 100% random writes. What the 10,000+ Nimble customers experience is a platform that performs consistently well with mixed workloads and backs that up with the industry's best analytics to learn from the install base and give the best availability and support experience. Is Nimble going to win a synthetic benchmark? Not against the likes of Violin and DSSD, which are built just for speed. Not against systems configured RAID 10 with data services turned off. A Nimble cluster can do a million IOPS at under 1ms latency, with features turned on and all the other benefits. Nimble makes a great platform for most applications and that is how we have got to >10,000 customers. Look at sites like Reddit for how Nimble customers feel about using our arrays in the real world. IMO HPE have made a very smart move.
JohnW from Nimble here
OK, I'll take the bait on this one. I am one of the Nimble Enterprise guys. Nimble is a great solution for Enterprise customers as, depending on requirements, 3PAR and XP can be. I should know, I helped write 2 XP exams and the first HP 3PAR exam. (Apologies to anyone who had to sit them, they were not easy.) I was excited about 2017 before the announcement. When the deal closes I'll post on LinkedIn my views but since I digested the news I've been going round with a big smile on my face, as have the other Enterprise folk here.
JohnW from Nimble here
We do use SuperMicro, nice kit, good hardware does matter. We don't use an Infiniband interconnect but rather the Intel NTB, which is a PCIe direct connection between the controllers and built into the Intel server chipsets we use. We're not alone on this as it works well. The clever stuff is in the software.
JohnW from Nimble here.
John Smith 19, you can check out pricing here. From 10 cents per GB per month. Not for me to say if that is expensive, all I can say is it is Enterprise grade flash block storage as a service for multi-cloud with the best analytics in the market.
JohnW from Nimble here.
AC throws out 11 questions. (Slight whiff of cowardly FUD.) Too many for me to try and answer here. For brevity I'll have a stab at the first one and a half. Yes, NCV are encrypted with AES-256 encryption and each volume has its own key. Unlike when encryption comes from self-encrypting drives, this allows keys to be disposed of on a per-volume/application basis.
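To illustrate why per-volume keys matter (a toy sketch of the general idea, nothing to do with Nimble's actual implementation): if each volume is encrypted under its own key, destroying just that key is a 'crypto-erase' of that volume alone, which a single drive-wide SED key can't give you.

```python
import secrets

class VolumeKeyStore:
    """Toy per-volume key manager. Each volume gets its own 256-bit key;
    deleting the key renders ciphertext encrypted under it unrecoverable."""
    def __init__(self):
        self._keys = {}

    def key_for(self, volume):
        # Create a fresh random key on first use, then return it consistently.
        return self._keys.setdefault(volume, secrets.token_bytes(32))

    def crypto_erase(self, volume):
        """Destroy one volume's key without touching any other volume."""
        self._keys.pop(volume, None)

ks = VolumeKeyStore()
old = ks.key_for("finance-db")
ks.key_for("web-logs")              # other volumes keep their own keys
ks.crypto_erase("finance-db")
assert ks.key_for("finance-db") != old   # fresh key; old ciphertext is gone for good
```

The volume names here are made up; the point is the per-volume granularity of key destruction.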
I suggest people check out Nick T's blog here for what this game changing innovation is really all about. http://spr.ly/60048ncCw Got to rush, busy times at Nimble :-)
A few days ago I was at the top of a local hill and four young lads had just finished cycling up. They were probably 10-13 years old. The eldest was the most out of breath and called on the others to stop. Years ago I wouldn't have been surprised to see him lighting a cig. It being modern times, he got out his vape machine. I'm unclear if this is progress. (I smoked for >20 years and it was without doubt the stupidest thing I have done.)
Disclosure: Nimble Employee, my own views
I like this idea that Nimble OS is somehow trying to retrofit flash at a late stage. ROTFL. The first product Nimble designed was an all flash appliance. The first product sold was a flash optimised hybrid one, because that was the larger addressable market. In the hybrid disk/flash combination flash provides the read acceleration; writes are pretty good on Nimble whatever the media. Having an all flash flavour was always anticipated. Nimble OS is designed to be media agnostic. The old and 'new' all flash architectures remain tied to the back end media for performance. Nimble takes a different approach that offers more to customers today and is highly adaptable. In the last 3 months Nimble added another 580 net new customers, taking the total to 8,160. We may not yet be the size of NetApp, let alone EMC, but we continue to grow while they shrink and our Net Promoter Score (NPS) is 85. Since EMC won't publish their NPS, the only conclusion I draw is the big dog has something to hide. It is selling to Dell so it can be more private.
AC - Very happy to offer an All Flash Nimble for testing. Those that have tried it find it a generation ahead of the competition. Its advantages are too numerous to list here. Once you've tried it I am sure your perspective will change.
As a final point a side by side review of InfoSight shows it to be years ahead of anyone else and it continues to evolve. Nimble's advantage in that area is not going away anytime soon.
Disclosure - Nimble Employee
Nimble offers triple parity plus data protection without traditional RAID to manage. The Nimble OS manages protection/capacity automatically and does not require configuring the 'back end' either at initial setup or adding capacity. We lack the concept of RAID groups, aggregates etc. that traditional architectures are built on, making storage management application centric rather than about setting up the 'back end'.
Disclosure - Nimble Employee
We did indeed move to triple parity with the 2.0 release of Nimble OS (now at 3.2). It is more advanced than a simple triple parity. This really comes into play because SSDs don't fail in the same way as HDDs. Rule #1 of storage is don't lose the data and the old RAID 5/6/10 just don't cut it with SSDs (or large HDDs).
Umesh, one of the Nimble founders, explains it much better than I can at: https://www.nimblestorage.com/blog/the-reliability-of-flash-drives/
Not all AFAs are the same.
It's generally understood dedupe can be very useful in VDI, which is not really a static environment. On the other hand video archive is as static as it comes but only benefits from dedupe if completely unmanaged. As the folk at DD and elsewhere have shown, dedupe is a very useful technology for backup. Compression tends to be much more effective than dedupe in the database world. Ultimately it's the nature of the data set that determines if dedupe may be useful, and while plenty have oversold it, that doesn't mean it, and its often forgotten twin compression, are not useful across a wide range of workloads.
They would be expensive, being low volume, and the rebuild times would be unbelievably high. Basically no one would buy them. Let's say these devices can sustain writing at 150MB/s. It would take 11+ hours to fill a replacement 6TB drive if it were full (assuming no other bottlenecks). On some older storage platform designs a rebuild of a multi-TB drive can take weeks. Dual parity is looking inadequate to protect data on current large drives, even on well designed platforms. Your 5 inch drives would be an order of magnitude worse. With large drives (and flash devices) the need is not to step up to physically larger devices but for better protection schemes than dual parity. The challenges for storage are not how to get the most data on one device but how to (a) protect the data and (b) get the performance required at a price people can pay. If the data requires so little performance/access that it would work on a 200TB drive, then tape will be cheaper.
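The rebuild arithmetic above is easy to check; a quick sketch (assuming the 150MB/s sustained write rate stated above and no other bottlenecks):

```python
def rebuild_hours(capacity_tb, write_mb_per_s):
    """Hours to rewrite a full replacement drive at a sustained write rate."""
    seconds = (capacity_tb * 1e12) / (write_mb_per_s * 1e6)
    return seconds / 3600

print(round(rebuild_hours(6, 150), 1))    # 11.1 hours for a 6TB drive
print(round(rebuild_hours(200, 150), 1))  # 370.4 hours (~15 days) for a 200TB drive
```

In practice rebuilds share the drive with production I/O, so real-world times are worse still.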
Why No Super-Sized (physically) Hard Drives?
Older hard drives used to be both taller and wider. Current large drives are 3.5 inch and, as someone has said, can fit in a slot in a 1U server (though most servers go for 2.5 inch these days). I can remember 5 and 8 inch drives and they were taller as well as wider. Before these there were even larger drives, mostly so expensive people still used paper tape/cards as storage. I was told it's a bit like silicon wafers - the smaller the unit, the better the yield, so the lower the cost. The larger formats were dropped because the smaller ones made more sense, down to the 1 inch drives, which were killed by falling flash prices and the cost of engineering moving parts that small. The current price ratio of flash to large drive is probably a little under 10:1 and in slow decline, so drives will be with us for a while. Since the platters have been aluminium or glass for some years, the risk now of rust being an issue is probably very low :-)
A brief history of hard drives is at: https://en.wikipedia.org/wiki/History_of_hard_disk_drives Warning, it's not very exciting.
Disclaimer - Nimble Employee
You would expect me to say this, but Nimble has been leading the innovation in storage analytics for years. Don't trust me - ask for a demo and you will see why Nimble customers love InfoSight so much. I believe it's a major factor behind our Net Promoter Score of 85.
PS InfoSight is part of Nimble Support and runs in the Cloud, so requires no in house infrastructure and carries no add on charge.