Cisco has added performance features to its HyperFlex hyperconverged infrastructure (HCI), with all-NVMe systems, Optane SSD caching and FPGA acceleration. It has also extended HyperFlex out to ROBO (remote office/branch office) sites with central management. HyperFlex 4.0 was announced at Cisco's Barcelona CLEUR gig and features Edge 3-node clusters, …
Another hardware vendor pitching hardware for a software-defined solution. Is the story not the same with *insert hardware vendor* and their Optane/Intel/NVMe/commodity HW coupled with a few cloud buzzwords? I would personally not buy any solutions from the HW vendors and focus on the software ones such as Amazon, Microsoft, Red Hat, Nutanix, and VMware, though Red Hat is now suspect.
You don't know what you are talking about.
First of all, you must have missed that Amazon, Microsoft, Google, and Facebook actually design/manufacture their own hardware and their software stack tightly integrates with that hardware. Microsoft uses Denali for Azure, Facebook has Bryce Canyon, Google's Spanner and Colossus run on custom hardware, as does Amazon's stack. The software-only model is a Wall Street marketing ploy to keep stock prices up and keep the Ponzi scheme running.
You also seem to have missed that Intel Skylake uses more power per core than Haswell and Broadwell, and that every single hyperscaler and service provider is going hard towards ASICs. And we haven't even scratched AI/ML on GPUs. You also must have missed the whole blockchain/crypto circus on GPUs and custom ASICs. In other words, the free lunch of ever-increasing Intel CPU power that allowed ballooning, crappy overhead from "software only" vendors like VMware is coming to an end rather quickly.
We've tested HX with Optane and Samsung SSDs, and I happen to know a thing or two about those all-NVMe systems. The "slower" SSD clusters obliterated the vSAN and Nutanix boxes during the POC; our DBAs refused to continue testing anything but HX going forward. For reference, we run 80,000 MS SQL databases. To everyone's surprise, the HX boxes were also much faster than our EMC XtremIO and more or less matched our Pure arrays, but scaled much higher and thus were a lot faster and bigger than anything Pure could build in a single array. I am not going to say anything about the all-NVMe test clusters we are testing because I promised the Cisco folks not to. But I will say this: Holy Shit, Batman!
I get that you keep repeating this "software only" line because we hear it all the time, and it comes from all those yahoos who don't have hardware. What else are they going to say? In reality, all the big players are going full-on integrated stack, because there lies performance and efficiency.
Ha. I love your confidence. Please allow me to destroy your response in two ways.
First, you, my friend, are in the .0000001 percentile for performance. God bless you. The rest of the 99.999999% are not running 80,000 MS SQL databases, do not need specialized hardware, and certainly do not need NVMe or any HW offload engines. So while you are calling out all the "software only" Wall-Street mantra for its bloat, why not call out MS SQL databases with all their overhead? That's right, people like you throw HW at a software problem! : ) Yes, there is a forest through those trees.
And second, let me address your main point.
>> First of all, you must have missed that Amazon, Microsoft, Google, and Facebook
>> actually design/manufacture their own hardware and their software stack tightly
>> integrates with that hardware.
What Amazon and Google and Facebook have done is called a reference architecture.
Bryce Canyon is commodity hardware designed under the Open Compute Project (OCP). And so are all the others you mentioned from Microsoft, Amazon, and Google. So let me get this straight: everybody builds a HW platform based on open-standard commodity parts, and then runs their software on top of it.
But, somehow HX is above all this?
If HX is so great, then how come none of the SOFTWARE companies you mentioned have Cisco HW for their major lines of business?
And that, my friend, is why HX and any other HW manufacturer's agenda (which is solely to sell more HW while charging a premium for their brand) is not relevant going forward.
P.S. \sarcasm on\ Have fun with your HX. I hear that support and stability are most excellent! \sarcasm off\
So much to unpack here; where to start? Did you just google Bryce Canyon, see that it's OCP, and come to your genius conclusion?
Facebook couldn't find commodity hardware for their disaggregated storage platform, so they built their own. After they built it for themselves, optimized for their own software stack, they published it for everyone else to use freely via OCP. OCP is Facebook engineering, not some gnomes in Taiwan whose output gets adopted by Facebook. You must be exceptionally stupid not to understand this.
FB OCP components are definitely not used by Google, Amazon, or Microsoft; they have their very own. I can't even begin to understand the stupidity behind your assumption. I am starting to think you are just not very bright.
Go listen to Andrew Fikes, Engineering Fellow at Google, to learn about the challenges they (and FB, MS, Amazon, et al) have been solving for the past 15 years before you continue spouting such uninformed bullshit.
The rest of your stupid post is just toxic bullshit. Not sure what your problem with Cisco is but they have treated us well. Your mileage might vary.
You are referring to the PCIe Hardware Acceleration Adapter, code-named Hercules. Currently it only offloads compression onto the card, but soon it will offload encryption and all storage control/VAAI functions as well. The offload isn't exclusive: if needed, the same code can be executed by the CPU, albeit with a penalty compared to execution in the ASIC.
Calling it hardware encryption is technically incorrect. It is hardware accelerated software encryption and if I may say so, a very elegant way of doing things.
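The fallback behavior described above can be sketched generically: the same compression logic runs on the offload card when one is present and on the CPU otherwise, so the data format is identical either way. This is a minimal illustration only — `accel.compress` stands in for a hypothetical offload-card handle, not any actual Cisco API, and `zlib` stands in for whatever algorithm the product really uses.

```python
import zlib

def compress_block(data: bytes, accel=None) -> bytes:
    """Compress a storage block, preferring the offload engine.

    `accel` is a hypothetical handle to an acceleration card. When it is
    absent (or the offload call fails), the same algorithm runs on the
    CPU, so the on-disk format is identical -- only the execution venue
    (ASIC vs. CPU) changes, which is the point being made above.
    """
    if accel is not None:
        try:
            return accel.compress(data)   # hypothetical ASIC fast path
        except RuntimeError:
            pass                          # card busy or absent: fall back
    return zlib.compress(data)            # software path, same format

# With no card handle the CPU path still yields a valid stream that
# round-trips to the original data.
blob = b"hyperconverged " * 100
assert zlib.decompress(compress_block(blob)) == blob
```

The design choice worth noting is that the fallback produces byte-compatible output, so a cluster can mix accelerated and non-accelerated nodes without any format conversion.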
(Disclosure, I am a Cisco employee. All statements are mine.)
Hope you are doing well. Great article! If you don't mind, I would like to clarify a few things and correct a few minor errors in your article.
- HyperFlex EDGE is available in 2-, 3-, or 4-node configurations, with the cluster witness service for 2-node clusters provided through Intersight.
- HX EDGE is available for 1G or 10G environments, with 1G configurations having 10G storage and vMotion networks. That really helps a lot with performance.
- Yes, HXAN (all-NVMe) has Intel 3D XPoint Optane (Coldstream) cache drives and Intel Cliffdale NVMe capacity drives.
- Hyper-V and Stretch Cluster support lags by 2-3 months for complex features.
Perhaps it is just a language barrier and I am misunderstanding British sarcasm, but I am wondering about this statement:
"The new thing is support for the central cloud-based Intersight management facility lessening the need for skilled IT staff at the edge sites, in theory at least."
I can ship my nodes to any location in the world, have a level 1 tech plug in power and network, and then deploy the entire cluster from anywhere in the world via policies and profiles. I can drag and drop network and security policies, configure SD-WAN, push application stacks into ESX or Kubernetes, and manage dozens or hundreds of clusters simultaneously, while Cisco TAC stays connected to all of them and executes RMAs and field support/HW replacement per SLA, fully automated, without the admin having to pick up a phone once. Add 1-click platform upgrades, integrated HCL management, 1-click DR failover with reprotect, and cloud-based AI analytics.
Basically, all my ROBO clusters become remote drones controlled from a central starship. I don't know, man, that sounds quite like lessening the need for skilled IT staff to me. I understand the skepticism, but I am sure a friendly Cisco guy can show you how this is done in practice. I've heard the British Cisco guys enjoy a pint or two, so there might even be some interesting conversations afterwards!
Anyway, thanks for the article; hope this clarifies a few things, and I didn't mean to nag. I really like what you are doing over at Blocks & Files, keep up the good work!
Biting the hand that feeds IT © 1998–2020