Re: The security folks will say...
"With HCI, on the other hand, there's really no argument - patch it, do it now, and don't try and make excuses."
Except sometimes you can't: https://kb.vmware.com/s/article/52345
I'm an IT outsourcer (Managed Cloud) and I DO care about my customers - but then I'm totally UK based and realise happy customers are long-term ones. My longest-running customer: 15 years and counting.
I thought all this outsource-to-India shit was dead in the water by now - I've never seen it do anything but cost more in the end.
Still amazes me that companies go for cheap hosting deals for what is in effect their mission critical hosting, and then get surprised when it all goes wrong. If your business really does depend on your site being up, then you need a DR plan (if your cheap as chips host doesn't supply it), otherwise you just have yourself to blame if your business goes down the pan.
"The ever increasing problem is that the 'smart' meters that are being forced on people by the government don't have that security and are ripe for the script kiddies to take over."
Can you imagine a botnet of over 20 million devices simultaneously doing a DDoS and taking out the power grid? Brave new world, coming nowhere near me if I can help it. I keep changing suppliers every year so they lose track of whether I have a smart meter or not. I had heard that if your gas smart meter loses Internet connectivity it cuts you off!
They can keep the Internet of Shit
It is the constant thing with all M$ products that they think they know what you want better than you do and force the change on you. This has now crept into Apple with its bloody autocorrect changing what you want to say, sometimes without even giving you an option to reject the change, so you have to go back over it again and again or risk sending a nonsensical message.
Why don't the devs behind this just piss off with their auto-corrects? By all means underline a word, or highlight a cell that may be wrong, and give the user a chance to correct or ignore the warning, but quit the auto-correct shit, as it actually makes us waste time going back and forcing what WE actually want to say! </rant>
With the speed of CPUs today, inline compression is an absolute 40-50% extra-space gimme - why the heck would you NOT want it in 95% of cases (which is a lot)?
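To make that concrete, here's a minimal sketch of how cheap the space win can be. The payload is synthetic and deliberately repetitive, so the measured ratio will be far better than the 40-50% typical of mixed real-world data; treat the numbers as illustrative only.

```python
# Rough sketch: measure inline-compression savings on synthetic data.
# The payload below is an assumption standing in for compressible primary
# storage data; real VM/file workloads compress far less predictably.
import zlib

# Repetitive log-like payload (highly compressible by construction).
payload = b"user=alice action=login status=ok latency_ms=12\n" * 2000

# Fast compression level, as inline storage engines typically favour speed.
compressed = zlib.compress(payload, level=1)
saving = 1 - len(compressed) / len(payload)
print(f"original={len(payload)}B compressed={len(compressed)}B saving={saving:.0%}")
```

On modern CPUs a level-1 deflate like this runs at hundreds of MB/s per core, which is why the "why would you NOT want it" argument holds for most primary workloads.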
Dedupe is a little more esoteric. Despite claims by some vendors that it is suitable for everything and can't be switched off, I've seen evidence to the contrary, and dedupe ratios can fall below those of inline compression in some cases.
Erasure coding is something else that will throw you 50-60% more usable space for absolutely nothing, so anybody not considering that move could look expensive in the £/GB stakes.
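The 50-60% figure roughly corresponds to moving from two-way mirroring to a wide erasure-coded stripe. A quick back-of-envelope sum, with illustrative scheme widths (real vendor layouts differ):

```python
# Usable capacity under replication vs erasure coding - illustrative sums only.
raw_tb = 100

def usable_tb(data_units, total_units, raw):
    """Usable capacity when each stripe holds data_units of data in total_units on disk."""
    return raw * data_units / total_units

rf2 = usable_tb(1, 2, raw_tb)    # 2-way mirror: 50.0 TB usable
ec42 = usable_tb(4, 6, raw_tb)   # EC 4+2 stripe: ~66.7 TB usable
ec82 = usable_tb(8, 10, raw_tb)  # EC 8+2 stripe: 80.0 TB usable

print(f"RF2 -> EC 4+2: {ec42 / rf2 - 1:.0%} more usable space")
print(f"RF2 -> EC 8+2: {ec82 / rf2 - 1:.0%} more usable space")
```

Wider stripes buy more efficiency at the cost of bigger rebuild domains, which is why real deployments cap the stripe width rather than chasing the ratio forever.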
We are definitely moving towards commodity flash with NVRAM/enterprise-flash tiering as the new hybrid storage solution.
Disk will still keep ahead on £/GB for a long time - especially when you consider longevity in the TCO between TLC flash and nearline SAS HDD - but primary storage is heading inexorably towards all-flash, and faster than you expect.
Unfortunately Atlantis Hyperscale simply has a better all-flash system, at prices lower than most EVO:RAIL vendors'. These extra options will just hike VSAN prices even higher over Atlantis. And much as everybody tries to stoke the price war between Nutanix and VSAN, their paths are widely divergent - VSAN ain't controlling the whole DC anytime soon without a multi-layered plethora of complicated applications with steep learning curves.
For me Atlantis is the sling-it-in slayer of low hanging HCI fruit, and Nutanix is strategic Enterprise HCI. VMware seem to be, for the first time in their history, trying to shoehorn into a space rather than creating it.
Nate you are way off the mark with regard to both SuperMicro and Nutanix.
Supermicro is very obviously the base for a lot of different branded products out there and is the bedrock upon which Nutanix solutions are built. I have built HA all-flash storage solutions on Supermicro equipment and see no different failure rate than I do on any other piece of hardware.
If we are talking remote management I just lost a Dell server when it didn't come back from a VMware update. No matter, I thought, I'll just hop on the DRAC and see what's going on - except the DRAC had crashed as well.
Physical presence required!
On the charge that Nutanix isn't mission critical, well I say HA! Their biggest deployment to date is over 2000 nodes for a single customer and believe me if you knew what that deployment was you'd rewrite the definition of mission critical.
Funny how you write a blog post and then all of a sudden people make independent claims that are actually pretty much answered in the blog - saves a lot of writing (if this link is allowed that is): http://blog.millennia.it/2014/06/03/nutanix-defending-the-hardware-appliance-in-a-software-defined-world/
If the link I provided isn't allowed then just type Millennia Blog Nutanix into Google and it'll likely be the top hit ;)
Nutanix are NOT delivering a lock-in product: they are a software company supplying a product that runs on COTS hardware. So where do you see VSAN-style lock-in - in the appliances they also sell?
The appliances are there for one main function: central support and consumer-like ease of use for those organisations that are just fed up with vendor finger-pointing when they have a problem on their multi-vendor hardware. Even the supposed converged systems from EMC and NetApp are just bundles of kit with a level 0 call centre in front. Get a problem and you get directed to a relevant support team, and if they don't agree it's with them then expect hours, days, or weeks of pain...
Nutanix support everything in a highly rack-dense 4-nodes-in-2U appliance made of commodity parts - you could build one yourself; try that with Simplivity! Nutanix will also support Arista switches internally and even own your VMware service ticket, giving you a totally hands-off customer support experience.
But, I hear you cry, if they supply hardware then they cannot claim to be software-defined anything! Well, NDFS is their product, not their appliances; so if you have a good case - like building your own kit and putting up with decentralised hardware support - and you ask them REAL nice, then I'm sure a licence-only SOFTWARE version of NDFS will be provided to keep you happy.
True Software Defined Data Services, and unlike VSAN hypervisor agnostic.
True, if you get hooked on NDFS as a storage system then you can't port it to a regular monolithic SAN system, so you could claim some sort of lock in there. But your data isn't locked in, you could Zerto it out to anywhere in a heartbeat - so where is the problem?
Note I'm not a Nutanix employee, but I am a partner. I became a partner because finally I can see a company that combines the right people with the right ideas, the right attitude, and the right product. They are going places with knobs on, and I want to ride on their coat-tails.
Having had a lot of problems with freshly uncloaking startups I would prefer to wait and see. Nutanix is a pretty mature product in its 3.5 incarnation of the OS now and it isn't gonna slow down anytime soon.
For enterprises, the idea of one-stop-shop vendor support that won't blame other components in your inventory is compelling. Nutanix will not only cover Arista switches if bought with the solution but will also own VMware issues that arise when running on the kit (with your support contract, of course); so should you have an issue it's just a case of picking up the phone and hollering a Picard-esque "Make it so!"
Try all that with your Dell kit, Maxta software, VMware hypervisor and SuperMicro 10G switches ;)
Having said that it won't stop me testing Maxta for SME converged cluster proposals....
..without prices? Servers are available at a large discount if you are buying a lot of them, but don't expect them for pennies if you are just buying one for your Fusion-io powered storage.
So the bottom line is: what would 15TB of RAW Fusion-io flash cost in a server kitted out with the CPU, RAM, and IO modules to power it?
If you are significantly below £100k in the UK you have a fight on....
So this broken-OSI-model-type protocol obviously can't possibly work in the real world then? The 1300+ customers have just been hoodwinked into thinking their storage works phenomenally well...
So beating Isilon in its own niche space of media delivery was a fluke? A combination of Coraid and Nexenta hammering NetApp filers is just a figment of the testers' imagination?
Measured throughputs of over 1800 Mbytes/sec and 4500 IOPS from a single shelf of disks couldn't have happened and Coraid bunged ESG Labs to come up with those test results?
How VMware does MPIO is actually pretty relevant, considering it is the biggest virtualisation platform. When I build an iSCSI solution alongside an AoE one, not only is the AoE one way quicker to configure, but throughput tests show AoE handles random virtualisation workloads much more efficiently, and on the performance graphs I see more throughput.
Obviously I must have been dropping too many magic mushrooms lately and I'm seeing things.
So on that basis why don't you take issue with EMC, NetApp, Isilon and VMware and see how much traction that gets you? Coraid makes an easier target but it doesn't make it wrong.
I'll admit not knowing StarWind had an AoE driver as I don't know the product inside out, although I respect it for the reviews it has had - looks a great product.
Coraid's biggest sin was growing organically in the Linux space, because it has lost a lot of years when it could have been providing the things you found missing when you tried out AoE. That delay will hurt the progression of AoE in the face of iSCSI evangelists. My future doesn't rely on the success or failure of AoE so I'm happy to sit it out and see what develops.
OK, let's take your issues:
iSCSI is a connection-oriented protocol: a session is established across a SINGLE LINK and the data is then transmitted across it. iSCSI can employ other links in round-robin fashion, but if you look at the connections in VMware as an example, you will see that on a 4-port iSCSI system 1 is active and 3 are in standby. The others are only used if a session is broken, at which point the next available one takes over. Some other implementations will perhaps constantly cycle across each link, but ONLY ONE IS USED AT A TIME - that is iSCSI (and to some extent FC).
AoE splits the data stream and sends it across all available ports in parallel. It is a connectionless protocol that does not require packets to arrive in order, and if a packet doesn't arrive it uses a datagram retransmit (microseconds) as opposed to a TCP retransmit (up to 200ms).
Therefore, comparing a 4-port iSCSI setup to a 4-port AoE setup with large amounts of random traffic, as you would see in a virtualisation environment, the throughput can be as much as 2 to 4 times greater than iSCSI.
True, if you only had 1 AoE and 1 iSCSI port there wouldn't be a great difference, but the lack of a TCP/IP stack in AoE would still make the connection more efficient, with less need to offload the data transfer to extra on-NIC CPU power.
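The argument above can be reduced to a toy model. The one-active-path figure reflects the active/standby iSCSI behaviour described for the VMware example (modern initiators can round-robin across sessions, which narrows the gap); every number here is an illustrative assumption, not a measurement.

```python
# Toy model: one active path (active/standby iSCSI MPIO as described above)
# vs datagrams striped across every port (AoE), plus the retransmit-penalty gap.
# All figures below are illustrative assumptions.

LINK_GBIT = 10.0          # per-port line rate (assumed 10GbE)
PORTS = 4

iscsi_active_paths = 1    # active/standby: one session carries the traffic
aoe_active_paths = PORTS  # connectionless: traffic striped over all ports

iscsi_aggregate = iscsi_active_paths * LINK_GBIT
aoe_aggregate = aoe_active_paths * LINK_GBIT
print(f"iSCSI aggregate: {iscsi_aggregate} Gbit/s, AoE aggregate: {aoe_aggregate} Gbit/s")

# Per-lost-packet recovery cost, per the post's order-of-magnitude claim:
tcp_rto_ms = 200.0        # worst-case TCP retransmission timeout
aoe_retx_ms = 0.05        # datagram retransmit in tens of microseconds
print(f"retransmit penalty ratio: ~{tcp_rto_ms / aoe_retx_ms:.0f}x")
```

The model shows why the gap widens with port count and loss rate: a single active session caps aggregate bandwidth at one link, and each TCP stall is orders of magnitude longer than a datagram resend.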
Coraid supports 10Gbit, using HBAs on VMware, Windows and OpenSolaris, and native NICs on Linux and XenServer. 40Gbit is on its way (it's been around a while in the form of 4x 10Gbit ports running in parallel), as will be 100Gbit in time - you don't build an Ethernet SAN and then not support the development of Ethernet!
AoE drivers will become a lot more commonplace, and I wouldn't be surprised to see one from StarWind soon. We know of somebody who took a couple of weeks to get one working on a CCTV camera, so the camera talks AoE directly to the storage with no configuration (as there are no IPs) required. Let me tell you, in their tests it wiped out iSCSI as a transport medium - HD feeds need all the throughput they can get these days.
Nobody ever got fired for considering anything. Botching up an install and losing the business money, yes - but just thinking about it? That is just silly.
It always amazes me what vitriol new ideas inspire amongst the die hard users of an existing system, and this usually gets to its worst just before that new idea really takes off - so keep it coming!
Ivan - while I accept your point on clients, you miss the point that storage is a massive part of whether a VDI project works or fails - and they all too often fail in a big way. Just having a better-performing connection for thin client access is pointless if your 1000-desktop VDI project is either:
nailed on performance because the amount of required storage IOPS was underestimated, or
considered a poor ROI because to get that performance you have to spend a million quid on storage!
The whole point is that you don't throw crazy money at AoE: you get a transmission protocol that can handle the traffic of a massive VDI platform at equal or better performance than fibre channel, at as little as 20% of the cost. Coraid can provide the mass storage on this platform, and to some extent the performance for some solutions with SSDs, and for real far-out performance WhipTail can consolidate to such an extent that you save even more money per seat. FOUR racks down to 2U with the same IOPS - what part of that do you not find amazing?
However, until AoE becomes accepted across multiple vendors, a full and affordable end-to-end Enterprise VDI project is still just on the edge of reality - and if nobody talks about AoE it never will be.
"Now that we know AoE is ATA over Ethernet, isn't broadly supported, lacks enterprise features, isn't generally supported and generally isn't Enterprise Ready, let's talk about the other two."
At what point would you consider something "Enterprise Ready" then? Does the fact that an AoE SAN serves as the backbone of one of the largest private clouds in the world for a US Govt agency - 2 petabytes and > 40,000 VMware nodes - not make that statement look just a little rash? Or perhaps customers like NASA, the Human Genome Project and various large academic institutions for their HPC clusters, plus over 1200 others - mainly large multi-terabyte systems demanding high performance.
By making such a dismissive statement you are missing out on a genuine opportunity to look at something different, for a cost that just might be the key factor if we go and double dip the western economies.
Perhaps you are also in the "never have a Dell in my data center" camp. If you look at the attitude to them 10 years ago you see a blueprint for how economies of scale can really disrupt a market.
I agree. Coraid has gone channel, which is why prices are now hidden, but it has such an incredible product range that it could not keep going on an organic, Linux-community-friendly growth path forever. Since going NetApp, as you put it, it has achieved exponential growth and has actually been able to take NetApp accounts on directly. I appreciate your frustration, but most of us are in business for the return, and the real money sits with the likes of NetApp now. To get access to it you need to look like a company the large organisations feel comfortable doing business with, and that generally comes at the expense of the little guy.
Because AoE is layer 2 and therefore not routable, it is inherently secure - and why does this stop you having a dedicated storage network? In fact it enhances the dedicated aspect. If you need to connect to another AoE network, such as for site-to-site DR, then you can tunnel AoE between storage networks.