IoT and On-Premises?
Trump, who has a higher IQ, needs to talk to Michael and do some explaining.
Whoa! Never mind, the demo might blow up North Korea!!! Oh, crap!
The software needs to achieve disaggregation for a grid of HCI. Yes, software is the bottleneck, and most of the parallel block services suffer from having been implemented when the governor was HDD speed, then SATA/SAS. NVMe is the curveball. Besides adding 100Gbps of bandwidth, as we saw at SC2016, we need network software enhancements that deliver the required parallelism and disaggregation with QoS. As another article today puts it, in 2017 we are at a tipping point. Who can code the quickest? (Get those students working on dcache.org and SC2016 doing it! -- Whoa, what if it comes from Open Source instead of a vendor? Truly open COTS!)
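A minimal sketch of the parallelism point, nothing more: the Python below fans block reads out across several NVMe devices from a thread pool, standing in for the kind of concurrency a disaggregated block service has to drive to keep the flash busy. The device paths, block size, and read count are all invented for illustration.

# Sketch only: fan block reads out across several NVMe devices in parallel.
# Device paths, block size, and per-device read count are hypothetical.
from concurrent.futures import ThreadPoolExecutor

DEVICES = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1"]  # assumed paths
BLOCK_SIZE = 128 * 1024          # 128 KiB reads
READS_PER_DEVICE = 64            # stand-in for a real queue depth

def read_blocks(path):
    """Issue a burst of reads against one device and return bytes moved."""
    moved = 0
    with open(path, "rb", buffering=0) as dev:
        for i in range(READS_PER_DEVICE):
            dev.seek(i * BLOCK_SIZE)
            moved += len(dev.read(BLOCK_SIZE))
    return moved

if __name__ == "__main__":
    # One worker per device: the software, not the media, decides how much
    # parallelism actually reaches the flash.
    with ThreadPoolExecutor(max_workers=len(DEVICES)) as pool:
        totals = list(pool.map(read_blocks, DEVICES))
    for dev, total in zip(DEVICES, totals):
        print(f"{dev}: {total // 1024} KiB read")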
I concur. Talk of hyperconverged upgrades is somewhat naive and cavalier at this point. Think of an engine with pistons, balanced and tuned to work as a whole. Can you add a larger piston or two, or optimize the valving on only the upgraded pistons? It is pure physics: the engine will run only as fast as the slowest piston, and you have wasted all the money the upgrade cost.
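To put rough numbers on the slowest-piston point, here is a back-of-the-envelope sketch; every figure is invented. With synchronous replication, a write is not acknowledged until every replica holds it, so the slowest node in the set gates the cluster no matter how fast the new node is.

# Invented numbers: per-node write speeds in a mixed HCI cluster.
node_write_mbps = {"old-node-1": 400, "old-node-2": 400, "new-nvme-node": 2000}

# Assume 3-way synchronous replication across all three nodes: the write is
# only acknowledged when the slowest replica has it.
effective = min(node_write_mbps.values())

print(f"Per-node write speeds: {node_write_mbps}")
print(f"Effective replicated write speed: {effective} MB/s")
# The shiny NVMe node adds capacity, but acknowledged writes still run at
# 400 MB/s -- the money spent on the upgrade does not show up in the numbers.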
Unfortunately, even product managers and marketers are making these claims about ease of upgrade and heterogeneity of nodes. They ignore their own manuals, such as the VSAN Admin Guide, which has a section on balance. If you write about these things based only on product fliers, you lose and you mislead.
Then let's talk about inserting these upgrade nodes into a rack, live and while in production. Is there enough power? Will it disturb power balance and redundancy? Network? Should I say more? -- None of this has been measured. Talk of TCO at this moment in time lacks empirical data.
Get it!! The data, I mean. Run this stuff, upgrade it, change the node balance, measure in between, and let us all know what you get. It would make a nice physics science project.
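On the power question specifically, this is the shape of the back-of-the-envelope check a shop could run before sliding denser upgrade nodes into a live rack; every figure is invented, and it only shows the arithmetic: can one PDU feed still carry the whole rack if the other feed drops?

# All numbers invented for illustration.
PDU_FEED_CAPACITY_W = 5000        # usable capacity of a single feed
existing_nodes_w = [350] * 8      # measured draw of the current nodes
upgrade_nodes_w = [600] * 4       # denser upgrade nodes being added

total_draw = sum(existing_nodes_w) + sum(upgrade_nodes_w)

print(f"Total rack draw: {total_draw} W")
if total_draw > PDU_FEED_CAPACITY_W:
    print("One feed cannot carry the rack alone: redundancy is gone, not just tight.")
else:
    headroom = PDU_FEED_CAPACITY_W - total_draw
    print(f"Still redundant, with {headroom} W of headroom on a single feed.")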
"What Thomas and the SBU want to accomplish is nothing less than a transformation of servers towards being flash-intensive in-memory data processing rockets on the one hand, and, later we think, low-latency active archive data tubs on the other."
Well, yes, the 2-tier approach is necessary. But it needs to be automated by policy and transparent to apps. Is this the software being written?
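For what "automated by policy and transparent to apps" could look like, here is a minimal sketch only: a background pass moves objects between a flash tier and an archive tier on an access-age policy, and the application never picks a tier itself. The tier names, the 30-day threshold, and the catalog entries are all assumptions for illustration.

import time

HOT_TIER, COLD_TIER = "flash", "archive"
DEMOTE_AFTER_SECONDS = 30 * 24 * 3600   # assumed policy: idle 30 days -> archive

catalog = {
    # object id -> (current tier, last access time); stand-in for real metadata
    "obj-001": (HOT_TIER, time.time() - 40 * 24 * 3600),
    "obj-002": (HOT_TIER, time.time() - 3600),
    "obj-003": (COLD_TIER, time.time() - 600),
}

def apply_policy(catalog, now=None):
    """Return the moves the tiering pass would make; apps are never consulted."""
    now = now or time.time()
    moves = []
    for obj, (tier, last_access) in catalog.items():
        idle = now - last_access
        if tier == HOT_TIER and idle > DEMOTE_AFTER_SECONDS:
            moves.append((obj, HOT_TIER, COLD_TIER))      # demote cold data
        elif tier == COLD_TIER and idle < DEMOTE_AFTER_SECONDS:
            moves.append((obj, COLD_TIER, HOT_TIER))      # promote re-used data
    return moves

for obj, src, dst in apply_policy(catalog):
    print(f"move {obj}: {src} -> {dst}")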
The problem with commodity HW is that it does break, even if the HW is on a compatibility list. Once the fault is diagnosed, there is still remediation. Who is going to pull the chassis out of the rack to replace a DIMM? Which one do you replace? If the controller went down, is it a FRU, i.e. can it be replaced while the system is hot, or will the chassis need to be opened? Who does this? Who trained them? -- And if this is a LOM facility, is access granted through change management during business hours, and can the work be done without threatening the uptime of any other devices?
We used to test vendor storage systems by yanking components during a test load and making sure failover of components worked. I don't see these cheap, commodity systems getting the same vetting process before being placed into production.
I have seen a large financial company, an early adopter in this space, get burned. They certainly don't want to admit it -- yet -- but what was billed as a low-cost storage initiative got nicknamed cheap storage with a very high maintenance price tag -- with experienced enterprise storage admins relegated to A+ screwdriver jockeys.
-- Be very careful here. I haven't even addressed evaluating the maturity and size of the open source community behind the software of choice. Who is going to fix the panic that brought the system down? Is the author of the code still around? When will that expert be available when your problem gets escalated?