Node = 4 blades
I'm (Chris Mellor) posting this comment for Peter Serocka -
As poster Yaron Haviv pointed out, blades and nodes apparently are mixed up here, with most of the numbers relating to the 4U chassis-nodes, while the "400+" figure relates to the blades, which will act as the functional nodes of OneFS in the sense in which nodes are usually operated and counted in OneFS.
The NAS Ops rates are the least well-defined figures, so let's have a look at the others first and see what we can work out from the published information.
8 x 40GbitE frontend + 8 x 40GbitE backend sum up to 16 connectors per "node", and that "node" must be the chassis, as it would be overkill for any kind of sub-chassis storage blade to carry 16 connectors. Just imagine the cabling...
On the other hand, present OneFS nodes have 2 x InfiniBand backend plus 2 x 10GbitE frontend, both active-active for balancing and failover, so a Nitro blade should have at least 2 x 40GbitE frontend and 2 x 40GbitE backend.
That would mean 4 blades per chassis.
(Otherwise, 4 x 40GbitE plus 4 x 40GbitE per blade would mean only 2 blades per chassis, which barely makes sense and wouldn't match the performance and capacity figures either, as we'll see below.)
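A quick sanity check on that port arithmetic, as a tiny Python sketch (the 8 + 8 ports per chassis are from the published figures; the 2 + 2 ports per blade are my assumption, mirroring today's 2 x 10GbitE + 2 x InfiniBand per OneFS node):

    ports_per_chassis = 8 + 8    # published front-end + back-end 40GbitE connectors per 4U chassis
    ports_per_blade = 2 + 2      # assumed dual front-end + dual back-end per blade
    blades_per_chassis = ports_per_chassis // ports_per_blade
    print(blades_per_chassis)    # -> 4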
So assuming 4 blades per chassis, and that all capital "B"s mean Bytes not bits, and a max cluster size of 400 blades (aka "nodes" in OneFS speak), we might have:
1 Nitro Chassis (4U):
60 x 15 TB = 900 TB
15 GB/s (10 times more throughput ***out of 4U***, compared to one Isilon X410 4U node)
1 Blade:
15 x 15 TB = 225 TB
3.75 GB/s (2.5 times more throughput ***out of a single node*** in the traditional sense, compared to X410).
Large Cluster (100 chassis = 400 blades):
100 x 900 TB = 400 x 225 TB = 90 PB (with 100 PB for "400+" blades)
100 x 15 GB/s = 1.5 TB/s (as claimed)
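Putting those back-of-envelope numbers into a small Python sketch (all inputs are the claimed or assumed figures above, not confirmed specs):

    drives_per_chassis = 60                             # assumed SSDs per 4U chassis
    drive_tb = 15                                       # assumed 15 TB per drive
    blades_per_chassis = 4                              # assumption from the port count above
    chassis_tb = drives_per_chassis * drive_tb          # 900 TB per chassis
    blade_tb = chassis_tb / blades_per_chassis          # 225 TB per blade
    chassis_gbs = 15.0                                  # claimed GB/s per chassis
    blade_gbs = chassis_gbs / blades_per_chassis        # 3.75 GB/s per blade
    chassis_in_cluster = 100
    print(chassis_in_cluster * chassis_tb / 1000)       # -> 90.0 PB across 400 blades
    print(chassis_in_cluster * chassis_gbs / 1000)      # -> 1.5 TB/s aggregate, as claimed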
Note that 3.75 GB/s per blade fits nicely with the assumed network connections of dual redundant 40 GbitE for front and backend, respectively.
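A one-line check of that fit (raw line rate only, ignoring protocol overhead):

    line_rate_gbs = 2 * 40 / 8       # assumed active-active pair of 40GbitE links per blade, per side
    print(line_rate_gbs)             # -> 10.0 GB/s, comfortable headroom above 3.75 GB/s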
The picture of the cluster on Chad Sakac's blog site ***shows*** 100 chassis and ***says*** 400+ nodes, another indication that 1 "OneFS node" = 1 Nitro blade, with 4 of them going into one Nitro chassis.
Fwiw, if we divide a chassis's 15 GB/s by the claimed 250,000 NAS Ops/s, that would give us an *average* NAS operation block size of 60 KB, which kind of makes sense for mixed workloads (reads/writes at 100+ KB/transfer, plus numerous namespace ops at a few KB/transfer). Same result when looking at a single blade, of course, as dividing both throughput and Ops rates by 4 cancels out.
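The same division, sketched in Python with the claimed per-chassis figures:

    chassis_throughput = 15e9        # claimed 15 GB/s per 4U chassis
    chassis_ops = 250_000            # claimed NAS Ops/s per 4U chassis
    avg_op_bytes = chassis_throughput / chassis_ops
    print(avg_op_bytes / 1e3)        # -> 60.0 KB average per NAS operation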
With the claimed latency of 1 ms, the NAS queue depth per blade (= OneFS NAS node) would be 1/4 x 250,000/s x 0.001 s = 62.5, which is also a reasonable value.
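That is just Little's Law (concurrency = arrival rate x latency), again per assumed blade:

    ops_per_blade = 250_000 / 4      # claimed chassis Ops/s split across 4 assumed blades
    latency_s = 0.001                # claimed 1 ms per NAS operation
    print(ops_per_blade * latency_s) # -> 62.5 outstanding NAS ops per blade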
Makes sense?
---------------------
He adds this point: As for the confusion between "nodes" and "nodes":
With the Isilon OneFS software it is very clear what a NODE is: one instance of the OneFS FreeBSD-based operating system, running on a single SMP machine with disks enclosed.
But for the hardware guys "nodes" are those solid pieces of metal that get mounted in racks.
I think with Isilon clusters, one should keep the original OneFS definition of a node, and refer to the new hardware units in a different way. "Brick" hasn't been used with Isilon yet ;-)
Too bad EMC didn't sort out their terminology before making the Nitro pre-announcement, but such confusion arises often with bladed compute clusters, too.