Petabyte scale
Let me do the math here...
6 x 20TB = 120TB raw
Assuming these new X-bricks are the same physical size as the previous units, you're looking at a full rack for 120TB of raw flash.
6:1 data reduction on 120TB means around 720TB "effective"
Maybe I missed a digit somewhere, because I'm not seeing a petabyte here. (Their data sheet only gets to a petabyte by counting a lot of snapshots and the like toward the total.)
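Just to spell out the back-of-the-envelope math (my own quick sketch, using the 6-brick, 20TB-per-brick, 6:1 numbers above, not anything from their data sheet):

```python
# Rough capacity math: 6 X-bricks at 20TB raw each, with a claimed 6:1 data reduction.
bricks = 6
raw_per_brick_tb = 20        # TB of raw flash per X-brick (assumed from the announcement)
reduction_ratio = 6          # vendor's claimed 6:1 data reduction

raw_tb = bricks * raw_per_brick_tb           # 120 TB raw
effective_tb = raw_tb * reduction_ratio      # 720 TB "effective"

print(f"raw: {raw_tb} TB, effective: {effective_tb} TB")
# raw: 120 TB, effective: 720 TB -- still well short of a petabyte
```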
Seems like an improvement, but still a pretty inefficient system. I'd be curious why they limit themselves to only 20TB of raw flash per X-brick when that brick requires 6U of rack space; it suggests a pretty severe limitation in the software and/or hardware. And what if the workload is not dedupe-friendly? You end up buying a bunch more X-bricks just because you need more raw capacity (rough illustration below).
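Here's a hypothetical illustration of that point (all numbers are mine, assumed for the example, not vendor figures): the less reducible the data, the more raw flash, and therefore the more 6U bricks, you need for the same logical capacity.

```python
# Hypothetical example: bricks needed for 200 TB of logical data at various
# data-reduction ratios, assuming 20 TB of raw flash per X-brick.
import math

logical_tb = 200
raw_per_brick_tb = 20

for ratio in (6, 3, 1.5, 1):
    raw_needed_tb = logical_tb / ratio
    bricks = math.ceil(raw_needed_tb / raw_per_brick_tb)
    print(f"{ratio}:1 reduction -> {raw_needed_tb:.0f} TB raw -> {bricks} brick(s)")
# 6:1   ->  33 TB raw ->  2 bricks
# 3:1   ->  67 TB raw ->  4 bricks
# 1.5:1 -> 133 TB raw ->  7 bricks
# 1:1   -> 200 TB raw -> 10 bricks
```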
Can XtremIO be scaled out on the fly yet? I read or heard somewhere fairly recently that you could not add more bricks to an XtremIO system online; you had to buy the footprint up front (or take a big downtime for data migrations, etc.). Not sure if that was ever actually the case, or whether it has since been fixed. If it hasn't been, I'm sure it will be at some point.