Lower IOPS inherent with S2A, not necessarily a problem
The S2A approach trades IOPS for strong guarantees on achievable streaming bandwidth and data integrity. All the disks are organised as 10-disk ECC-striped virtual disks (think 8+2 RAID 6, but with ECC instead of simple parity, and with 512 *byte* stripe segments); every access is a full-stripe read or write, with the ECC always written and read. It follows that the achievable small random read IOPS with that approach is 1/8 of that achievable with conventional 8+2 RAID 6, and the small random write IOPS will be 3/8 of it.
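One way to reproduce those ratios is to count spindle operations across the 8 data disks only (the 2 ECC disks get written either way), and to assume the old-value reads in a RAID 6 small write overlap with other work. That's my accounting model, not DDN's documentation, but it lands exactly on the quoted figures:

```python
from fractions import Fraction

DATA_DISKS = 8  # 8+2 layout; the 2 ECC disks are written on every access anyway

# Cost of one small random access, counted in data-disk operations.
# RAID 6: a small read touches 1 disk; a small write updates the data
# sector plus both parity sectors (3 writes; the old-value reads are
# assumed to overlap, which is what reproduces the quoted ratios).
raid6_read_cost, raid6_write_cost = 1, 3

# S2A: every access is a full stripe, occupying all 8 data disks at once.
s2a_read_cost = s2a_write_cost = DATA_DISKS

# IOPS scales inversely with per-access cost, so the S2A : RAID 6 ratio is:
read_ratio = Fraction(raid6_read_cost, s2a_read_cost)
write_ratio = Fraction(raid6_write_cost, s2a_write_cost)
print(read_ratio, write_ratio)  # 1/8 3/8
```

Change the write-cost assumption (e.g. charge for the old-value reads too) and the write ratio moves; the point stands either way, which is that full-stripe access gives up nearly an order of magnitude of small random IOPS.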
Why would they set it up this way? Well, if you were after streaming bandwidth rather than IOPS, you were going to be issuing full-stripe reads and writes in any case, and this way you pay no penalty when up to two disks in any virtual disk pack in or stage a go-slow. You wouldn't put a transaction-processing database on it unless you were desperate or stupid, because that's not what it was designed to do.
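The two-laggard tolerance can be sketched with a toy latency model (mine, not DDN's firmware): since any 8 of the 10 sectors in a stripe are enough to decode it, a full-stripe read finishes at the 8th-fastest drive, so a dying or dawdling drive or two costs you nothing.

```python
import random

def stripe_read_latency(disk_latencies_ms, needed=8):
    # With dual ECC, any 8 of the 10 sector reads suffice to decode the
    # stripe, so the read completes at the 8th-fastest drive's latency.
    return sorted(disk_latencies_ms)[needed - 1]

random.seed(42)
healthy = [random.uniform(4.0, 6.0) for _ in range(9)]
laggard = [500.0]  # one drive staging a go-slow
print(stripe_read_latency(healthy + laggard))  # still under 6 ms
```

With a per-disk RAID 6 layout, that same 500 ms drive stalls every read that happens to land on it; here it's simply decoded around.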
Read/modify/write cycles don't tend to happen with S2A because any modern filesystem writes data in 4k or greater chunks, and a full stripe just happens to hold 8 data sectors, which is 4k of data. FPGAs are great at slicing and dicing data in fiddly ways - they are fine with doing ECC on sector-sized chunks as opposed to the larger chunks that work well for software RAID 5/6.
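The alignment arithmetic is worth making concrete. A minimal sketch (the function name is mine) of why 4k-aligned filesystem writes never trigger a read/modify/write on an 8 x 512-byte data stripe:

```python
SECTOR = 512
DATA_SECTORS = 8
STRIPE = SECTOR * DATA_SECTORS  # 4096 bytes, i.e. one 4k filesystem block

def needs_rmw(offset, length, stripe=STRIPE):
    # A write dodges read/modify/write only if it covers whole stripes:
    # a stripe-aligned start, and a length that's a multiple of the stripe.
    return offset % stripe != 0 or length % stripe != 0

print(needs_rmw(0, 4096))      # False: exactly one full stripe
print(needs_rmw(8192, 65536))  # False: sixteen full stripes
print(needs_rmw(4096, 1024))   # True: a sub-stripe update would need RMW
```

Contrast a software RAID 6 with, say, 64k segments across 8 data disks: a 4k write there covers a sliver of a 512k stripe, and the old data and parity have to come back off the platters before the new parity can go down.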
Claiming that the S2A approach is falling behind on IOPS compared to RAID 5 or RAID 6 arrays is like dissing an efficient and comfortable people mover because it can't post a blinding quarter mile. If you want high IOPS and "works until it doesn't" QoS, go with RAID 5 or 6. If you want end-to-end data integrity and streaming bandwidth, look at S2A or something based on ZFS with mirrors or RAIDZ[2].