* Posts by Jason Ozolins

102 publicly visible posts • joined 19 Nov 2009

Data Direct offers native file system

Jason Ozolins
Headmaster

Lower IOPS inherent with S2A, not necessarily a problem

The S2A approach trades IOPS for strong guarantees on achievable streaming bandwidth and data integrity. All the disks are organised as 10-disk ECC-striped virtual disks (think 8+2 RAID 6, but with ECC instead of simple parity, and with 512 *byte* stripe segments); every access is a full-stripe read or write, with the ECC always read and written. Obviously, the achievable small random read IOPS with that approach is 1/8 of what 8+2 RAID 6 can achieve, and the small random write IOPS will be 3/8 of it.
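
To make the read-side arithmetic concrete, here's a rough back-of-envelope sketch with made-up per-spindle numbers (my own illustration, not anything from DDN): a small random read costs one data-disk operation on conventional 8+2 RAID 6, but a full-stripe read ties up all eight data disks at once, so the same spindles deliver about an eighth of the random-read IOPS.

```python
# Rough accounting of small random-read IOPS: sub-stripe reads on 8+2 RAID 6
# versus S2A-style full-stripe reads.  All figures are illustrative.

DISK_RANDOM_IOPS = 150   # assumed small random IOPS per spindle (made up)
DATA_DISKS = 8           # the 8 data disks in an 8+2 virtual disk

# 8+2 RAID 6: a small read lands on a single data disk, so the eight data
# disks can service independent reads in parallel.
raid6_read_iops = DATA_DISKS * DISK_RANDOM_IOPS

# Full-stripe-only access: every read occupies the whole stripe, so the set
# of data disks effectively completes one small read at a time.
s2a_read_iops = DISK_RANDOM_IOPS

print(f"8+2 RAID 6 small random reads: ~{raid6_read_iops} IOPS")
print(f"Full-stripe reads:             ~{s2a_read_iops} IOPS "
      f"({s2a_read_iops / raid6_read_iops:.3f} of RAID 6)")
```

The write-side gap is smaller (hence 3/8 rather than 1/8) because RAID 6 pays its own read-modify-write tax on sub-stripe writes.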

Why would they set it up this way? Well, if you were after streaming bandwidth rather than IOPS, you were going to be issuing full-stripe reads and writes in any case, and this way you pay no penalty when up to two disks in any virtual disk pack in or hold a go-slow. You wouldn't put a transaction-processing database on it unless you were desperate or stupid, because that's not what it was designed to do.

Read/modify/write cycles don't tend to happen with S2A because any modern filesystem writes data in 4k or greater chunks, and a full stripe just happens to hold 8 data sectors, which is 4k of data. FPGAs are great at slicing and dicing data in fiddly ways - they are fine with doing ECC on sector-sized chunks, as opposed to the larger chunks that work well for software RAID 5/6.
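
As a quick sanity check of that arithmetic, here's a sketch using the stripe geometry described above (nothing vendor-specific, just the 8 x 512-byte data segments):

```python
# Stripe geometry from above: 8 data segments of 512 bytes each, plus 2 ECC
# segments that carry no user data.
SEGMENT_BYTES = 512
DATA_SEGMENTS = 8
FULL_STRIPE_BYTES = SEGMENT_BYTES * DATA_SEGMENTS   # 4096 bytes = 4 KiB

def needs_read_modify_write(offset: int, length: int) -> bool:
    """A write avoids read-modify-write only if it starts on a stripe
    boundary and covers whole stripes."""
    return offset % FULL_STRIPE_BYTES != 0 or length % FULL_STRIPE_BYTES != 0

assert FULL_STRIPE_BYTES == 4096                    # one stripe == one 4k block
assert not needs_read_modify_write(0, 4096)         # typical filesystem write
assert not needs_read_modify_write(8192, 65536)     # bigger aligned streaming I/O
assert needs_read_modify_write(1024, 512)           # sub-stripe write would RMW
```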

Claiming that the S2A approach is falling behind on IOPS compared to RAID 5 or RAID 6 arrays is like dissing an efficient and comfortable people mover because it can't post a blinding quarter mile. If you want high IOPS and "works until it doesn't" QoS, go with RAID 5 or 6. If you want end-to-end data integrity and streaming bandwidth, look at S2A or something based on ZFS with mirrors or RAIDZ[2].

Windows 7's dirty secrets revealed

Jason Ozolins

typo needs fixing?

Doug Glass writes:

So the "permanent fix" is not to create a solution but rather a built-in work around that simply covers up the problem (after a few tries) and makes it looks like all is well? I'd be ashamed to admit that.

- If you read the "Old New Thing" blog linked by another comment, then you'll get an idea of what MS is up against; they don't want to get blamed when crappy software breaks after MS changes how an interface is implemented. An example seen on more than one OS is that if you change memory management to trap buffer overruns, it'll break badly written code. [*cough* Mozilla on OpenBSD *cough*] Who gets blamed, and who is supposed to fix the problem? If MS is the one getting blamed because "it worked just fine under the previous version of Windows", all the more so if the program is not being actively maintained by the ISV, then naturally MS will try to reduce the incidence of such failures.

They could of course take a principled stance; maybe throw up a dialog saying "sorry, this crappy program died because it was written by clueless programmers; you really ought to update it or give it the flick"... with a single button labelled "oh, OK then" to dismiss the dialog. How happy do you think the users, or indeed the ISVs, would be with that approach? You have to keep all those "developers, developers, developers" writing stuff for your platform, even if some of them are not very good at coding.

I personally feel that virtualizing old environments for crappy software to run in is a more attractive option than keeping all the workarounds in the code base of the current OS, but maybe that's unrealistic...
