Simply another useful analysis input.
I come from the 'More information is better' school and welcome ALL benchmarks that include published details.
Yes, it would be useful to include pricing metrics - but honestly these change quite rapidly from the date of publication, and they differ internationally, with local pricing affected by shipping, exchange rates and taxes.
I have used SPEC benchmarks since they began (back to LADDIS) and they have ALWAYS provided good insight into comparative performance and sizing. They also give a flavour of a vendor's range and highlight what is needed to get the best results.
So the SPEC NFS benchmark was not broken when NetApp published a million IOPS in May 2006: http://www.spec.org/sfs97r1/results/res2006q2/sfs97r1-20060522-00263.html
AND it is certainly not broken now that EMC has published half that figure five years later on a newer version of the benchmark. (Yes, I know the two are not comparable - but for marketing, a million trumps half a million.)
The item above highlights the real problem - the marketing use, which is aimed purely at producing a winner, and which is only relevant to the single biggest, baddest configuration that has been lashed together.
Used properly, the benchmarks provide information on entry-level boxes, comparisons across the midrange, and comparisons of flash, fast and slow disk, and differing caching approaches, including 'super cache extensions' using flash.
Despite the fuss about the 'pure flash' entry, it is simply another documented entry to compare - and it gives a good reality check on what you will actually get from those expensive drives.
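To make that kind of comparison concrete, here is a minimal sketch in Python of the sort of normalisation I have in mind - dividing the published throughput by the drive count and keeping overall response time alongside it, so a 'pure flash' entry and a big spinning-disk config end up on a roughly comparable footing. The entries and numbers below are invented placeholders, not real published results; the actual figures should come from the spec.org result pages.

```python
# Rough sketch: normalising SPEC SFS-style results so configs of very
# different sizes can be compared. All numbers below are made up for
# illustration - substitute the published figures before drawing conclusions.

entries = [
    # name, throughput (ops/sec), overall response time (ms), drive count, media
    {"name": "Vendor A entry-level",  "ops": 40_000,  "ort_ms": 1.8, "drives": 48,  "media": "15k disk"},
    {"name": "Vendor B midrange",     "ops": 110_000, "ort_ms": 1.5, "drives": 192, "media": "15k disk + flash cache"},
    {"name": "Vendor C 'pure flash'", "ops": 500_000, "ort_ms": 0.9, "drives": 96,  "media": "SSD"},
]

# Normalise the headline number by the hardware it took to reach it.
for e in entries:
    e["ops_per_drive"] = e["ops"] / e["drives"]

# Rank by efficiency per drive rather than by the raw headline figure.
for e in sorted(entries, key=lambda e: e["ops_per_drive"], reverse=True):
    print(f"{e['name']:<24} {e['ops']:>8,} ops/sec  "
          f"{e['ops_per_drive']:>7,.0f} ops/drive  "
          f"ORT {e['ort_ms']:.1f} ms  ({e['media']})")
```

Once the per-drive and response-time numbers sit next to each other, the 'biggest baddest' headline stops dominating the discussion.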