* Posts by storageer

5 publicly visible posts • joined 19 Feb 2010

Australian Tax Office's HPE SAN failed twice in slightly different ways

storageer

Ever heard of Virtual Instruments?

Storage and SAN/NAS performance monitoring is the focus of Virtual Instruments and its VirtualWisdom products. It's what major players like AT&T, Sprint, T-Mobile, PayPal, eTrade, MetLife, Nationwide, Salesforce, Expedia and many US Government agencies (including the IRS) use to proactively avoid outages, by being alerted to problems before they become outages. Yes, I do work for VI, but 90% of SAN/NAS storage-related issues can easily be avoided, as nearly 400 of our customers know well.

IO, IO, it's profiling we do: Nimble architect talks flash storage tests

storageer

It's now easy to use your own production workloads when evaluating storage vendors

The vendors do bring up good points, and both of their approaches have pros and cons. The bigger point, not fully discussed here, is that storage buyers must test with their own application workloads. They should not rely on outdated benchmarks like SPC/TPC, or even on tests run by the vendors themselves, who can “game” the benchmark. They need products like the Virtual Instruments Load DynamiX storage performance analysis and testing platform. It’s the only professional, vendor-independent load testing solution that extracts real-world production workload data from your data center. You can literally replay your production workloads in a test lab environment and load test them against any vendor, product, or configuration. Load DynamiX users have run hundreds of “bake-offs”, often with our help, and the results vary by 5x or more between vendors on identical workloads. There is even a cloud-based service at workloadcentral.com that offers free workload profiling of your production networked storage.
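To make the capture-and-replay idea concrete, here is a minimal sketch of replaying a workload profile against a test target. This is not the Load DynamiX product or its format; the profile fields, the target path, and the pacing logic are all invented for illustration, and real load generators use direct I/O and deep async queues rather than buffered file I/O.

```python
import os
import random
import time

# Hypothetical workload profile, of the kind one might extract from
# production: read/write mix, block size distribution, target IOPS.
profile = {
    "read_pct": 70,                        # 70% reads, 30% writes
    "block_sizes": [4096, 8192, 65536],
    "block_weights": [0.6, 0.3, 0.1],
    "target_iops": 500,
}

TARGET = "/tmp/testlun.img"                # stand-in for a LUN under test
SIZE = 256 * 1024 * 1024                   # 256 MiB test region

# Pre-create the test file so reads have something to hit.
with open(TARGET, "wb") as f:
    f.truncate(SIZE)

interval = 1.0 / profile["target_iops"]
latencies = []

with open(TARGET, "r+b") as f:
    for _ in range(2000):                  # replay 2000 I/Os
        bs = random.choices(profile["block_sizes"],
                            weights=profile["block_weights"])[0]
        offset = random.randrange(0, SIZE - bs, bs)
        start = time.perf_counter()
        f.seek(offset)
        if random.random() < profile["read_pct"] / 100:
            f.read(bs)
        else:
            f.write(os.urandom(bs))
        latencies.append(time.perf_counter() - start)
        time.sleep(interval)               # crude pacing toward target IOPS

latencies.sort()
print(f"p50 latency: {latencies[len(latencies) // 2] * 1e6:.0f} us")
print(f"p99 latency: {latencies[int(len(latencies) * 0.99)] * 1e6:.0f} us")
```

Even a toy like this shows why identical workloads produce very different results on different arrays: latency percentiles shift with block size mix and read/write ratio, which canned benchmarks hold fixed.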

The TPC-C/SPC-1 storage benchmarks are screwed. You know what we need?

storageer

The VI Load DynamiX approach offers true real-world application workload I/O profiles

What Howard is creating can have real value for smaller to mid-size shops. Sizing storage systems from a performance perspective has always been a black art for this class of users. Benchmarks like SPC/TPC simply don’t reflect YOUR applications or YOUR use of storage. Workload I/O profiles vary immensely, even for the same core applications (think Oracle, SQL Server). To get an analysis of your current production I/O profiles, simply visit WorkloadCentral.com, which offers free workload analysis and sample workload models. These are based on the approach of Virtual Instruments Load DynamiX, where I work. As Howard implied, the Load DynamiX workload analysis, modeling and load generation products are the gold standard for the industry. They are used by companies such as AT&T, PayPal, T-Mobile, New York Life, NTT, BNY Mellon, Cisco, Boeing, United Healthcare, Cerner, SoftLayer and LinkedIn, to name a few.

Although not inexpensive, and mostly designed for F1000 companies and service providers, the VI LDX offering includes complete, automated production workload acquisition. It captures your existing production storage workload profile data with ~99% accuracy and repeatability for any simple or complex workload mix. It accomplishes this using a real-time, network-attached sensor or an offline “Workload Data Importer” software tool. It is the ONLY way to truly capture YOUR workload profiles, which is why so many storage engineers and architects have turned to these products. HP, EMC and HDS (and their VARs) all resell the VI LDX products, so they are widely available.
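For readers wondering what “workload acquisition” boils down to, here is a hedged sketch of distilling an I/O trace into a simple profile. The CSV trace format (timestamp, op, block size) and the summary fields are assumptions for the example, not the actual Workload Data Importer format; real capture tools record far more, such as queue depth, latency, target LUN and sequentiality.

```python
import csv
from collections import Counter

def profile_trace(path):
    """Summarize an I/O trace into a simple workload profile.

    Assumes a hypothetical CSV format: timestamp_sec,op,block_size
    (e.g. "12.003,read,4096").
    """
    reads = writes = 0
    sizes = Counter()
    first_ts = last_ts = None
    with open(path) as f:
        for ts, op, bs in csv.reader(f):
            ts = float(ts)
            first_ts = ts if first_ts is None else first_ts
            last_ts = ts
            if op == "read":
                reads += 1
            else:
                writes += 1
            sizes[int(bs)] += 1
    total = reads + writes
    duration = max(last_ts - first_ts, 1e-9)
    return {
        "read_pct": 100.0 * reads / total,
        "avg_iops": total / duration,
        "top_block_sizes": sizes.most_common(3),
    }

if __name__ == "__main__":
    print(profile_trace("sample_trace.csv"))
```

A profile like this is what gets fed back into a load generator, so the test lab sees the same mix the production array does.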

If your storage infrastructure is relatively small, say around $100,000 a year on storage as Howard mentioned, then Howard’s tool should be a viable option once it is completed. If you are spending $500K or more per year, then investing in a professional solution like the VI Load DynamiX platform will have immediate ROI: most LDX POCs are conducted in a few days, not weeks or months, and they include full reporting and visual analytics capabilities in addition to the automated workload capture, modeling and testing platform.

NetApp says tiering is dying

storageer

Self-serving NetApp?

To think that storage tiering is dying is like calling the earth flat. It shows a lack of understanding, in this case of the high-end storage market, which is not surprising given the source of this comment. Tiering may die in ten or more years, but not any sooner; it is just now being implemented in most large IT shops. The storage industry moves at glacial speed, especially when adopting new technologies. SSD adoption has been minimal to date, primarily because of the cost delta relative to HDDs, and will still be minimal years from now. Tiering, especially performance-based tiering driven by read/write latency or IOPS, is a core storage capability that will be implemented broadly by all large IT shops. Given that NetApp has no equivalent automated performance-based tiering capability, it is not surprising that they would bash the tiering concept. Self-serving, to say the least.
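To illustrate what performance-based tiering means in practice, here is a minimal sketch of a promote/demote decision driven by per-extent IOPS. The thresholds, tier names and sample data are invented for the example and do not represent any vendor's actual algorithm, which would also weigh latency, capacity headroom and migration cost.

```python
# Minimal sketch of performance-based tiering: promote hot extents to
# SSD and demote cold ones to HDD based on observed IOPS.

PROMOTE_IOPS = 100   # extents busier than this move up to SSD
DEMOTE_IOPS = 10     # extents quieter than this move down to HDD

# (extent_id, current_tier, observed_iops) -- hypothetical sample data
extents = [
    (1, "hdd", 450),   # hot: candidate for promotion
    (2, "ssd", 3),     # cold: candidate for demotion
    (3, "hdd", 40),    # lukewarm: stays put
]

for extent_id, tier, iops in extents:
    if tier == "hdd" and iops > PROMOTE_IOPS:
        print(f"extent {extent_id}: promote hdd -> ssd ({iops} IOPS)")
    elif tier == "ssd" and iops < DEMOTE_IOPS:
        print(f"extent {extent_id}: demote ssd -> hdd ({iops} IOPS)")
    else:
        print(f"extent {extent_id}: no move ({iops} IOPS on {tier})")
```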

With respect to the scale-out comments, NetApp has to be really upset that IBM did not choose to build its SONAS around ONTAP 8. As NetApp's largest NAS OEM, IBM had full access to ONTAP 8 and surely evaluated it in great detail. This is an embarrassment to NetApp and hints at the end of the NetApp/IBM relationship, given the substantial overlap in products. Witness the Dell/EMC relationship to see how this will play out. Given that it has taken seven years for NetApp to bring ONTAP 8 to market after the Spinnaker acquisition, it's not surprising IBM lost faith in them.