3.1.2 InForm OS
The latest InForm OS version will also be available on the F-Class and T-Class 3PAR arrays. I'm not sure about the S-Class arrays, but definitely not the E-Class.
The Autonomy acquisition a year ago left Hewlett-Packard with a hangover, but buying storage suppliers 3PAR and IBRIX seems to have worked out without many hitches - and it's revitalized HP's storage biz. This time last year at the HP Discover customer and partner extravaganza in Vienna, the 3PAR, IBRIX, StoreOnce, and …
Really? If so, that is something that was left out of my talks with them. Are you sure it's not file services provided by IBRIX?
Still waiting for better info to show up on the HP site, I guess I have to wait a bit longer.
The biggest parts of the announcements IMO haven't been made clear yet, the devil is in the details. Not sure how many of the details are coming out today.
...but when I do I want you to shut up and take it.
I've been holding off on this VMware SAN upgrade project for a few weeks hoping for F-class replacements packing a SAS back-end and 2.5" drives and a controller spec on par with IBM V7000 and Dell SC8000. Touche, HP.
Not strictly true: the Compellent SC8000 controllers have double the CPU and double the RAM, and since they're based on the PowerEdge R720, they have 6 PCIe slots for expansion. They're also much more granular as far as adding storage goes — disks do not need to be added in fours or eights; I can buy a single drive if I want, which means it's very easy to tack a drive here and there into project budgets (which is a problem I face). And controller upgrades are potentially easier since there is no chassis dependency, a problem we're facing with our four-year-old FAS3140.
But I like smart, efficient hardware. Custom ASICs are just that. And as a lower midrange customer, I know I'll never scratch the performance potential of a Compellent pair or quad-node 7400. The thought of a dual-node symmetric active-active 7400 for potential expandability with an extra shelf for block storage is pretty sweet, hits our capacity, performance, and probably pricing requirements perfectly.
Now to see if they'll bring out the starter kits like they had with the F-class.
Take that picture subtitled "The HP 3PAR StoreServ 7400 array" and compare it with the Dell c6220 (http://www.dell.com/us/enterprise/p/poweredge-c6220/pd). At least Dell had the idea of adding a few 2 socket Xeon servers behind that disk array.
Why do we then still need fibre channel fabric, fc switches etc., when the storage can be right on the PCI bus?
But don't take my word for it, have a look at GoogleFS, IBM FlexSystems and RedHat Storage Server.
Tell that to Dell Compellent ?
If you don't want to use FC you can use iSCSI.
Red Hat Storage Server is an object-based storage system — quite a limited one at that. I sat through a webinar on it a few weeks ago and felt there is really no reason for it to exist; it's so limited in its abilities. They don't even offer snapshot support, for example.
Distributed file systems often are not latency friendly. 3PAR is a very latency friendly storage architecture for disks, and to a lesser extent for SSDs.
There are different products for different workloads; one storage design can't rule them all. It really depends on what you're running.
Certainly not obsolete though!
The Dell box isn't a storage array, it's 4 ultra dense servers each with their own discrete DAS storage. The 3PAR array you're trying to deride actually includes up to 4 storage controllers, each equipped with a 4 socket Xeon and the added twist of a 3PAR generation 4 ASIC to do the heavy lifting that the Xeons can't. Even on the PCI-E bus you still have protocol overhead from SAS and all the limitations DAS reintroduces to shared storage, which in turn require additional overheads to maintain availability such as file system mirroring etc. You need to take another look at a modern storage architecture, try here http://www.theregister.co.uk/2012/12/03/hp_3par_storeall_storeonce_storage/
Why are you talking so much about overhead? Are you scaremongering? Let's assume you have a x86 system with 48 Xeon cores, each core having two threads. With 96 virtual processors available, where do you see the bottleneck? With this commodity computing why do you need a separate storage controller? It's rather the opposite: Software RAID may employ more sophisticated algorithms than hardware RAID implementations and thus, may be capable of better performance (http://en.wikipedia.org/wiki/Software_RAID#Software-based_RAID). If Hadoop has taught us anything, it is that getting compute and storage on the same physical devices can substantially boost performance.
Now that Red Hat is combining KVM virtualization with the Gluster distributed filesystem and promoting freedom from proprietary storage, I can see DAS providing functionality that the cash-gulping SAN solutions from any vendor cannot match for the same amount of money.
As said before, look at Google, RHSS, IBM FlexSystem, and HPC at Cambridge (http://www.hpc.cam.ac.uk/services/darwin.html)
If you want to take a look at modern storage architecture, go on a Linux course, and learn about GlusterFS, Ceph, FhGFS, et al. (http://en.wikipedia.org/wiki/List_of_file_systems#Distributed_parallel_fault-tolerant_file_systems)
For me, SAN is dead.
No, not scaremongering; if anything I'd say it's you who's trying to distort the facts for your own purposes. You pop up in every other storage article and spout this claptrap about DAS. All of the solutions you mention are clustered file systems, not storage architectures, and that's where you show your lack of knowledge and experience. Clustering only solves part of the equation unless you're willing and able to create separate failure domains by deploying multiple redundant copies of data, and most small enterprises are not in that position — what you save in tin you'll spend on environmentals.
Who is distorting the facts? I paste in links and proofs; you're just making subjective statements. Here we see it again:
"Clustering only solves part of the equation unless you're willing and can afford to create separate failure domains by deploying multiple redundant copies of data" - what equation? Resilience and business continuity planning? Now that starts with business process optimization (RTO, RPO etc), and fallback to analogue, if possible. Then via load balancing clustered web, application, RDBMS systems we eventually arrive at fault tolerant storage (mentioned above), backup and archive, where we close the circle with RTA. In this equation there is a lot more than just storage, btw.
"and most small enterprises are not in that position" - well, if there is an SME with limited funds, why not use commodity hardware and open software solutions rather than throwing money at a proprietary vendor? (http://www.redhat.com/products/storage-server/on-premise/; https://help.ubuntu.com/12.04/serverguide/drbd.html)
Whatever, man, at least I Dare to Think out of the box!
Oh pleeease, how is pasting a link providing proof? Throwing three-letter acronyms at the problem isn't helping your case either. In the one minute it took to read your post, I forgot more about storage than you'll ever know. Take your half-arsed COTS solutions and go troll somewhere else. I'll be selling bucketloads of these whilst you're trying to massage a working solution out of the commodity junk you're suggesting we all attempt to assemble. Time is money in business, so maybe it's you who should dare to think.
Please explain how I run my SQL, Exchange, CIFS file serving, Hyper-V, and VMware environments on any of the above platforms, and who provides the qualification, integration, and ongoing support. Oh, that would be me then... Until you can square that circle I'll stick with the experts.
".....For those customers who are familiar with HP's EVA arrays, which are missing the thin provisioning, remote copy, peer motion, and other functions that 3PAR arrays have...." OK, I know the hp salesgrunts want to tell you EVA is not dead but 3PAR is the answer, so what's your question, but I think you should stick to making announcements about mainframes, TPM, or do some background reading first. Thin Provisioning has been available on EVA for a while, and Continuous Access (equivalent to Remote Copy) for as long as I can remember. EVA 6000 software details can be found here to get you started (http://h18006.www1.hp.com/products/quickspecs/13905_div/13905_div.HTML).
LSI/NetApp has thin-provisioning via their DPM product... which HP has bolted on EVA as a stop-gap. Continuous Access is pretty bad on EVA. There are a whole host of caveats with async, e.g. don't use WAN acceleration, turn off IP acceleration, etc. Snaps still are copy on write, which is way more disruptive and storage intensive than the redirect on write from IBM and NetApp.
".....which HP has bolted on EVA as a stop-gap...." Whilst I'd argue the "stop-gap" bit, seeing as LSI build the controllers for the EVAs IIRC, I see you don't try to argue that it is not a working thin solution.
"......Continuous Access is pretty bad on EVA......" Oh puh-lease! Just go and look at the vast number of EVAs supplying mission-critical levels of availability all over the World. It is a very trusted, easy to implement and popular solution and one I have worked with at several companies in the FTSE 100. It may not be fashionable with hp's salesgrunts (I suspect they get more commission for flogging the all-hp 3PAR rather than the EVA with its OEMed components), but I suspect I won't be the only customer that misses the old EVA when it does finally get retired.
".....Snaps still are copy on write, which is way more" reliable and gives better data integrity. I forgot to mention earlier with the thin bit that EVA can also create thin provisioned snaps, thanks for reminding me.
".....way more disruptive and storage intensive than the redirect on write from IBM and NetApp." Except when you delete a snapshot, which is when you have to reconcile the original data with all the changes you wrote into the redirect space, i.e., the original data has to stand still whilst ALL the writes that have happened since the snap was taken have to be written back into the main copy - very disruptive and intensive, far more so than copy-on-write. Oh, did you not have that bit in your FUD guide?
"Except when you delete a snapshot, which is when you have to reconcile the original data with all the changes you wrote into the redirect space, i.e., the original data has to stand still whilst ALL the writes that have happened since the snap was taken have to be written back into the main copy - very disruptive and intensive, far more so than copy-on-write. Oh, did you not have that bit in your FUD guide?"
That is not the case, at least in IBM's redirect on write. The redirect is a time based differential on the master. When you delete the redirect snap, the differential goes away. Nothing changes with the master copy. The master copy doesn't even know the snap exists. That is why snaps take practically no time to create (you are creating a logical pointer with no data writes), no performance degradation and no limits on snap functionality (e.g. no read only or master dependencies for the snap).
"That is not the case, at least in IBM's redirect on write. The redirect is a time based differential on the master....." OK, just so we're clear we're discussing the same terms, I'll define copy-on-write and redirect-on-write, then you tell me if you agree. Probably best seeing as the storage industry, like servers, often uses similar terms with slightly different definitions.
Copy-on-write has the following three process steps for each change in data:
1. Read location of original data block (1 x read I/O).
2. Copy this data block to new unused location (1 x write I/O).
3. Write the new and modified data block to the location of original data block (1 x write I/O).
Copy-on-write puts the data in contiguous chunks on the disks but you actually have two copies of the data, the original (safely stored in case you want to roll back) and the changed, and any time you want to read the data you already know where the latest up-to-date copy is because it's where it was before, all contiguous. That makes subsequent data read operations on the stored data faster, and we all know it's a race to get data OUT of the array to the CPUs, whereas pushing it back afterwards can be handled by cache.
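To make the three-step pattern above concrete, here's a toy sketch in Python of a copy-on-write snapshot on a block store. The class and method names (`CowVolume`, `create_snapshot`, etc.) are my own invention for illustration — this is not any array's actual implementation, just the mechanism as described: on the first overwrite of a block after a snap, the original is copied out to the snapshot area, then the new data lands in place, keeping the live data where it was.

```python
# Toy model of copy-on-write snapshots (hypothetical, for illustration only).
class CowVolume:
    """Volume whose snapshot preserves original blocks on first overwrite."""

    def __init__(self, blocks):
        self.blocks = dict(blocks)   # live data: block number -> bytes
        self.snap = None             # snapshot area: preserved originals

    def create_snapshot(self):
        self.snap = {}               # empty until blocks get overwritten

    def write(self, blk, data):
        if self.snap is not None and blk not in self.snap:
            # Steps 1-2: read the original block, copy it to the snap area
            # (this is the extra read + write I/O the text describes).
            self.snap[blk] = self.blocks.get(blk)
        # Step 3: overwrite in place -- live data stays contiguous.
        self.blocks[blk] = data

    def read_live(self, blk):
        # Live reads never consult the snapshot area: latest data is
        # always where it was before, which keeps reads fast.
        return self.blocks.get(blk)

    def read_snapshot(self, blk):
        # Snapshot view: preserved original if overwritten, else live copy.
        if self.snap is not None and blk in self.snap:
            return self.snap[blk]
        return self.blocks.get(blk)
```

Note the trade-off the text describes: each first overwrite after a snap costs three I/Os instead of one, but reads of the live data need no pointer chasing.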
Redirect-on-write has two steps:
1. Read location of original data block (1 x read I/O).
2. Write modified data block to new location (1 x write I/O).
NetApp WAFL and the copycat Oracle ZFS use redirect-on-write in an attempt to push up immediate write performance. Please note that, as you state, the snap is a PiT (point-in-time) copy, and the actual whole data picture is the unchanged original data PLUS all the changes. This introduces the big problem with chuck-it-anywhere filesystems — the data is not contiguous but is all over the place. As the filesystem fills up, the heads are dancing all over the place, because reading that data requires them to go and fetch all the redirects, meaning that the CPUs do not get the data out as fast for subsequent data operations. When you want to delete your snap you will lose all the changes, of which the redirect is the only record, unless you write them back into the main copy. Thus redirect-on-write is a save-now-but-pay-more-later solution that leads to chronic performance issues which worsen as the array fills up.
Now, I understand IBM try something slightly different with XIV, where they use a dedicated savvol and write the redirects to the savvol rather than scattering them at random. Whilst better than pure redirect-on-write, this still leaves the problem of the heads jumping between the two spots to get the full data set, and also brings up reservations — XIV by default reserves an extra 10% for any volume for the snapshot. If your data changes at a high rate you will need to allocate more space for the savvol or run out of room in that 10%. Which is why XIV admins end up scratching their heads and saying "I was sure I had more disk left than that?" They also have the problem of having to reclaim data and space by writing the changed data from the savvol back into the contiguous disk after deletion; it's just that they pretend it happens in the background with no overhead (yeah, by magic pixie dust!).
LSI DPM went end-of-life at HP over two years ago, way before NetApp bought their other non-ONTAP platform. Nor is it what EVA uses for Thin Provisioning or Continuous Access; your information is hopelessly out of date, as are your caveats on EVA use. EVA was introduced in 2000; NetApp and VNX date from the very early '90s, so from a software architecture perspective those platforms are really showing their age.
The good news is HP are making it really, really easy for EVA Customers to move onto a modern high performance and feature rich array family.
".....a complete rip and replace of everything?" What a tragically stupid thing to say! You're replacing an old array with a new one, what do you think will happen? If it's an old EVA and you get a brain injury and decide to replace it with a NetApp device you still have to rip out the EVA and replace it with the new NetApp. Same goes for a new EMC device, only you're probably less likely to be accused of having brain damage. The only difference is hp have implemented a software solution to make sucking data off the EVA onto the 3PAR a lot easier as they can plug into both bits of their own software a lot easier than NetApp or EMC.
Matt, totally agree, if you're replacing an array then of course it's a 'rip and replace', but with the EVA to 3PAR migration there's less of the 'rip'! The migration is actually driven through EVA Command View so is very straightforward for EVA customers. It's actually easier than if you were replacing an EVA with another EVA. People love to throw FUD around, would they be calling it a 'rip and replace' if 3PAR was called EVA Gen 6 (or whatever).
"Matt, totally agree, if you're replacing an array then of course it's a 'rip and replace', but with the EVA to 3PAR migration there's less of the 'rip'! "
Yes, well, I am sure that 3PAR has some slick migration tools as many storage arrays do, but you are still moving from technology X to technology Y with no license retains, no skills transfer, completely new storage architecture and technology.... It would be no different than if 3PAR was still an independent company and you decided to move from HP EVA to 3PAR. Point being, there is no reason that people should give 3PAR special consideration because they have EVA in place. It is a complete replacement of the current technology... not an upgrade or a next gen.
The fact that there is a self-service, non-disruptive, no-cost migration method is a big plus. And the licenses on EVA were a few grand, not huge chunks of money like on some other arrays!
And if you're worried that having mastered your EVA (which probably took 1 day), people will struggle to learn to master the 3PAR (probably 1 more day) then you are probably the sort of person who still points at airplanes!
"....I am sure that 3PAR has some slick migration tools...." That seems to be migration tools that connect to the Command View management software on the EVA and migrate it "seamlessly" (sorry, can't use the word without sarcastic air-quotes). And seeing as hp wrote the tool for the migration and hp wrote Command View, it would seem obvious they would probably do a better job than someone from another party.
".....there is no reason that people should give 3PAR special consideration...." I'd translate that as "please consider buying my kit, please consider buying my kit".
I'd say they definitely should give special consideration to 3PAR. Not only does it provide built-in simplified migration, including end-to-end support throughout the process; 3PAR is also the best architecture on the market today and is now competitively priced from SMB through Enterprise. What's not to like?
".....a complete rip and replace of everything?" What a tragically stupid thing to say!"
Yes, but if you go from a DMX-4 to a VMAX — agreed that it would be a strange decision — you can retain licenses, you can retain skills, and you can keep the architecture more or less the same with better storage performance.
If you move from EVA to 3PAR (that is the forced march), none of the above applies.
Really! And how much do the EMC migration licensing and services cost for that DMX-to-VMAX forklift upgrade?
As per the article 3PAR shares DNA with EVA so if you're comfortable with EVA then 3PAR will be easy, so you can retain your skills but improve and future proof your architecture whilst increasing performance functionality and efficiency.
All HP 3PAR StoreServ 7000s come with a 180-day Online Import License for EVAs, and it really is do-it-yourself, but if you would like some hand-holding, then HP or your reseller will assist. The migration is also driven from the EVA Command View interface, allowing existing skill sets to be utilised throughout the process.
There is no forced march: HP will continue to sell and support EVA at least through 2013, with new upgrades and functionality on the horizon. In addition, HP will also provide a minimum five-year support guarantee beyond any end-of-life announcement for all storage products. I still have supported customers who are just starting to decommission and replace EVA5000s.
Customers can choose the simple, risk-free HP 3PAR route to a single modern architecture that scales from SMB through SME to high-end Enterprise. Alternatively, they could choose to remain on EVA and then move to one of the older, more traditional architectures from other vendors, which, given the age of such architectures, will no doubt impose a forklift upgrade on them in the very near future.
I think you'll find pretty much all of the above and more apply, barring the license transfer. Which in the EMC world would be easily offset by Migration Services and Licensing anyway.
".....agree that it would be a strange decision...." So you admit your attempt at justification of a bizarre argument is already unlikely even before you suggest it, so why suggest it?
".....you can retain licenses....." The hp salesgrunts that have been round doing the rah-rah-go-3PAR routine have said they can refund the cost of any existing licences on support against the new 3PAR ones. I suspect if I threaten them with EMC they'll also give me a trade-in on any old EVA frames on top of the tasty discount we get. And they'll do all the changes to the existing support contract in the background rather than me having to set up a new one with another vendor. I also get to keep the same local engineers I have already beaten our processes into and who already understand how our business systems function, so if we do have a serious storage issue they already know how it affects our business. Sorry, retaining licences is pretty low on the value-add list.
Not 100% on the integral file services, but you can put a 2012 gateway in front of 3PAR and provision storage directly from the Windows interface via the SMI-S provider built into the array. WS or WSS2012 with a reasonable amount of memory is very, very scalable with plenty of new file server specific features and you can scale the two (block and file) independently.
Googoobaby, I think they meant to say "yes, it's a Win Storage server or cluster of such servers in front of the array", it's just it came out in defensive mode. Personally, I don't know why hp are so defensive over the whole NAS-header thing, at least I can do anti-virus directly on the header, unlike NetApp.
There's a whitepaper cleverly hidden by HP that shows a single X3800 with WSS2008 R2 scaling to over 25,000 concurrent CIFS users. The bottleneck in the benchmark is actually the back-end disks; CPU and memory were somewhere around the 50% mark. So with the scalability improvements in 2012 and the new Gen 8 hardware, scaling shouldn't be an issue.
See page 20 onwards http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-4655ENW.pdf
The new 3PAR StoreServ 7200 and 7400 systems use 2.5" and 3.5" SAS disks, but the controllers support only 8Gb FC and 10Gb iSCSI/FCoE. Four 8Gb FC ports are embedded, and the two available x8 PCI Express Gen1 slots can be populated with two 4-port 8Gb FC cards, two 2-port 10Gb Ethernet cards, or a mixture of the two. But where are the SAS adapters for the backend?
I found the following answer in the brochure "HP 3PAR Utility Storage Objection Handling Guide" : HP will support the ability to add SAS drives to a customer’s existing array alongside their Fibre Channel and enterprise SATA drives, protecting their investment in these models. This means that SAS, SSD, FC and Nearline drives can be mixed within the same drive chassis. SAS drives will be compatible with the same 4U, 40 drive chassis that is used today for FC drives. The chassis will use a chip that translates SAS to FC so the signal to the array remains FC. The 4 drive SAS magazine looks just like the current FC magazine.
So the 3PAR backend is still FC (perhaps only 4Gb FC-AL). Each backend drive in the enclosures can be reached over only two FC paths, which is a poor implementation compared with modern SAS backend solutions.
Direct SAS implementations use two 4-path bundles of 6Gb SAS, so each drive can be accessed by eight SAS paths. This is implemented with SAS path controllers and SAS expanders; for example, look at
Please note that the StoreServ 7200, 7400, 10400 and 10800 controllers cannot be equipped with 16Gb FC in the future because the PCI Express Gen1 backbone (three x8 PCIe slots per ASIC) is too slow.
So you didn't spot the 2 x 4-lane 6Gbit/s SAS per controller for drive connectivity then? I guess that means that you endorse the StoreServ approach as a "modern SAS backend solution".
Get your facts straight. The brochure you're quoting from relates to the P10000's which are FC to the drive trays because service providers like to not have to leave free racks for expansion! The new models are SAS back end.
As for front end connectivity, 24 x 8Gb FC will be plenty for most people!!
So you've taken an HP internal document and are now deliberately misrepresenting the data within it in order to spread some rather feeble FUD. My, how desperate the competition is becoming — but I'm afraid your FUD is incorrect.
The SAS adapters are on board each node on the 7000 series, see if you can't find a diagram somewhere.
The 10,000 series uses the magazine system, the beauty of which is that it allows 3PAR customers to mix differing drive technologies without having to replace drive enclosures. So existing 3PAR customers will be able to mix FC, SAS, SATA and SAS-Midline drives in the same enclosure. In fact, in terms of performance FC and SAS drives are identical, as are SATA and SAS-MDL, so why differentiate based on the back-end interface?
This means 3PAR customers don't have to create separate pools based on the back-end interface, wasting space in the process; from a management and maintenance perspective they're treated as identical drive types. This in turn simplifies tech refreshes, enabling a seamless transition from FC to SAS and from SATA to SAS-MDL. The document you're referring to makes it plain this support will be available for existing customers, and makes no mention of the 7000 series.
On the 10,000 series the back-end enclosures are connected via FC, but there's a very good reason for that. FC allows additional enclosures and racks to sit on non-adjacent floor tiles, meaning you don't have to reserve space upfront within the data centre for ongoing expansion. You should know this, as EMC now advertise this exact same feature as a selling point of VMAX. A copper-based SAS back end would limit this distance to ~15 metres, which is fine for midrange, but at the high end FC will go 100 metres — quite important for a system that can scale to 7 racks and 1920 disks.
In terms of upgrading the front end ports, there are plenty available on all four systems, 24 x 8Gb on the 7400 & 192 x 8Gb on the 10800. If you're going to double the front end bandwidth then you'd better have the ability to also double the back end bandwidth. Otherwise you'll have a severely unbalanced system and will have just wasted your money. So I'd have to assume that when you propose a Customer add 16Gb at the front end, you also recommend they replace all of their back end adapters...I thought not.
Since you obviously don't even know what the back end of the new box looks like, I think we can take the remainder of your FUD with the pinch of salt it deserves.