Why exactly is that a problem? Would you rather have those components *not* fall in price?
ASIS (which ain't ASIS any more... it's just NetApp De-Dupe) does not support global de-dupe, i.e. it does not de-dupe across arrays. So, I don't see any issue around the 'two pools of de-dupe' area between the legacy NTAP pools and the new Engenio pools. It is also suitable for all but the highest workloads and is well proven - it is one of the gems of WAFL. I expect introducing another de-dupe product into WAFL would be very disruptive and time consuming. Net, I expect them to leave de-dupe alone, perhaps just pinching some ideas from Albireo. A full replacement? I doubt it.
My bet is that they'll allow both to run in parallel for the time being. No big deal.
Moving the Oracle DB from Itanium / HPUX to Oracle on Intel / RedHat is pretty simple to do. Why would you go & port your whole DB to Informix or DB2? Just drop the same version of Oracle onto your favourite x86 box and watch your costs drop. HP has been gouging its Itanium customers on price for a while. If you really love HPUX then leave your custom compute processes on there and put the DB on a different box... with a decent network you won't notice the difference.
It's about time Oracle users on HPUX woke up to commodity x86.
So the software does change the data - presumably irreparably - by downscaling image sizes in documents. This is crazy talk for automated enterprise use. Imagine the support calls - "Hey, Helpdesk! Who the heck reduced my high resolution image to a tiny jpeg?" Crazy talk, hence Paris.
A badly configured SAN environment will be every bit as crappy as a badly configured NAS environment. A badly configured NAS will be worse than a decent SAN. In fact, there is more to go wrong in a NAS environment (all that funky file system WAFL, support for all those protocols, and normally data transfer over a non-dedicated IP network). However, set up an enterprise grade NAS environment properly and you'll find a few things:
1. it's cheaper and more functional than an enterprise grade SAN environment
2. it'll have equivalent reliability to a decent SAN environment
3. it LOVES big databases, particularly Oracle
4. it LOVES VMware and VMware LOVES it
5. with a dedicated 10GbE environment from server to switch it performs like a champ
Of course, if you set up the NAS environment poorly, then a decent SAN will thrash it.
Don't make an industry wide comment based on your narrow experience.
You are right on the architecture and most financials do the same, particularly for messaging apps. However, these are not on separate spindles but rather separate LUNs.
Motherhood & apple pie explanation: the enterprise storage device your app runs on 'virtualises' the physical spindles by breaking them into thousands of small extents and then groups these extents into LUNs which the server uses. These LUNs may well have blocks of data on the same spindles. The same principle exists if you run on NFS from an enterprise filer.
We've been trying to educate DBAs to stop talking about spindles for 15 years...
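The 'extents into LUNs' point can be sketched in a few lines of Python. This is a toy model with made-up extent and spindle sizes, not any vendor's actual layout, but it shows why two "separate" LUNs routinely land on the same spindles:

```python
# Toy model of extent-based virtualisation: physical spindles are chopped
# into fixed-size extents, and LUNs are carved from extents drawn across
# the whole pool.

EXTENT_MB = 256

def build_pool(spindles, spindle_gb):
    """Return (spindle_id, extent_index) chunks for the whole pool."""
    per_spindle = spindle_gb * 1024 // EXTENT_MB
    return [(s, e) for s in range(spindles) for e in range(per_spindle)]

def carve_lun(pool, size_gb):
    """Pop extents off the shared pool to form one LUN."""
    needed = size_gb * 1024 // EXTENT_MB
    lun = pool[:needed]
    del pool[:needed]
    return lun

pool = build_pool(spindles=8, spindle_gb=1)   # 8 x 1 GB toy spindles
# Interleave so consecutive extents land on different spindles, mimicking
# a wide-striped layout.
pool.sort(key=lambda x: (x[1], x[0]))
lun_a = carve_lun(pool, 2)
lun_b = carve_lun(pool, 2)

spindles_a = {s for s, _ in lun_a}
spindles_b = {s for s, _ in lun_b}
print(sorted(spindles_a & spindles_b))  # both LUNs sit on all 8 spindles
```

Which is the point for the DBAs: "separate LUNs" tells you nothing about spindle separation.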
I actually expect very large sequential workloads to be great on NetApp's PAM cards. The predictive algorithm should allow just the active parts to exist in the read cache buffer. Where EMC's SSDs come into their own is for highly random reads or for extremely high writes, both of which are a challenge for a predictive cache engine. Both have a place, but with cache sizes ever increasing my bet is on the flashcache / PAM card / readzilla model for general adoption.
Well now, I'd just like to thank all the architectural posters on here who apparently know bugger all about edge cases. Thanks for the amusement.
Just because your app runs fine on your big fat general purpose EMC/HDS rig (as do 95% of mine) does not mean that there is no place for something different if the performance profile demands it. There must have been a very good reason for the airline going V-Series and TMS. It's about IOPS and latency. Ever tried getting 150k random IOPS out of a 2TB dataset? Try it with a VMAX or USP and you'll be short-stroked to hell. Try using SSD in a VMAX or HDS and you'll flatten the DAs. TMS is tuned for performance over availability and management, but it is damn fast if you need that kind of throughput. Mitigate that by running NetApp's SyncMirror plus SnapMirror and you'll have an incredibly fast, resilient rig that costs a bomb. But, if you need those 150k IOPS it works very well indeed.
So, just because *you* don't need it in your shop doesn't mean other folks don't.
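For anyone who wants the back-of-envelope numbers behind that 150k IOPS claim, here is the spindle arithmetic. The per-drive IOPS and capacity figures are illustrative assumptions, not vendor specs:

```python
# Back-of-envelope: spindle count needed for 150k random IOPS from a
# 2 TB dataset on 15k FC drives. Assumed figures: ~180 random IOPS per
# 15k spindle, 300 GB raw capacity per drive.

target_iops = 150_000
dataset_tb = 2
iops_per_spindle = 180     # assumed random-read IOPS for one 15k drive
drive_gb = 300             # assumed raw capacity per drive

spindles_for_iops = -(-target_iops // iops_per_spindle)      # ceiling division
spindles_for_capacity = -(-dataset_tb * 1024 // drive_gb)    # ceiling division

print(spindles_for_iops)       # 834 drives just to serve the IOPS
print(spindles_for_capacity)   # 7 drives to hold the data
```

The gap between those two numbers (hundreds of drives for performance vs a handful for capacity) is exactly the short-stroking problem the comment describes.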
Well, there is still Pillar. Although they have struggled to get into large enterprise, they now have a decent track record of service. Maybe they are an opportunity for Dell to pick up on the cheap and invest in. You can do a heck of a lot for $2bln... which was exactly my thought when NTAP lost out to EMC on the Data Domain deal. CommVault is the next one to go in my opinion.
Fowler doesn't make much of a play on the benefits of de-dupe, which have a massive upside on long term data storage but realistically need random access. Nice fat SATA disks are lovely for it, particularly if you can build de-dupe pools and then spin them down. You can regularly health check the disks (which you can't really do with tape)... Can you imagine the access time on a 20TB tape? Or the shoe-shine? Horrible... Can you imagine the impact of losing an unprotected 20TB tape? Nasty... Folks will have to start RAIDing tape somehow (probably double write). All this is pretty obvious to the industry but apparently not to Oracle.
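To put a rough number on the 20TB tape point: even just reading the whole cartridge end to end (say, for a health-check pass) takes ages. The streaming rate below is an assumed figure for a hypothetical drive, not a real product spec:

```python
# Rough math for a hypothetical 20 TB tape at an assumed 160 MB/s
# native streaming rate: how long one full end-to-end read (e.g. a
# media verification pass) would take.
tape_tb = 20
stream_mb_s = 160  # assumed native streaming rate

full_read_h = tape_tb * 1024 * 1024 / stream_mb_s / 3600
print(round(full_read_h, 1))  # ~36.4 hours for a single verification pass
```

A day and a half per cartridge just to check it is why regular health checks are a disk-pool luxury, not a tape one.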
I have to say that this announcement appears to be more about hyping Oracle assets than delivering a sensible product strategy and we've not even started talking about whether the industry actually wants Oracle's bespoke stack approach at all. Perhaps Fowler has forgotten what OpenSystems is all about.
True, VMware is an EMC company.... but it is held at arm's length and left to make its own decisions on server and storage strategies. You will see joint CEO level briefings with VMware and NetApp, VMware and 3PAR, VMware and anyone who can lower the TCO of virtualising. I did the math myself: today NTAP with de-dupe over IP is cheaper and easier than VMAX or CLARiiON over FC. Sure, this will all change in the future as vendors leapfrog but today the punter is voting with his wallet. You've got to see the positive side of this for EMC: VMware revenue is ballooning! Why would Tucci force a partnership between EMC and VMware that damages his golden goose? That's not smart, so for now he lets VMware partner with whoever is needed to drive the virtualisation market onwards.
Fast? I should say so. Just loaded it. I think (no science involved) it's 4x the speed. Lightning quick. Thank heavens you can turn off 'mobile view' by default (although some sites like www.telegraph.co.uk still push you to the mobile version). Scrolling is a bit less smooth than Safari but multipage management is a breeze. Overall, a great first attempt.
IR35 was a legitimate reaction to widespread tax abuse within the contracting industry. If the Tories win I shall lobby for certain parts of it to be retained. Contracting in the 1990s was a scam. So, your partner gets half the dividends? Laugh. You buy a motorbike on the company and sell it back to yourself at half cost within a year? Laugh. Your business LCD projector is 99% used for watching DVDs? Laugh. All a big long laugh... all the way to the bank. I do not doubt that contractors need allowances but the ability to pay dividends on 90% income... and claim every damn thing and then some? No way. As stated elsewhere, the bad guys spoiled it for the good guys. Problem is, ALL the contractors I knew were the bad guys and some still are.
7 years ago VTL was a big deal because you wanted a Virtual Tape Library to 'pretend' to be a real one. You got performance improvements, replication, etc, and most importantly easy integration because this disk thingy looked & smelt like a tape library. Heck, it even came with tape library personalities. Fast forward 7 years and all the decent backup products support IP or FC attached disk pools. Thus, you don't need all that tape library emulation and even if you DO want a real tape (long term archive, etc) you can easily duplicate from your disk pool to tape. So, NTAP has a pretty good de-duper on its IP storage and thus you could argue that neither it (nor anyone else) really needs VTL.
Now, the problem I see is that NTAP has a post process engine in its primary filesystem and DataDomain has real time in its secondary filesystem. RT is a much easier sell these days, now that performance is good. But Sun's expected release of RT de-dupe in AmberRoad is a big deal to DataDomain and NTAP: it'll be the first RT de-dupe in a primary filesystem. Let's see how they respond to that.
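The basic real-time de-dupe mechanic - hash each block on write, store only previously unseen blocks, hand back byte-identical data on read - can be sketched in a few lines. This is a toy, nothing like a production filesystem:

```python
import hashlib

# Minimal sketch of inline (real-time) block de-dupe: each fixed-size
# block is hashed on write, and only previously unseen blocks are stored.
BLOCK = 4096

class DedupeStore:
    def __init__(self):
        self.blocks = {}   # sha256 hex digest -> block bytes
        self.files = {}    # file name -> ordered list of block digests

    def write(self, name, data):
        refs = []
        for i in range(0, len(data), BLOCK):
            chunk = data[i:i + BLOCK]
            h = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(h, chunk)   # store only new blocks
            refs.append(h)
        self.files[name] = refs

    def read(self, name):
        # The client always gets back exactly what it wrote.
        return b"".join(self.blocks[h] for h in self.files[name])

store = DedupeStore()
payload = b"A" * BLOCK * 3 + b"B" * BLOCK
store.write("f1", payload)
store.write("f2", payload)            # duplicate file: adds no new blocks
print(len(store.blocks))              # 2 unique blocks stored for 8 written
print(store.read("f2") == payload)    # True: reads are byte-identical
```

The key property is that last line: whatever cleverness happens underneath, a read returns exactly what was written - which is also what separates real de-dupe from lossy tricks like image downscaling.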
The de-dupe market is certainly hotting up, but forget the VTL end of it. It's the fag end.
I've always liked 3PAR: great engineering, great functionality, good company ethos. The only downside was pricing. So, put this nice chunk of block technology into HP (please not Dell!) or Cisco and you will have the right distribution network driving higher volume, lower cost of sale, and ultimately lower overall unit cost. I can't see why HP under DaveD wouldn't want to do this.
No surprises here, except that HP enterprise storage sales have not declined even further. Why on earth would anyone buy EVA SAN or PolyServe NAS unless locked in to it? It's been under-invested for years and is now generations behind CLARiiON, 3PAR and NetApp. If Donatelli wants to turn around his storage business then making EVA competitive must be his #1 priority... or maybe he'll ditch it as a bad lot and jump for 3PAR? This would be a tragedy in my opinion: it's not a bad product and comes from a great lineage, but simply does not cut it in today's highly competitive mid-range space.
Psymon - Drobo is actually pretty smart. It does not require same-sized spindles. Instead, it uses an extent based RAID. As long as each extent is replicated off disk then you are OK. Thus, you can protect one 1TB drive with 2x500GB, etc. Believe me, the concept is smart. So smart that I bought one early this year. Its downfall is the code quality and support. It scared the crap out of me. I had a failing disk but it didn't tell me which disk was going bad. It just went offline for about 18 hours 'rebuilding' (which it is not supposed to do) and then came back fine showing all disks good. It did this a couple more times. The logs are not customer readable (WTF!?) so you raise a support case and wait 2 weeks for an answer. Meanwhile, you are exposed to a total data loss scenario while they work out which disk is bad. No thanks! I sold the lot on eBay and built my own with external FireWire 2TB disks.
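The mixed-drive protection rule described there boils down to a one-liner: if every extent must have a copy on a different disk, usable space is roughly the total minus the largest drive. This is a simplification of any real extent-based layout, but the arithmetic holds:

```python
# Capacity rule for a single-redundancy extent-based layout with
# mixed drive sizes: every extent needs a copy on a *different* disk,
# so the largest drive can never hold more protected data than all
# the others combined. Usable space ~= total minus largest drive.
def usable_gb(drives):
    return sum(drives) - max(drives)

# One 1 TB drive protected by two 500 GB drives: all 1 TB is usable.
print(usable_gb([1000, 500, 500]))         # 1000
# A 2 TB drive in a pool of smaller disks only contributes what the
# rest of the pool can mirror.
print(usable_gb([2000, 1000, 1000, 500]))  # 2500
```

Same-sized drives give the familiar N-1 result; the formula just generalises it to mismatched sizes.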
The problem with most of these home NAS boxes is that the code fails more often than the disks. If EMC or NTAP put their CLARiiON or FAS code into a tiny home device I'd jump at it, but this low end code from QNAP, Thecus, etc, isn't stable enough and I'd rather use a scheduled mirror process (I use ChronoSync on my Mac server between two external 2TB drives) than blindly trust someone else's crappy code.
I think the days of the fat FC array are coming to an end. FCoE is a stepping stone to using NFS pretty much everywhere in the distributed space. Why bother with all that extra FC complexity & cost when you can run directly over NFS for the vast majority of applications? 10GbE and DCE give you the bandwidth and low latency... IP networks can be designed to be every bit as reliable as FC and you don't get that hideous interoperability mess that FC forces on you. So, who needs FC except for non-virtualised Windows apps? It's all going IP & NFS folks.
Fundamental to de-dupe is the premise that the data remains unchanged to the client. Sure, DataDomain & co may do cunning things in the background to remove identical blocks, but when you read the data you get back exactly what you had in the first place. Not so Ocarina as far as I can make out. Here, the data is actually changed at the presentation layer. JPEGs are downsized, etc. OK, that is a useful tool in spots but it's not generally applicable and is not de-dupe. Imagine the angst when someone goes to view a document and sees their high res JPEG replaced by a medium res one... I get the feeling that Playboy.com won't be using this tool.
I also wonder about the viability of using a third party tool created & supported by a tiny company to irreversibly change my data. Perhaps HP are happy to do this because hardly anyone buys their NAS stuff anyway? Personally I'd rather use real de-dupe and real compression... and get back my data exactly the way I wrote it. I'd also like to buy from a company that I am confident can support me. If all my Whitesnake MP3s get replaced by Winnie the Pooh GIFs I need to turn to a trustworthy company and not a startup!
For all those who think Drobo Mk2 is great because it's not dumped on you yet / you've only had it for 2 days and love the cool look / read some reviews online and trust them: listen to those who have been through the pain, been scarred, lost data, found other better solutions.
By the way, Drobo Mk1 appears to be less of a problem, based on the online material I have read / personal experience.
I had a DROBO Mk2 until very recently - running latest firmware. I'm glad I don't have it any more.
It was chronically slow over FireWire (any disk writes during media streaming would cause my movie to slow to a stop: useless!). Three weeks ago my device took itself off line for many hours - flashing amber & green lights - and then came back apparently happy as if nothing ever happened. After I recovered from the heart attack it settled down, but then my DROBO did the green / amber flashy thing (albeit for only 10 mins) several times in the following days. Again, no reason for doing so. Of course, I raised a call… and got no response from DROBO support after I mailed in my 'customer unreadable' log file. What is the point of a customer unreadable log file, for heavens sake?
Others I talk to have had the same problem with data unavailability and have run away from the product quickly. I also read startling web blogs from DROBO Mk2 owners who had lost everything during firmware upgrades or spurious multi-disk failures.
Wish I had known about this before buying and not just trusted the trade rags which only test a product for a few days.
So, why exactly should I trust my data to a product that is slow, unreliable and has non-existent support? Answer - I won't. Instead, I put in place a scheduled mirror process between two external FireWire 800 WD Studio disks which is a far faster solution and gives me confidence that if I lose one of them the other will be OK.
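The scheduled mirror idea is simple enough to sketch. This is an illustrative stand-in for a tool like ChronoSync, not its actual behaviour, and the paths and scheduling (cron, launchd, etc.) are up to you:

```python
import filecmp
import os
import shutil

# Minimal one-way mirror: copy anything new or changed from src to dst.
# A scheduler (cron / launchd) would invoke mirror() periodically.
def mirror(src, dst):
    os.makedirs(dst, exist_ok=True)
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        out = os.path.join(dst, rel)
        os.makedirs(out, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(out, name)
            # Copy only files that are missing or whose contents differ.
            if not os.path.exists(d) or not filecmp.cmp(s, d, shallow=False):
                shutil.copy2(s, d)   # copy2 preserves timestamps/metadata
```

Note this never deletes from the destination, so it protects against drive failure but not against propagating your own mistakes - the point is that two dumb copies of the data beat one clever box.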
Like so many of these 'home storage solutions' the code they run is less reliable than the disks themselves. They run proprietary RAID algorithms or filesystem code which is nowhere near robust enough. The bottom line: don't trust your data to just one supposedly reliable product - it will fail, guaranteed. And make sure you read up on the reliability of the product before you jump in with cold cash. The web is full of horror stories about DROBOs suffering data unavailability or total data loss.
I was lucky - managed to get my data off without loss. Others have not been so lucky. Buyer beware.
Biting the hand that feeds IT © 1998–2021