Testing Procedures?
Does Google really have different tools to reveal such bugs? Or are they available to Epic as well? Then why did the bug slip through the procedures? Epic can only blame themselves.
"Commvault argues that these four products span a range of applications that other companies need multiple products to cover." ... yes, they need maybe 20 - this is what CV needed too before merging them into 4 - still 20 products at all? I really feel bad for my two large german customers trying to get an individual License agreement with CV for a year or so. Customer Feedback: "no one at CV knows their portfolio and licensing, it is nearly impossible to deal with them". Now they will start negotiations from the beginning....
Sales Engineer IS a great job - at NetApp :-) I have many solutions to real business challenges, many stories to tell :-)
But back to the matter:
"ONTAP and WAFL handle data very well, but the basic file system underpinnings are built for spinning media." - Yes - and why is this bad? Most of our customers Data is till located on spinning media. And most your customers Data is there too I assume - derived from your technology: SSDs are only day-fresh accelerator, spinning media for longterm storage. And your CASL is not optimized for the lion's share of your customers data? Ai-ai-ai...
"We think keeping snapshots on the most expensive storage possible is not an awesome idea when they can be kept more efficiently on big, fat, cheap 7200 RPM platters." - Totally agree - this is why we keep Snapshots with a short retention locally (and not on SSD as you obviously assume). But we drove this one important step further many years ago: we replicate old snapshots to smaller, larger or even systems, different Disks (typically large SATA, happy with SSD-topping) or also single controllers (with 5x9 affordable service) for longtime-retention. This is what WE can and no one else. And this is a full-backup of all your data, every 30 Minutes replicated 20 km away and kept there for 6 month up to ~ a year - typical commercial customer setup. Sorry for boring you but again I need to mention that all replicated Snapshots on the target system(s) can still be used for restore, test/dev/forensincs and, and, and...
Bringing this to customers' attention typically lets them put the millisecond blabla aside, because this is relevant, business-process-supporting functionality :-)
One last thing: you already drive a Porsche? I already doubted the revenue generation before. What do the investors say about that? Burning capital that way is maybe a short-term business model...
Have a nice weekend
from Germany
1: they cut the price in half just with their final offer, and discounts were already low. No idea how they generate revenue this way - and customers going for this are not my favourites (revenue pays employees, replacement-parts logistics, back office, service guys and so on)
2: they offered a synchronous mirror "just like MetroCluster", but only available in 2016, "I swear" - at no additional cost, now or later. First, they sold a product that does not even exist; second, I still wonder how they generate revenue (see 1) - sounds more like grabbing street attention amongst all the other flash-microsecond-you-need-only-that one-trick-pony shacks
3: they insisted so much on millisecond write latency at a prospect that they forgot to listen to the customer's read-intensive environment. The IT guy later told us that this was ridiculous and that a strategic story was missing completely - there was always a U-turn back to the pony.
And by the way, reading through Nimble's datasheet at
http://info.nimblestorage.com/rs/nimblestorage/images/nimblestorage_technology_overview.pdf
shows all the NetApp technologies - well, maybe only 1/256th of them - but all well known, absolutely nothing special.
No reason for me to be scared, Nimble.
Disclosure: NetApp Systems Engineer and proud of it :-)
I'm running the Samsung 840 EVO in my late-2009 iMac with an adapter bracket. It runs extremely fast in all random and sequential tasks, like MP3/video encoding, editing very large Photoshop files, and multiple VMs at once. Booting into Yosemite from power-on to a working desktop takes ~10 seconds! So I don't see an advantage for Windows here.
Oracle claims 640 KB granularity as the "most efficient data granularity" - well, NetApp has 4 KB granularity for snapshots, SSD and card caching, replication and so on. So NetApp is 160x more efficient than Oracle and 64,000x more efficient than the other competitors? Not bad...
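For what it's worth, the arithmetic behind those ratios (the ~256 MB figure below is only my assumption, reverse-engineered from the 64,000x - the post doesn't say which competitor granularity it refers to):

# Checking the claimed efficiency ratios. The 4 KB and 640 KB figures
# come from above; the 256 MB competitor chunk size is an assumption
# that would roughly explain the "64,000x".
NETAPP_KB = 4
ORACLE_KB = 640
COMPETITOR_KB = 256 * 1024           # assumed 256 MB granularity
print(ORACLE_KB / NETAPP_KB)         # 160.0   -> the "160x"
print(COMPETITOR_KB / NETAPP_KB)     # 65536.0 -> roughly the "64,000x"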
Oracle says "[Columnar] Compression is supported only on Oracle storage systems". That has only been the case for a few years now; it used to be supported everywhere before they bought Sun. Why? It is a software feature! It works with every 2.5" IDE/SATA/SAS/whatever disk.
Disclosure: NetApp Empl.
is a NetApp. Chris, I'm disappointed that you're not aware of NetApp's most basic functionality. We have been doing dedup on primary storage for more than seven years now. And we can replicate all data and snapshots, still deduped, to a second and even a third array. So there is absolutely no need for a dedup appliance (or for moving terabytes around for hours during backup), as everything is already deduped (and, by the way, compressed too). And if you have different primary storage and are looking for a dedup appliance for backup with heavy added value - go for Syncsort with NetApp, it rocks :-)
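A rough illustration of why that matters for the backup window - every figure below is an invented example ratio, not a NetApp spec:

# Illustrative only: data crossing the wire for a rehydrated full
# backup vs. a dedup-aware replication update. All ratios invented.
raw_tb = 20.0          # logical data set size
change_rate = 0.05     # assume 5% of blocks change per day
dedup_ratio = 3.0      # assume 3:1 dedup
compress_ratio = 1.5   # assume 1.5:1 compression on top
full_backup_tb = raw_tb
update_tb = raw_tb * change_rate / (dedup_ratio * compress_ratio)
print(f"full backup: {full_backup_tb} TB on the wire")
print(f"deduped replication update: {update_tb:.2f} TB on the wire")

With these made-up numbers it's 20 TB vs. ~0.22 TB - the point being that the heavy lifting happened once, on the primary, instead of on every backup run.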
NetApp does this on top of just stuffing virtual machines with storage:
- replicating data around on their gear, small to medium to large boxes and vice versa > consolidation, intercontinental data protection - multiple topologies, old to new boxes and back
- taking hundreds of snapshots of all data > incomparable RPO; massive test and development environments set up quickly mean excellent time to market, top SLAs
- doing MetroCluster transparent site failover in case of a hardware component error > uptime increased
- having integration software for snapshots/clones for many major business apps > ease of use
- having some thousand engineers worldwide to provide 24/7 support for every class of business
- virtualizing other vendors' storage to integrate it into the new NetApp feature ecosystem > investment protection
- delivering block and file storage from one system with the complete feature set beneath it (snaps, dedup, compression, WORM) in all system families >> true unified storage, nobody else has it
btw, who were these Tintri guys?
I have a vision....
CIO: we just spent 3 million on SAP licenses and consulting, so storage must be cheap - couldn't get a discount from SAP
Admin: but...
CIO: cheap! every disk is the same!
Admin: buys a FATA array, "cheap".
Years later, SAP performance hurts the business badly, and a new CIO is in place. The former one moved on to new challenges - he is known for implementing SAP at low cost...
new CIO: my iPad has flash memory - at least that's what I read - and it runs fast --- so why don't we buy flash for SAP?
Admin (still the same poor guy): yes, but it is very expens...
CIO: buy it! I want iPad style performance in SAP!
....
Even if they claim to be one of the largest software manufacturers. I worked for EMC for 7 years as a consultant, and I saw Navisphere, ControlCenter, Symmetrix Manager (which was split out of CC very early because they couldn't get CC to control their main storage; the same with Navisphere), PowerPath, GeoSpan/SRDF-CE and so on. They all sucked, were more than painful to install, and support was bloody bad even for internals. It's still the same. EMC can't do usable storage management software. The only stuff that was and is working is their command line tools - not very 2011.... A look at Dell's storage or others shows you where to buy.
For example: when upgrading CC in minor or major steps, all historical data was lost! This was the case from 5.0 to 6.0, over several versions, and couldn't be fixed although there was an Oracle DB underneath... they needed to drop it. Customers liked this.....
Performance management is historical and ancient - for CLARiiON and VMAX. Collect data and get a zip file. Visualize the data in a dreadful Java GUI that slows down even the fattest RAM/CPU combo. Customers still complain about this.
I would never buy an EMC system again; others just do it muuuch better.
Assume that in the future these drives will mostly be used in tiered storage scenarios to provide sufficient IO in front of a bunch of SATA (or, in marketing speak, "nearline SAS" - sounds faster, but isn't).
Then writes will go to the SSD, but some algorithm will stage data down to tier 2/3 from time to time, so the SSD will never fill up to its maximum. Let's say half the disk is filled before the data is staged. You end up with 36,000+ writes per cell, right? Does that mean it will last just 2.5 years?
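A hedged sketch of that lifetime question - every numeric input below is an assumption I'm plugging in for illustration, not a figure from the article:

# Rough SSD-lifetime estimate for the scenario above. Wear levelling
# spreads writes across all cells; keeping the drive half full also
# keeps write amplification low (ignored here). All inputs assumed.
capacity_gb = 200          # assumed drive size
pe_cycles = 36_000         # program/erase cycles per cell, as asked above
daily_writes_gb = 8_000    # assumed tier-1 write load hitting the SSD
total_write_budget_gb = capacity_gb * pe_cycles
lifetime_years = total_write_budget_gb / daily_writes_gb / 365
print(f"~{lifetime_years:.1f} years")   # ~2.5 years with these inputs

So yes, with a sustained write load in that ballpark you land near 2.5 years; halve the write load and you get roughly five.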
If my name-creation guy were to name my newest high-speed disk system after a MODEM from the stone age, I would know what to do.... And if you ask the 25-year-old high-flyer sales guy about this, he doesn't even know about it....
Whatever; renaming a product is often done because of the product's bad reputation....