Taking the high road - well done, Kiran!
Rob
Rob from Tegile here.
I completely agree with you, ArrZarr.
I've had countless discussions with Gartner on this one. We agree that the SSA and GPDA should merge. Let the Critical Capabilities Report sort out where individual arrays fit in various workloads.
Economics will resolve this soon, but hang on to your hat - there are other product segments Gartner reports on that I have to imagine will warrant their own MQ. Think Object Storage, Shared Memory Storage, the list goes on.
Rob
True - people hear "hybrid" and they think SSD+HDD. But step back a moment and think "hybrid" = fast tier + slow tier. That makes room for the notion of a tier of, say, NVMe SSD in front of cheap-and-deep SSD. Recently, our industry friend Enrico Signoretti posted a blog on Juku.it titled "Your next all-flash array will be a hybrid" - he nailed it. Systems designed for multiple tiers of media can make the data management headaches (and redundant costs) between the tiers "go away".
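To make the fast-tier/slow-tier idea concrete, here is a minimal sketch of a two-tier store with a simple access-count promotion policy. Everything in it (the class, the threshold, the policy) is my own illustrative assumption, not any vendor's actual data-placement engine:

# Minimal two-tier sketch: new writes land on the slow tier,
# and blocks that get hot are promoted to the fast tier.
FAST_TIER_HOT_THRESHOLD = 3  # hypothetical: promote after 3 reads

class TwoTierStore:
    def __init__(self):
        self.fast = {}   # e.g. NVMe SSD tier
        self.slow = {}   # e.g. cheap-and-deep SSD (or HDD) tier
        self.reads = {}  # per-block read counts

    def write(self, block_id, data):
        self.slow[block_id] = data  # new data lands on the slow tier

    def read(self, block_id):
        self.reads[block_id] = self.reads.get(block_id, 0) + 1
        if block_id in self.fast:
            return self.fast[block_id]
        data = self.slow[block_id]
        if self.reads[block_id] >= FAST_TIER_HOT_THRESHOLD:
            self.fast[block_id] = data  # block is hot - promote it
        return data

The point of building both tiers into one system is exactly what Enrico describes: the promotion/demotion plumbing lives inside the array, so the administrator never has to manage the two tiers separately.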
Chiming in from the vendor side, I am sad to say that I agree with the survey respondents. Having been in storage marketing for over 20 years, I've never seen vendor FUD and bashing as bad as it is these days. I fear that as the game changes again with NVMe, the FUD and bashing will only get worse. I hope I'm wrong.
Hi Chris - Rob at Tegile here.
Today’s market is definitely causing some companies to question their exit strategy, especially among the storage players. The dynamics are changing, and the truth is that there’s no “tried and true” way to exit. Our CFO actually just wrote a piece on this in CFO Magazine. It’s worth pointing out the role the management team plays in this process. Some people want to change the world. Others enjoy the process of building and growing a business. That will have a big influence on whether and when a successful exit is planned.
I don't see a way to generate links within the comment editor, so I'll just post the link here: http://ww2.cfo.com/ipos/2016/07/avoid-these-mistakes-when-planning-an-exit-valuation/
All the best!
Rob
This is a table-stakes feature in the storage business now. I am with Tegile - we have IntelliCare. Nimble has InfoSight, Pure has Pure1. I used to be with 3PAR, and we had analytics like this upwards of 10 years ago (pre-cloud, granted). As commented before, NetApp has had this functionality for years too. I am confused why Tintri's extremely late-to-market offering warrants a story here. Or is that the story?
Oh, and by the way, Tintri product management folks - none of us charge for it.
Isn't it better to acknowledge that there will always be storage media optimized for performance and reliability that carry a cost premium, just as there will always be capacity-optimized media that are relatively inexpensive? It is the job of the systems vendor and the IT leader to strike the right balance for a particular environment, and to retain the ability to change over time as requirements change. Twenty-some years ago we advanced from 3,600 to 5,400 to 7,200 to 10,000 to 15,000 RPM drives. Now we're going through a similar phase with different grades of flash.
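For a sense of why each of those RPM steps mattered: on average a read has to wait half a revolution before the data passes under the head. A quick back-of-envelope calculation (Python, purely for illustration):

# Average rotational latency is half a revolution: 0.5 * (60 / RPM) seconds.
for rpm in (3600, 5400, 7200, 10000, 15000):
    latency_ms = 0.5 * (60.0 / rpm) * 1000
    print(f"{rpm:>6} RPM -> ~{latency_ms:.2f} ms average rotational latency")

That works out to roughly 8.3 ms at 3,600 RPM down to 2.0 ms at 15,000 RPM - each media generation bought latency at a cost premium, which is the same trade-off now playing out across grades of flash.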
Hi yo_G - This is Rob Commins, the Tegile guy quoted in the article. You are absolutely correct - it's the software, stupid! Our branch of ZFS has been performing incredibly well from a performance, functionality and reliability standpoint since we started shipping in 2011. I have been in the storage business since 1986 and have never worked with a company that has such a huge set of elated customers. Some may want to argue about the code underneath the hood and whatnot; all I can say is the kit works, and for every dollar our customers spend on their initial deployment, they spend over $2.80 on average within a year - on capacity and performance for existing systems and on net-new systems - because they want to deploy us in a bigger way or in more parts of their business. They wouldn't be doing this if we lost data or fell on our face.
I hope that helps,
Rob
Nick -
Please don't try to differentiate Nimble based on having home-brewed code versus Tegile's leveraging of ZFS. Differentiate on customer value and stay out of the weeds. That is where the battle is won. For example:
Data Reduction with Deduplication AND Compression
Compression is great, but dedupe is where it's at in virtualized environments, with redundant VMDKs and VDI images all over the place. Take 20-30% compression ratios (we all use very similar compression algorithms) and amplify them to 70-90% total reduction with dedupe! That pushes the effective cache size and disk capacity far beyond what compression alone delivers. Our users get up to 200,000 IOPS at well below a dollar a GB from a single system.
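The arithmetic behind that amplification is straightforward if you express reduction as the fraction of the original footprint removed. Using hypothetical round numbers (10 TB of usable capacity, 25% reduction from compression alone, 80% once dedupe is added):

# Effective capacity = raw capacity / (1 - reduction fraction).
raw_tb = 10.0
for label, reduction in [("compression only", 0.25),
                         ("compression + dedupe", 0.80)]:
    ratio = 1.0 / (1.0 - reduction)
    print(f"{label}: {raw_tb} TB holds ~{raw_tb * ratio:.0f} TB ({ratio:.1f}:1)")

Compression alone turns 10 TB into roughly 13 TB effective (about 1.3:1); layering dedupe on top at 80% total reduction turns the same 10 TB into 50 TB effective (5:1). The same multiplier applies to cache, which is why a dedupe-aware cache behaves so much bigger than its raw size.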
Unified Access
Users want freedom of choice and flexibility. Why have iSCSI only when you can have FC/iSCSI/NFS/CIFS all on the same platform with no gateway? Database pointing to unstructured data? Sure thing. VDI desktop and data - you bet. No gateway required.
Active/Active Controllers
Funny - we use the same SuperMicro chassis, but Tegile users can leverage both controllers to maximize performance for all applications. That means twice the horsepower all the time. What's that? You're a service provider or an enterprise that uses chargebacks with really tight SLAs? That's OK - you can put a Tegile array in active/passive mode if you prefer.
Oh, and please don't compare Tegile to Nexenta and GreenBytes (you may as well throw Oracle in there too, then); we've focused our developers' attention on improving and optimizing the ZFS platform you poke at. ZFS gives our developers the opportunity to focus on real differentiators instead of building table-stakes functionality from scratch, which takes 5-7 years to truly iron out. Trust me - the last two storage firms I was at did it from scratch, and it takes a lonnnnng time to get it right.