Actually, this is only 9 projects. If you were to go back, this number would be much bigger. It doesn't include, for example, the billions wasted on the NHS email IT system.....
55 posts • joined 7 Oct 2014
Buying a second-hand hard drive on eBay? You've got a 'one in two' chance of finding personal info still on it
Demand but not margin
If you were looking at the market and saw that SSDs were in demand and had much better margin than HDD, which would you focus on? This just says that WD are focusing where their capital can be best applied. For them it clearly isn't in the HDD market. How long before manufacturing capability is sold off to the Chinese?
I hear the same message again. Ethernet will kill FC. Yep, just like FCoE killed FC. It didn't happen. Why? Because it isn't all about the technology. It's about support, operational processes, division of teams and separation of risk. In some scenarios, Ethernet will be more appropriate, but it's not going to be a wholesale bloodbath for FC, if at all.
Dell/EMC (or really EMC) has a number of issues:
1. Credibility - the "charge customers a fortune or refresh" cycle doesn't work any more and is even less relevant with flash.
2. Innovation - The company moved forward in innovation by acquisition. Dell/EMC has had a trail of missteps in products/companies they've purchased and they don't have the money these days.
3. Competition - the competition is simply better at marketing and selling than Dell/EMC is. Dell/EMC also has internal competition from its own assets, like VMware with Virtual SAN.
All of these things are going to be hard to change.
Clear as Mud
The "Spectrum" branding just doesn't work for me. I have to keep thinking of the old product names. Having two products says that IBM couldn't reconcile GPFS to smaller shops, so had to OEM another product for those requirements. Two totally separate products. They've just done the same for data protection. Honestly their portfolio is just too confusing.
The trouble with looking at raw media costs is that it doesn't give an idea of the overall cost of operation/ownership. While it makes sense to tier data as much as possible, the process of doing so is hard and can be expensive, especially when the profile of data changes over time. Sometimes it's easier just to leave the data in one place and take a hit on the extra cost.
The Golden Age of flying is over
The budget airlines already do London <-> Tel Aviv in standard single aisle planes and it's not fun because they are the same planes kitted out to do 90 minute journeys. The personal space is non-existent, with so much crammed under seats you can't stretch out properly.
I do wonder how much of this is the airlines trying to make a profit compared to the tax imposed by governments. It seems that every time the airlines get some wriggle room, the tax authorities steal that profit back with a higher APD.
Personally, I simply wouldn't fly long haul on a single aisle plane. I'd rather not go and save my money for a better quality flight.
Why buy into VMware now, when shares are at an all time high? It's the reverse of Gordon Brown selling off all the gold when prices were lowest. Also, if components like Pivotal are key to the future, why sell that off? Surely Dell should be selling off the business that is declining or has no margin.
As has previously been discussed, Dell bought into a business (EMC) that was at an inflection point. Tucci didn't know what to do next because the Federation idea didn't really work. Rather than give someone else a go, he sold the company. Buying EMC gave Michael Dell a vanity project to claim: the biggest private infrastructure vendor and the biggest IT acquisition. But really, with so much public cloud growth, where's the future for them?
The issue here is inertia, not gravity
Ultimately this problem is one of data mobility, caused by inertia. Moving large amounts of data around is still hard because of the speed of light. We can't instantly make data accessible in multiple places without some kind of trade-off. Many trade-offs exist: keep multiple copies if you don't care about consistency, for example.
Current thinking (including the idea of data gravity) assumes that data and storage are the same thing. In fact, we can separate the two. Metadata tells us what we think we have. Storage lets us actually access it. If we work on metadata until we need to access the actual data, we can make it appear that our data has no inertia and exists everywhere. This is the startup opportunity described.
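A crude way to picture the metadata/storage split is a global catalogue that knows what exists and where, and only moves bytes when someone actually reads them. A toy sketch (all names invented for illustration, nothing to do with any real product):

```python
class GlobalCatalogue:
    """Metadata travels everywhere instantly (it's tiny); the actual
    bytes stay put until a site needs them, so data appears to have
    no inertia even though moving it is still slow."""

    def __init__(self):
        self.entries = {}  # name -> location of the actual bytes
        self.stores = {}   # location -> {name: bytes}

    def register(self, name, location, data):
        self.entries[name] = location
        self.stores.setdefault(location, {})[name] = data

    def read(self, name, local_site):
        location = self.entries[name]
        if location != local_site:
            # the expensive part, deferred until this exact moment
            data = self.stores[location].pop(name)
            self.stores.setdefault(local_site, {})[name] = data
            self.entries[name] = local_site
        return self.stores[local_site][name]

cat = GlobalCatalogue()
cat.register("report.csv", "london", b"rows...")
print(cat.read("report.csv", "new-york"))  # bytes copied only on first access
```

Every query before that first read works purely on metadata, which is the startup opportunity as I read it.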
Re: shuffling the deck chairs on the Titanic
The Spectrum branding was just plain stupid. The names don't 100% reflect what the products do, whereas XIV, SVC, TSM, etc. were names customers were familiar with. I have to look every time to work out what the Spectrum XXX products are.
When you feel a need to rebrand a set of disparate products to make them look like a strategy, then you've lost the plot.
The Fibre Channel argument is a false one. Implementing NVMeoF is only possible on the latest releases of Fibre Channel (16/32 from memory), so customers will have to replace technology somewhere. I doubt end users are fully 16 or 32Gb/s today. So if they have to rip and replace, it may be the perfect time to look at harmonising networks and using Ethernet.
What's more likely is FC gets retained because the storage teams don't want the network team messing about with network topologies they don't understand. Plus, isolation is sometimes no bad thing, if you have the scale to justify it.
Too little, too late
Take a look at the existing HDD technologies that have come to market: many took 10-15 years (or more) to be developed and matured. Does this new technology exist? How long will it take to bring to market and what market is it looking to serve?
Say it takes 5 years to bring to fruition, will it be delivered as a new drive vendor or something Seagate/WD would acquire?
With all these questions, let's think where the flash and Optane markets will be. Flash is outpacing HDD drive capacity increases. QLC will drop costs further. In 5 years, there will be even fewer reasons to use HDDs, so the ROI on the investment of this technology seems very small and un-investable.
Without a doubt the network providers throttle when roaming. It may not be the home provider doing it, but it could be the network on which you're roaming. I travel to the US regularly and either every US network is terrible everywhere, or Three/AT&T/T-Mobile are throttling.
The mobile networks need to wake up and realise that they are actually just ISPs these days. Voice/texts are irrelevant as customers mostly care about data. However we're stuck in a cycle of cheapest being perceived as better by our network providers rather than focusing on value and service quality. How long will it be before the likes of Apple bring out a native SIP-enabled iPhone and start eroding the need to be on one network or another? I know we can do a lot of that today.
Flash Reliability and availability
SSDs are just as reliable as disk. In fact, recent SMR drives have started to be quoted with "Terabytes written" figures, making it obvious that manufacturers know the latest recording techniques are becoming less reliable.
The only problem for flash is the price. End users will suck up all the capacity available, so as more manufacturing capacity comes onstream, HDDs will gradually be replaced. Costs will come down. Just look at DRAM prices from the 1990s to now to see how this will play out.
What the HDD manufacturers should have focused on was fixing some of the underlying problems in the technology: the single interface for data access and the single I/O stream, for example, and allowing drives to continue working with failed sectors. Most important, making drives that spin much slower, for use as low-power, high-capacity storage.
Rising Development costs, commodity prices
One of the issues for the HDD industry is the cost of individual drives, now a commodity, set against the cost of research. Bringing HAMR and other techniques to products is no doubt expensive, but the unit cost of a drive is roughly what it was 3/4/5 years ago. How can the HDD vendors survive if they can make no money from their products?
Work overran by at least 13 hours
The original maintenance slot was due to finish at 12pm UK on Sunday and overran by at least 13 hours (I received a text notification overnight, around 1:17am I think). The downtime highlights clear problems in the project plans for implementation and backout. A "no-go" should have been declared a lot earlier and even if it was, having a 12-hour backout plan doesn't seem good.
Banks and other companies are in an increasingly difficult position because of 24-hour working. Standard "batch" processes the banks relied on (and still do) for many years, just don't work. These sorts of companies need people with bigger vision and better ideas.
Why contract these days?
When you look at stories like this (which may or may not be indicative of the wider industry), I wonder why anyone contracts these days. Add in the UK government's obsession with "equalising" tax payments between those who have permanent jobs and those who may find they don't get paid for two weeks next month, and it isn't surprising that we keep claiming there is a skills shortage.
Who would want to work under these conditions?
Such a true reflection of a sad world
Simon, so true. You hit the nail on the head here when you highlight how Facebook has become intrinsic to many organisations' businesses. Think of the schools, colleges, pubs, shops, etc that direct you to their Facebook page for information. That easy solution to not building your own website has ensnared these organisations into something they can almost certainly not get out of without serious financial investment.
Facebook has transformed from an interesting social experiment into something not far off a dystopian nightmare. What right-minded company continues to claim it will "police" videos of hangings, murders and suicides without having a moderate-before-publish policy? One that is interested purely in profit, I think we'll find, and not in the human impact of its greed.
Academically but probably not practically interesting
These studies always have an interesting academic angle (although in this case the results seem a little obvious), however the details never take into consideration the bigger picture.
I'm unlikely, for example, to be able to fully max out multiple bare-metal Linux servers to the same degree of efficiency I can achieve running containers. Dedicating hardware resources creates fragmentation that wipes out any savings. Just think of how workload demand curves change over the course of a single day for any given application.
Then there are questions on poorly written code, or even more efficient code that works better as a microservice than monolithic application. What about patching and maintenance? What about backup (backing up a stateful bare metal server may consume more resources than never backing up a stateless Docker host).
As I said, interesting academically with little practical use.
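A toy calculation makes the fragmentation point; all the numbers below are invented for illustration, but the shape is typical of real demand curves:

```python
# Three apps whose peak demands never coincide. Dedicated bare metal
# must be sized for each app's own peak; a shared container host only
# needs to cover the combined peak at any one time.
demand = {  # CPU cores needed in three periods of the day (invented)
    "web":    [8, 2, 1],
    "batch":  [1, 8, 2],
    "report": [1, 2, 8],
}

dedicated = sum(max(series) for series in demand.values())
shared = max(sum(period) for period in zip(*demand.values()))
print(dedicated, shared)  # dedicated needs 24 cores, shared only 12
```

That halving of hardware is exactly the saving these bare-metal-vs-container studies tend to leave out.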
Chad's blog talks about Unity as a project or codebase, rather than a product name. We all know that products have internal project names that don't reflect the final branding. As a previous commenter said, EMC screwed up, plain and simple. If I were the Nexsan board, I'd negotiate with Dell/EMC for a payoff and agree to release the next version of their product under a slightly different name. The cash will be helpful in sustaining their business.
One other thought: EMC has always used the idea of "independent blogs" to somehow imply that Chad & Co are writing independent opinions from outside the company. It's a tactic also used by VMware. In this instance the process has backfired. Dell/EMC should establish clear boundaries on what can be put on a personal blog and what should come from corporate. Eliminate the ambiguity for customers (and now the lawyers).
There's a typical culture here of knowing how to do things best without accepting that IT is a 50+ year old profession, with many professionals who know exactly how to implement and test a backup regime. Startups simply don't want to engage on the important things because they think they know it all. Imagine being a builder and not learning about architecture. Imagine being a chef and opening a restaurant with no training or skills.
There are hundreds of IT professionals out there who are laughing at the naivety of a company like this. Take a step back and get some advice from those with experience. It will cost less than you think and may ensure your business survives into the future.
Don't forget the processing effort
Don't forget in these calculations that the CPU doesn't spend 100% of its time reading and writing data from external storage. In fact, with a decent amount of DRAM and clever data mapping, the processor might not read/write that often, depending on the application.
Also, we have to bear in mind that when the processor does "do work", it may be at the request of an external call (e.g. a website) or some other user interaction that takes time over the network.
All this means the delay from storage I/O might not be that impactful, if we have enough other work to do. Hurrah for large memory caches, multi-tasking and parallel programming!!
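The "enough other work to do" point is really just latency hiding. A quick sketch of the difference between blocking on one I/O at a time and overlapping many (the sleep is a stand-in for a storage wait; all numbers invented):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_read(i):
    time.sleep(0.05)  # pretend this is a 50ms storage I/O
    return i

start = time.time()
# Eight reads issued concurrently: total wall time is roughly one
# I/O's latency, not eight of them stacked serially (~0.4s).
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(slow_read, range(8)))
elapsed = time.time() - start
print(f"{elapsed:.2f}s for 8 overlapped reads")
```

Which is why, with enough concurrency and a decent cache, storage latency hurts far less than the raw numbers suggest.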
Chris, It's worth remembering that as the Dell and EMC organisations come together there will be a lot of R&D overlap too. Hardware may/will be simplified if all moved to Dell tech, but the combined company would have R&D lines for each of the storage products, each potentially running separately. If Dell could merge these and still do development work on each platform, perhaps they could support customer sales. However it seems from the outside to be more practical to rationalise platforms and move customers over to a smaller common set of hardware (where there is direct or almost 100% overlap, like VNX/Unity & PS) and save on the R&D costs.
Running a supermarket operation is great if you're not the manufacturer too; Walmart, Tesco and others don't care about the production costs, they just buy in and sell out at a profit. I suspect if the supermarkets were also producers, then they would have a different view.
Re: Nice, don't forget impact of DAS
Yaron, I agree there is now an explosion of platforms for storage deployment. This is clearly driving the "storage systems" purchases. I guess we could measure these through looking at the (licensing) revenue of the companies in each sector (where available). It could make interesting reading.
Re: servers sold with storage
Nate, there must be a cut off, however I guess historically, external arrays and servers shipped with more than a boot disk have been easy to measure. Same thing with analysts using the "all-flash" category, which requires systems cannot be upgraded with disk, because it makes the measuring easier.
Re: Measuring the technology cycle in action
Steve, I agree with you entirely here and that's the point I was getting to, amongst others. We've diversified from an industry that is pure external storage (arrays based on either FC, iSCSI, NFS etc) to a scenario where storage sold could be from the server downwards. How will VMware Virtual SAN sales be accounted for? How will software sales be measured? How will new products that have direct server attach be classified?
In the traditional business measure, some vendors are declining, some are doing well - is this actually accurate to their overall sales? We need new measures.
I agree, when did this happen? I see this being quoted more often and the numbers aren't as great as some flash devices. 550TB/year is only about 0.2 DWPD for the 8TB model. I wonder if the write limit came in with helium-filled drives as a way to cap warranty exposure, as those drives eventually leak their helium and seize up.
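For what it's worth, the 0.2 DWPD figure is easy to verify from the numbers quoted (using the usual 365-day-year simplification):

```python
# Drive-writes-per-day from a terabytes-written-per-year rating.
capacity_tb = 8                    # the 8TB model mentioned above
rated_writes_tb_per_year = 550     # the quoted 550TB/year limit

dwpd = rated_writes_tb_per_year / 365 / capacity_tb
print(f"{dwpd:.2f} DWPD")  # prints 0.19 DWPD, i.e. roughly the 0.2 quoted
```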
Nate, I don't think FlashSystem can do anything unique. As you say, sold by IBM in bundles with other solutions they sell. It will be interesting to see the announcement coming up in the next few weeks, as it will show us the level of commitment to the platform. The IBM storage business in general is heading down the tubes.
Re: Perhaps the worst article comment ever?
There is simply no way NetApp could or would have rewritten Data ONTAP entirely. The risk of introducing 100% new code into something that has already been battle-tested is too high to even think about. Besides, if 90% of the features are the same between versions, what benefit would a rewrite bring?
ONTAP 8 is not "fresh"; it's an evolved operating system with issues and benefits just like anything else that's been around for 20+ years. It still works in the same way, with legacy scale-out (node pairs) and a lack of scalability for FC over NAS. The issue is that it was never designed for flash and has had features retro-fitted into it to make flash work. Remember that at one stage NetApp claimed flash was only needed for caching, not as a tier.
SolidFire's SF series takes an approach that ONTAP can never do - proper scale-out (not node pairs) that can actively serve all data, not just act as a failover target. This fits the service provider niche that NetApp would love to be in, and have started to push with features like Cloud ONTAP.
I actually think NetApp should be applauded for taking a step to introduce decent new technology to the portfolio (unlike the half-baked EF series). The challenge will be whether the ONTAP bigots within the company can live with another platform or will try to kill it off like they usually do. I have my fingers crossed for the former rather than the latter.
Kinetic drives are clever, but....
There are a few points to consider here.
1. You still need a map/metadata to track the blocks that are in or out of use on a device. Nothing changes over traditional storage.
2. If the device (an HDD) makes decisions on where to locate objects, then you have no control over performance. Techniques like prefetching become unusable because you can't take advantage of head proximity or the position of data on a track.
3. Flash makes Kinetic drives obsolete and pointless, as any 4K block can be stored/retrieved with (typically) equal performance (barring device garbage collection).
Kinetic-style drives will only be useful when the drive itself has more of a degree of autonomy i.e. when the drive can replicate its own data to another drive without involving the host. I think that's a while away.
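For anyone who hasn't looked at them, the Kinetic model replaces block addressing with a key/value interface on the drive itself. A toy sketch of the idea (hypothetical names, deliberately nothing like the real Seagate API):

```python
class KineticDriveSketch:
    """Toy model of a key/value drive: the host stores objects by key
    and the drive decides physical placement on its own -- which is
    exactly why the host loses control of locality and prefetching
    (point 2 above)."""

    def __init__(self):
        self._store = {}  # internal placement is opaque to the host

    def put(self, key: bytes, value: bytes) -> None:
        self._store[key] = value

    def get(self, key: bytes) -> bytes:
        return self._store[key]

    def delete(self, key: bytes) -> None:
        del self._store[key]

drive = KineticDriveSketch()
drive.put(b"object-001", b"some data")
print(drive.get(b"object-001"))
```

Note there's still a map in there (point 1 above); it's just moved inside the drive rather than eliminated.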
The key message here is not the forklift upgrade. It's that a refresh of old hardware running 7-mode requires a data migration exercise that could be as easily achieved moving to a competitor as moving to cDot. So if you have to take the pain regardless, this is a point where customers will have nothing to lose by looking at the competition. It's the perfect time to move away from NetApp. None of this conversation is about re-using old hardware with cDot.
Where are the new products?
What is NetApp doing to evolve its business? Except for E-series and AltaVault (a tiny add-on solution) there's nothing new to see here. It's a continued focus on Data ONTAP, which still consists of two platforms, despite the deception to make it look like one (that's why customers aren't moving). By new products I mean things outside of traditional storage appliances. NetApp isn't changing and until they do, the company will continue in terminal decline.
Enterprises (the big ones) continue to buy from the likes of EMC/NetApp/HDS because of the support model. They know that if they spend enough annually, any P1 problem will be dealt with instantly (I know, I've experienced many). This level of support was needed when applications were monolithic "pets". As we gradually move to the web-scale era, traditional apps will be re-written and their storage will go with them to cheaper solutions, because the storage doesn't have to be 100% resilient. Of course not all apps will go this way, so some Tier 1 enterprise storage arrays will still be needed, but they will be the minority. The traditional technology is not going away (yet), but is in a period of attrition.
"Marketing is marketing and guys buy shampoo and fragrances, after-shave and skin grooming stuff now, as well as buying storage arrays."
I would suggest that many "guys" get their wives to buy their shampoo etc. Are we suggesting that partners are going to buy their arrays too?
What about the "gals" - surely they buy arrays as well?
:-) (tongue firmly in cheek)
I imagine the import routine puts the direct debit/standing order transactions into a single monolithic database that is latency/performance dependent on the server it runs on and/or the storage it uses. No doubt RBS have upped the server spec over the years and put the database on faster storage, but baulked at actually rewriting or modifying the app to use a distributed database. It's a scale-up problem.
Investment from RBS probably means "we'll buy more hardware" not invest in staff who know how to write the application for the modern world.
Figures Don't Add up
The numbers don't appear to make sense: the SSD array market (presumably this means the all-flash market) rose only 1%, yet EMC went from $74m to $444m, with plenty of other vendors increasing share? Am I misreading something?
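The scepticism is easy to quantify. Using only the figures in the article, for EMC's gain alone to fit inside 1% total growth, the market would have to be implausibly huge:

```python
emc_before, emc_after = 74, 444      # $m, figures quoted above
emc_gain = emc_after - emc_before    # $370m of growth from one vendor

# If the whole market grew only 1%, its total size must be at least:
implied_market = emc_gain / 0.01
print(f"${implied_market:,.0f}m")    # roughly $37,000m -- not credible for AFAs
```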
Again, excluding vendors that can ship traditional disk in an AFA makes no sense as the likely wider market is hybrid. It would be better to track flash as a percentage of the array capacity as an indicator of the success of the flash market.
Those who can, do, those who can't, write about it...
It's just another example of people writing about technology without having the actual experience of doing it. Their view is tainted by lack of knowledge.
Various languages were successful in their time due to more than just the elegance of the programming model. Most were massively constrained by the hardware they ran on. It's disingenuous to claim COBOL, FORTRAN or even BASIC is bad (especially for the use of GOTO) when a single Java VM can suck up more memory than was ever available to 1,000 or more machines running any of the above languages.
Java is an answer for laziness in programming brought about by the idea that O/S functional abstraction and cross platform code is better than writing for the O/S itself.
Re: Same architecture as others... what's different?
It's pretty simple; Flash has issues like finite writes & garbage collection that affect product lifetime & performance respectively. The better storage solutions (not SANs, because a SAN is a network) optimise the process of writing and reading from flash for performance and longevity. That's the main differentiating factor between Pure, 3PAR, Kaminario etc. Features (data efficiency, snapshots, thin provisioning, replication etc) are moving to be (to use an Americanism) table stakes. Everyone needs to have them to compete in the first place.
Efficient flash management means more predictable I/O, better product lifetime and so a more competitive price point.
Now you may think performance is a big deal but coming from a place of 5-20ms response time to < 1ms means applications will (initially) see such a performance benefit that most solutions will do the job. In time, performance will be an issue again, but it's not a big one now.
I have to say I'm struggling with what FlashRay was meant to be. The specs issued show the hardware as a "6U controller" and 2U disk shelf. What the hell is in 6U as a single controller? Now, it would be more understandable if the 6U represented dual controllers in a single node (with integrated disk), but that's not how it has been described.
From the tone of this article, it appears FlashRay may have always been a development exercise to create new software features to integrate into the existing platforms. If that's what happens, it represents a major mistake from NetApp. Continuing to develop a 20+ year old architecture is chasing an impossible dream.
Re: Why bother?
For most (if not all) enterprises, all-flash is overkill today and most people are not using flash for all of their data.
However, processor & DRAM speeds continue to increase so the gap between central processing and external storage continues to widen, as HDDs are not increasing their performance at all, and in fact are starting to slow down. The gap has to be filled by something; that something is flash. So although today we don't need all-flash, in 5-10 years we will need all-flash, complemented by even faster memory in the server.
I think Bennett is wrong to assume that we don't need persistent storage in the server (again), instead it's going to be about how it is implemented as applications evolve. Expect HDDs to eventually be used purely for archive and nothing else.
Re: Nice and slow!
Dimitris is correct: the issue is always the quality of that I/O. Imagine what happens when a component fails - does the Hitachi/VMAX/NetApp device stop servicing I/O while disks rebalance? How about the open source solutions? Even a one-second outage at 1M IOPS means a disaster for your application. All of these discussions are irrelevant if you can't manage failure and recovery situations with minimal impact. Until these problems are resolved, none of the open source solutions will be suitable for true enterprise applications.