
"Reusing five-year-old network string is a flogging offence"
Would that be with the CAT-6 of nine tails?
"Several things change when you decide to move from an in-house technology setup to a hybrid infrastructure. And if part of the move involves relocating services and applications from the on-premises installation into the cloud, one of those changes is that some equipment suddenly becomes underemployed. If you decided to make …"
THIS ^^^
Put the old tat in the bin and use new. It takes up less space, offers more compute/IO for the smaller footprint, is more power efficient, requires less cooling, and is more compatible with the latest o/s platforms.
The skip is the right place for Enterprise hardware > 4 years old.
For home use, however, old laptops/netbooks can be given an extended lease of life with a new battery and Linux Mint.
The big problem with re-purposing old kit to squeeze more life out of it is software licensing cost. From everything I'm seeing out there, utilising end-of-life kit looks great until you start loading legacy applications onto it, at which point you hit the per-core or per-socket licensing models.
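A back-of-the-envelope sketch of why (in Python; the per-core price and the core counts are made-up illustrations, not any vendor's actual pricing):

# Per-core licensing scales with core count, not with useful throughput,
# so an old many-core box bills more than a faster replacement.
# All figures below are illustrative assumptions, not vendor pricing.

CORE_LICENCE_PER_YEAR = 1000  # assumed annual cost per licensed core

old_kit = {"sockets": 4, "cores_per_socket": 8}   # 32 slow cores
new_kit = {"sockets": 1, "cores_per_socket": 16}  # 16 much faster cores

def annual_licence_cost(kit):
    return kit["sockets"] * kit["cores_per_socket"] * CORE_LICENCE_PER_YEAR

print(f"Old kit: {annual_licence_cost(old_kit):,} per year")  # 32,000
print(f"New kit: {annual_licence_cost(new_kit):,} per year")  # 16,000

The old box may have cost you nothing, but under per-core licensing it bills at twice the rate of the replacement for the same workload.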
The idea of re-using old kit is great, and it allows the replacement lifecycle to be extended, particularly for servers, turning a lift-and-replace scheme into a continuous and less disruptive cycle.
To me the ideal use case for this old kit is, as Dave points out, tape replacement, on-premises cloud for new-generation apps (avoiding traditional licensing schemas), and software-defined object storage / data repositories that let companies start using legacy data for analytics.
Licensing is a far bigger consideration for most companies than footprint or environmentals.
"You could even decide that now you have several terabytes of slow storage, it's time to throw away the old tape backup system and move to disk-to-disk."
Yeah, switching your backups to a bunch of ageing disks in an array approaching EOL, with no offsite/duplicate/archive option, is definitely the best idea.
Just ask KCL.
I can see that you don't know an awful lot about object stores and how they provide global protection against component failure, unlike tape and tape robots. I would not promote the use of legacy disk arrays, as they do have a finite life. Servers, however, are a totally different proposition: servers in conjunction with JBOD and a bit of intelligent software-defined storage make a lot of sense - until they invent the software-defined tape library, of course.
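For the avoidance of doubt, that "global protection" is erasure coding. A minimal sketch of the arithmetic, assuming an 8+4 shard geometry (an illustration, not any particular product's layout):

# Each object is split into k data shards plus m parity shards and spread
# across nodes; any k of the k+m shards can rebuild the object.
K_DATA, M_PARITY = 8, 4  # assumed geometry for illustration

total_shards = K_DATA + M_PARITY
tolerated_failures = M_PARITY        # lose any m shards/nodes and survive
overhead = total_shards / K_DATA     # raw bytes stored per usable byte

print(f"Shards per object: {total_shards}")
print(f"Failures survived: {tolerated_failures} simultaneous disk/node losses")
print(f"Raw overhead:      {overhead:.2f}x (vs. 2x for straight mirroring)")

Lose a disk, a node, even a shelf holding up to four shards, and the object is still rebuildable from what's left - which is rather more graceful than a snapped tape.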
With object storage you are moving to an environment that enables metadata searches, very fast retrieval of data to higher-performance storage, and the ability to push ultra-cold data out to public cloud to keep costs as low as possible. A bit quicker than sending it off to Iron Mountain, no?
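For the "push ultra-cold data out to public cloud" bit, this is roughly what it looks like as an S3 lifecycle rule via boto3 - the bucket name, prefix and 180-day threshold are all made-up assumptions:

# Transition objects under an assumed "cold/" prefix to Glacier after
# 180 days; names and thresholds are illustrative, not a recommendation.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",        # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-ultra-cold-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "cold/"},  # hypothetical prefix
            "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
        }]
    },
)

Once the rule is in place, the tiering happens on its own. No van to Iron Mountain required.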
"Older kit generally means cheap, commoditised RAM and perhaps even inexpensive expansion CPUs"
Bollocks - DDR2 and DDR3 kit is more expensive than DDR4 (DDR2 is ruinously so)
Older systems generally draw twice to three times as much power as replacement kit _for the same job_ (the new kit's absolute draw tends to be only slightly lower, but it is usually far more capable and has far more memory).
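Rough arithmetic, with made-up wattages, a made-up tariff and an assumed 3:1 consolidation ratio:

# Illustrative energy cost of old kit vs. a consolidated replacement.
HOURS_PER_YEAR = 24 * 365
TARIFF = 0.15          # assumed cost per kWh

old_watts = 600        # one aged box
new_watts = 550        # absolute draw only slightly lower...
consolidation = 3      # ...but one new box does the work of three old ones

old_cost = consolidation * old_watts * HOURS_PER_YEAR / 1000 * TARIFF
new_cost = new_watts * HOURS_PER_YEAR / 1000 * TARIFF

print(f"Three old boxes: {old_cost:,.0f} per year")  # ~2,365
print(f"One new box:     {new_cost:,.0f} per year")  # ~723

And that's before counting the matching reduction in cooling load.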
Apart from HDDs, other mechanical parts become problematic - fans in particular. Whilst I've had the luxury of being able to replace the bearings in the centrifugal fans in the PSUs of Xyratex F5404 disk arrays, many fans are completely non-serviceable and often bloody expensive to second-source. Most of what's on eBay as "refurbs" is actually "pulled from a working box, but condition unknown" and may be just as bad as what you already have on hand.
RAID batteries are another major issue - and depending on the type they can be _impossible_ to replace (e.g. Xyratex RAID controllers).
Finally - and quite dismally - Capacitor Plague is STILL with us. I've just had a pair of 3-year-old Intel Desktop Board-based systems decide to die thanks to bulging Nippon Chemi-Con KY series(*) caps around the CPU(**), and it looks like a bunch of other 3-5 year old server-class systems are affected too.
Whilst this pair were in a server-room temperature excursion (the thermal trips went off when the room hit 36C and the A/C went "off" thanks to some twonk wiring the system to the fire panel without bothering to inform us(***), then deciding to leave a false alarm sounding all night(****)), there are 40 more systems of the same type in offices that I now need to check for the same fault and probably write off 5 years early(*****).
(*) This is supposedly a reliable brand, but I've discovered that the same brand and series of caps has failed in several Sunpower-manufactured Overland tape robot PSUs. It appears Chemi-Con "has serious QA problems".
(**) Not bulging much, but the obvious sign was that they'd pushed their bungs out and were sitting at odd angles to the board. One in an HP-branded Sunpower PSU had completely blown the case off the bung and blown its guts all over the board.
(***) There's no need to have sealed-room recirculating AC systems shut off when a fire alarm goes off unless there's an actual honest-to-god fire IN THAT ROOM - and in this case the alarm wasn't even in the same building. The AC systems are set up with 3-way redundancy; having someone deliberately and simultaneously shut down power to ALL the AC systems without killing equipment power wasn't on the radar.
(****) When the alarm was confirmed as a falsie, the building services people refused to give the fire brigade and security callout staff the codes to disable the faulty sector (which was in another building on the site), leaving the alarm sounding all night.
(*****) The outfit tends to run desktop systems into the ground, as that's the way budgets are allocated. The worst part is that these systems appeared to be working after the event, with "odd" results occurring on some operations depending on the load the CPU presented to the now hopelessly compromised power supply rails. Telling people they're going to have to find money to replace systems early, and that if they don't "we can't guarantee the accuracy of their computing activities", isn't going to go down at all well (gee, thanks Intel). Having to pull and visually/electrically inspect everything is a massive time sink, with labour costs vastly in excess of the value of the equipment, but replacing "on spec" is something that just isn't going to happen.