I can only imagine that the guy whose job I now have (after he didn't take backups, or care about THE - yes, the - server RAID failing) used to spend his days running around looking busy.
Because there was no computer imaging. Every machine was manually built and/or cloned from the one next to it. It caused no end of problems (not least that the clones were hundreds of GB because of cached local profiles and the like, but also things like one piece of software slowly propagating around the entire network by this method - and they didn't even take the machines off the domain in between, or sysprep them, or anything!). Every patch cable was cut to length and manually crimped. Every piece of software was reinstalled from scratch if it went wrong. Every PC got its own Windows Updates when it felt like it. And so on. The guy wasn't "managing a network"; he was effectively just running around 100 almost-identical PCs.
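For anyone wondering what sysprep buys you: it generalises the machine before you capture it, so clones don't all come up sharing the same identity. Here's a minimal sketch of a pre-clone step - the wrapper script is hypothetical, but the sysprep.exe path and flags are the standard Windows ones:

```python
import subprocess

# Sysprep strips machine-specific state (including the SID and domain
# membership) so a clone boots as a "new" computer instead of a
# duplicate of its neighbour. Path and flags are the Windows defaults.
SYSPREP = r"C:\Windows\System32\Sysprep\sysprep.exe"

def generalize_before_cloning() -> None:
    # /generalize removes machine-specific state, /oobe makes the clone
    # run first-boot setup, /shutdown powers off so the disk can be
    # captured cleanly. Must be run as administrator.
    subprocess.run([SYSPREP, "/generalize", "/oobe", "/shutdown"], check=True)

if __name__ == "__main__":
    generalize_before_cloning()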
One of the first things I did was put in a proper imaging system. Boot up, press F12 to PXE boot, select the image. Wait ten minutes and you had a working, domain-joined, licensed, clean computer with all the software you needed. The number of tickets dropped like a stone (and the actual first thing I did was put in a proper helpdesk system and force the users to use it for everything!). Even today - 2 years later - if something goes wrong, it's often quicker to pull out the hard drive, image onto a blank one, and then work out what went wrong with whatever's on the old disk. And because of that cleanliness - because ONE image has had 2 years x 100 machines x 500 users' worth of testing - the problems that do arise are far fewer.
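The other half of imaging is trusting the result. Something like the following can run on a freshly deployed machine as a smoke test - purely a hypothetical sketch, and the domain suffix and software list are placeholders, not anything from a particular deployment tool:

```python
import socket
import subprocess
import winreg

# Hypothetical post-image smoke test: run once on a freshly PXE-deployed
# Windows machine to confirm it matches what the golden image promises.

REQUIRED_SOFTWARE = ["7-Zip", "Google Chrome"]  # example names, site-specific
DOMAIN_SUFFIX = "corp.example.local"            # assumed AD DNS suffix

def domain_joined() -> bool:
    # A domain-joined machine's FQDN ends with the AD DNS suffix.
    return socket.getfqdn().lower().endswith(DOMAIN_SUFFIX)

def windows_activated() -> bool:
    # slmgr.vbs /xpr reports activation state; a licensed machine's
    # output includes "permanently activated".
    out = subprocess.run(
        ["cscript", "//nologo", r"C:\Windows\System32\slmgr.vbs", "/xpr"],
        capture_output=True, text=True,
    ).stdout
    return "permanently activated" in out.lower()

def installed_software() -> set[str]:
    # Display names from the per-machine uninstall registry key.
    names = set()
    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    )
    for i in range(winreg.QueryInfoKey(key)[0]):
        sub = winreg.OpenKey(key, winreg.EnumKey(key, i))
        try:
            names.add(winreg.QueryValueEx(sub, "DisplayName")[0])
        except OSError:
            pass  # some entries have no DisplayName
    return names

if __name__ == "__main__":
    have = installed_software()
    missing = [s for s in REQUIRED_SOFTWARE
               if not any(s in name for name in have)]
    print("domain joined:", domain_joined())
    print("activated:   ", windows_activated())
    print("missing apps:", missing or "none")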
Then I rewired his cabinets with proper patch cables. So much less hassle with disconnections, instantly.
Then I rejigged all the networking so that it was resilient, and enabled RSTP (I know!). No more random disconnections or network loops.
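If you're pushing RSTP to a stack of switches, it's the same principle: automate it. A hedged sketch using the netmiko library against Cisco-IOS-style kit - hostnames and credentials are placeholders, and other vendors spell the commands differently:

```python
from netmiko import ConnectHandler  # third-party: pip install netmiko

# Hypothetical sketch: enable rapid spanning tree on a set of
# Cisco-IOS-style access switches, so a looped patch cable gets
# blocked instead of taking the network down.

SWITCHES = ["sw-office-1", "sw-office-2", "sw-office-3"]  # placeholders

RSTP_COMMANDS = [
    "spanning-tree mode rapid-pvst",  # Cisco's per-VLAN rapid STP
]

for host in SWITCHES:
    conn = ConnectHandler(
        device_type="cisco_ios",
        host=host,
        username="admin",      # placeholder credentials
        password="changeme",
    )
    output = conn.send_config_set(RSTP_COMMANDS)
    conn.save_config()         # write running-config to startup
    conn.disconnect()
    print(f"{host}:\n{output}")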
Then I made my bosses run a proper power supply to the server cabinets. No more power cuts because someone plugged in a heater over Christmas.
And this month, my boss asked me if we intend to replace everything like we have in previous years. Er... why? It's working. "But things fail, etc." Yes. And we're resilient now. So when they do, we'll replace them as necessary and users won't notice in the meantime. "But client upgrades!" Are the clients slow? Are users complaining (more than normal, when they try to download 100 GB of files and it doesn't happen in a fraction of a second)? No. So why bother?
All I take away from the article, and from my own experience, is: do things properly and the problems go quiet. Be as wary of a full helpdesk as of one that has nothing on it at all. Across 150 machines, over two years, I've had about ten drive failures. Apart from that, almost nothing goes wrong with a bog-standard "office" system that you can't compensate for (and, hell, I could put SSDs or RAID into the clients if I really wanted to!). So long as you manage it properly and use the proper tools.
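On the drive failures: the trick is spotting them before the users do. A sketch of a periodic SMART sweep using smartctl (from the smartmontools package); the device list is site-specific and assumed here:

```python
import json
import subprocess

# Hypothetical sketch: sweep local disks with smartctl (smartmontools 7+)
# and flag anything whose overall SMART health check has failed, so the
# drive can be swapped and the machine reimaged before it dies in use.

DEVICES = ["/dev/sda", "/dev/sdb"]  # site-specific; enumerate as needed

def smart_healthy(device: str) -> bool:
    # smartctl -H runs the drive's overall health self-assessment;
    # -j asks for JSON output (supported since smartmontools 7.0).
    result = subprocess.run(
        ["smartctl", "-H", "-j", device],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout)
    return report.get("smart_status", {}).get("passed", False)

if __name__ == "__main__":
    for dev in DEVICES:
        status = "OK" if smart_healthy(dev) else "FAILING - replace soon"
        print(f"{dev}: {status}")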
And I don't even do cutting-edge junk. But the second I find myself doing a job twice, doing one that needs to be done exactly right, or doing one that takes a long time, I find a way to automate it. And then my apprentice knows he can just press F12 and - without the potential of losing any client data - get a working system back up in minutes.
This is hardly rocket science. If your company isn't doing it, they are basically creating work for themselves and you might want to ask why. Hell, even my previous workplace had the same, and I've done similar using Norton Ghost (as was) from a network boot.
And, with or without such a system, you shouldn't be seeing the same causes and kinds of failures over and over, forever. If you are, it means you're not fixing the actual problem.
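The cheap way to make "same failure forever" visible is to actually count it. A hypothetical sketch against a helpdesk CSV export - the column names are assumptions, so adjust to whatever your system spits out:

```python
import csv
from collections import Counter

# Hypothetical sketch: tally root-cause categories from a helpdesk CSV
# export so recurring failures stand out. Assumes the export has a
# "category" column; adjust to your helpdesk system's schema.

def recurring_causes(path: str, top_n: int = 10) -> list[tuple[str, int]]:
    with open(path, newline="") as f:
        counts = Counter(row["category"] for row in csv.DictReader(f))
    return counts.most_common(top_n)

if __name__ == "__main__":
    for category, count in recurring_causes("tickets_export.csv"):
        print(f"{count:4d}  {category}")
    # Anything that tops this list month after month isn't bad luck;
    # it's a problem that hasn't actually been fixed.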