Re: Or, and here's a counter-opinion...
Yeah all of that is typical no matter the size of the business you're supporting.
I used to look after an auction house that had offices and galleries in 8 or 9 countries at any given time, with around 1,000 staff members who would constantly be moving around the world between sites. The infrastructure for them was crazy complicated, and on top of the fixed infrastructure we had to be able to deploy a new temporary auction house at 48 hours' notice, including cabling, broadband installations etc.
The IT team was 3 people. One in the US, me in Europe and another chap in Asia. We were a great team and we never had to work insane hours because we kept things under control. We used the timezone differences to our advantage. I.e. any critical updates etc that needed to be done could be done out of hours by another chap in a different time zone.
Occasionally we'd have to work 3 or 4 days straight (literally, no sleep, or quick naps on the server room couch) if there was a major auction on, but we'd get the following week off no questions asked. We'd also get paid handsomely for the additional time and effort required. For an auction we'd get an extra £2,000 a day because it was understood that putting in the effort to support an auction round the clock was insanely difficult and potentially damaging to our health.
90% of the time it was less than 9 to 5 (because we had timezone overlaps), and usually only 4 days a week. Our team was left to be pretty much autonomous and we'd cover for each other to reduce our time commitments (we were acutely aware of the possibility of burnout), which the business was perfectly happy with as long as we kept things running smoothly and waded in full tilt if there was a situation that required it...which would happen from time to time...dodgy updates, failed hardware etc.
Basically, we made sure that when we were required for some sort of emergency or major problem we were as fresh as possible...that's half the job as a techie. If you're constantly burning the candle at both ends just doing regular maintenance, you're going to be pretty worthless during a serious issue and the recovery time for the business is going to suffer.
As a techie you're not paid to be busy, you're paid to be available when the shit hits the fan to ensure the time to recovery is as low as possible. You're not a factory worker, you're a trained specialist. Your value is in your expertise and ability to respond, not your time commitment. If you believe your salary is specifically calculated based on the hours you put in, you should be stacking shelves, not working in tech.
Put it this way: how can the business expect you to dash into the office at 2am in an emergency if they grind you into the ground for 10 hours a day? If I had to work a shift that long, would I get up at 2am? Hell fucking no.
Would I do it if I was permitted to set my own hours and do everything in my power to reduce my time commitment? Hell yes.
Would I cover for a techie colleague that needs a day off to ensure my team is always fresh? Damn right I would.
If you're finding yourself working daft shifts you need to be speaking to your manager and explaining how your time commitment might affect your ability to respond in an emergency. If you're overworked and burnt out you have no obligation to run to the office when the shit is hitting the fan. They can't fire you if you've been doing unpaid overtime and you refuse to respond...that's grounds for constructive dismissal.
Moreover, if your entire team is overworked with no slack, you need to re-assess your processes and work with your team to organise a less crazy setup. If you're all working the same shift at the same time for the same hours, you're all idiots. Spread yourselves out, overlap your shifts. If you aren't all actively working towards reducing your time commitments by supporting each other your team is toxic AF or you have a dickhead managing you that doesn't know what they're doing.
If you're having to manually review every update before you manually update every endpoint you're wasting time. Updates are 99% of the time not a technical task. What we used to do was appoint someone in a given department to update themselves first before telling everyone else to update. That way we weren't wasting time on proxy desktop updates.
For example, we had a design department responsible for the auction catalogues. Adobe heavy. The team leader would run updates on his machine first, if there were problems, we'd be informed and we would respond. If there were no issues, he'd instruct his team to run the updates.
Periodically, we'd check in on random teams at random times to ensure updates had been run etc.
Security updates and critical updates were always applied automatically via our central update servers. I can't remember the platform we used, it was 12 years ago now...but it would look after Windows Updates and various other product updates.
We always kept updated, pre-built images for the various departments' machines so that if there was a major issue we could swap a disk out and get them back up and running straight away without having to hang around their desks etc.
We'd take the disk with the broken install away and re-image it, check the SMART status and add it back to the pool...in some instances we'd replace the entire device if we could.
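If you wanted to automate the "is this disk fit to go back in the pool" check these days, it'd be something like this. A rough Python sketch only: in reality you'd feed it the actual output of `smartctl -A /dev/sdX`; the sample output and the pass/fail policy here are invented for illustration.

```python
# Rough sketch of the "check SMART before the disk goes back in the pool" step.
# In practice you'd capture the output of `smartctl -A /dev/sdX`; the sample
# below is invented so the example is self-contained.
SAMPLE_SMART_OUTPUT = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       2
"""

# Example policy: any non-zero raw value on these attributes keeps the disk
# out of the pool. Your own thresholds may differ.
WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def disk_ok_for_pool(smart_output: str) -> bool:
    """Return True if no watched attribute has a non-zero raw value."""
    for line in smart_output.splitlines():
        fields = line.split()
        # Attribute rows have 10 columns; the raw value is the last one.
        if len(fields) >= 10 and fields[1] in WATCHED:
            if int(fields[9]) != 0:
                return False
    return True
```

The sample disk above would fail the check (two pending sectors), which is exactly the sort of drive you don't want quietly going back into circulation.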
As for switches, routers etc...as mentioned earlier, done out of hours by someone in a different timezone. For the configs we had crib sheets in Excel that you could tweak to spit out a fresh config should something go wrong. In the rare event that something went horrendously wrong or the remote person got locked out etc we'd just nip into the office. No big deal if you're not overworked...then just get into work the next day an hour or two late if it was a late night or early morning call out, after dropping the third guy a note to let him know that calls might be diverted to him for an hour.
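The crib sheets themselves are long gone, but the idea is easy to replicate; today you'd probably swap Excel for a short script. A rough Python sketch of the same trick (the hostnames, template and spreadsheet columns below are made up for illustration, not our actual configs):

```python
import csv
import io

# Toy version of the "crib sheet" approach: one spreadsheet row per switch,
# one template per device role, and a script that spits out ready-to-paste
# configs. The CSV content and template are invented for illustration.
CRIB_SHEET = """hostname,mgmt_ip,vlan,uplink_port
gallery-sw1,10.10.1.2,110,24
gallery-sw2,10.10.1.3,110,24
"""

TEMPLATE = """hostname {hostname}
interface vlan {vlan}
 ip address {mgmt_ip} 255.255.255.0
interface GigabitEthernet0/{uplink_port}
 description uplink
 switchport mode trunk
"""

def render_configs(sheet: str) -> dict[str, str]:
    """Return a ready-to-paste config per device, keyed by hostname."""
    configs = {}
    for row in csv.DictReader(io.StringIO(sheet)):
        configs[row["hostname"]] = TEMPLATE.format(**row)
    return configs

if __name__ == "__main__":
    for name, cfg in render_configs(CRIB_SHEET).items():
        print(f"=== {name} ===")
        print(cfg)
```

The point isn't the tooling, it's that a dead switch at 3am becomes "paste the generated config into the replacement" rather than a from-memory reconstruction job.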
It's probably worth noting that we didn't have a manager or team leader; all three of us reported directly to the CEO or COO. There was no CTO, CISO etc. It just wasn't necessary.
We also did away with the ticketing system as we worked out that the benefits of the time saved outweighed the benefits of a historical log of issues...instead we kept a wiki that we'd add any processes to that we felt were worth knowing about for future issues. We also built a custom platform to keep tabs on each other's whereabouts...basically, we could check the system to see which building each of us was in, which member of staff we were with, how long we'd been there, and whether we were remote or off work etc.
To reduce the desktop support burden we'd do user training on a semi-regular basis (twice a month roughly), which we would also use to gather feedback on matters relating to desktop requirements, what's working, what isn't etc. The staff actually liked this and would usually take advantage of the hour or so allotted to discuss any major issues, which would then become a focus if necessary.
Basically, as a team our primary focus was the infrastructure and keeping that going (stability, security and so on), secondary was ensuring that we constantly reviewed our processes to keep them as streamlined as possible, tertiary was desktop support. We found that if we kept the users up to date with training, we'd reduce the desktop support burden significantly.
There's an enormous amount that can be done to streamline tech work, but a lot of tech departments are too rigid or set in their ways to change. You can usually pinpoint one or two team members who work hard to keep the status quo. Usually they're senior.