I say it's plausible.
I still blame them for not testing before the blackouts happened.
It's one thing to not upset your network for no good reason, but knowing that it's going to happen and not making sure the failover works is not acceptable.
"Expect the unexpected" is a cliché regularly trotted out during disaster planning. But how far should those plans go? Welcome to an episode of Who, Me? where a reader finds an entirely new failure mode. Today's tale comes from "Brian" (not his name) and is set during a period when the US state of California was facing rolling …
Tangent: I've always wondered why cable-type elevators (as opposed to hydraulic or geared or whatever other mechanisms might be used in practice) even have emergency brakes.
It seems like it'd be easy to construct a car that's sufficiently unbalanced that regardless of the contents, it would still wedge itself in the shaft if there's no tension on the cable. Presto: cable failure, car just stops where it is, or at worst drops only a fraction of its own length.
For extra safety, run a toothed post the length of the shaft, to catch a bottom edge of the car if it's not held plumb by the tensioned cable.
Brakes just seem like the harder, more complicated way to solve the problem, when simple geometry can do the job. Presumably I'm missing something.
Any elevator experts around?
Physically damaged by what process? The threat model here is the lifting mechanism fails, not the Incredible Hulk gets impatient and tries to punch his way out.
And what would cause the teeth to shear off? The car deviates from plumb as soon as the cable is no longer tensioned. The car doesn't have an opportunity to acquire any more momentum than what it has at maximum load and normal-descent speed, and it's trivial to design for that (plus margin).
Anyway, forget I asked. The responses are clearly not relevant.
Wear and tear. Elevator cars tend to run for decades. They'll probably get bumped, jostled, dinged. As for the teeth, think Murphy. It may not tilt off plumb right away, plus the teeth or the corner could have flaws (the corner could be dented so it doesn't catch cleanly, or a tooth could have a metallurgical flaw that creates a weak point and allows it to snap).
Here's the innovation (of 1852) that Otis is touting.
If your "unbalanced" elevator car is designed to be lifted straight up or dropped straight down on its cable all day, what stops it from dropping straight down when the cable is parted from it? Air resistance aside, everything falls down at the same acceleration.
Maybe the part I'm missing is where your elevator is the go-sideways kind, or what I want to call a very steep railway. Apparently the correct term is "inclined elevator". I'll let you look up what's special about a "funicular" if you want to.
Personally, I say having the floor of the elevator be level is worth the risk. What if everyone gets coffee in the morning and then you board the inclined elevator with it, all leaning sideways. The spillage!
Apparently you misunderstand the proposal. The car is unbalanced. It's not tilted (off plumb) in normal operation because of the tension on the cable. Release the tension, then it tilts.
This is trivial to model, so I'm not sure why there appears to be so much confusion. Forget it.
If the force that holds the elevator up is removed, what force causes it and the people in it to do anything but proceed straight downwards? Not the weight of your feet, for one.
https://en.wikipedia.org/wiki/List_of_elevator_accidents mostly refers to mining and construction projects. Possibly they are not equipped with the Otis safety device, which has certain specific requirements. The list also is in order of decreasing interest / death toll, so the top cases tend to be big people carriers like in "Metropolis".
"If the force that holds the elevator up is removed, what force causes it and the people in it to do anything but proceed straight downwards?"
The imbalanced weight distribution of the car coupled with the air resistance of the shaft (which is magnified if the shaft is closed, as the air has less ability to move around the car). There's a reason many objects, especially those with a low weight-to-surface-area ratio, tend to tumble while free-falling.
So... your lift car is shaped like an aeroplane wing. As it plummets, the, um, aero force steers it to slam into the side of the shaft, and stop, hard.
If I recall right, modern racing cars use a similar design to press the car against the ground while it is driven. Air pressure.
Racing cars are not slowed down notably by this feature.
Edit: by "weight", do you mean mass? Are you considering the difference?
There wouldn't be patents on perpetual motion machines if people didn't deceive themselves in physics.
Doing that would require some additional length on the car to allow for the airfoils needed to create the aerodynamic forces you describe, so that the car could maneuver reliably in freefall. That's how airplanes lift themselves and, like you said, how racecars produce the downforce needed to keep their tires on the track (though, on the latter point, it should be noted that downforce does slow the car down, as it increases the friction of the tires against the road; the tradeoff between speed and downforce is just one of many variables modern race crews carefully tune during a race).
I am aware of the difference between mass and weight. However, as this scenario specifically invokes gravity, the force that applies to the mass, weight is very much in play, as is air resistance, which produces a counterforce to the weight. Your airfoils proposal simply exploits the air resistance in specific ways.
"There wouldn't be patents on perpetual motion machines if people didn't deceive themselves in physics."
Another reason for backup power failures I know of is they test the generator by having it start and run while the system remains on mains power.
Yes, you just know they don't actually cut the supply and verify that everything required switches over to backup, and that the switch-over doesn't produce such a surge that it stalls the engine.
Oh, and let us not forget the other deal-killer that follows from that sort of "test": not everything that is needed is actually powered by the backup supply. For example the A/C (yes, really!), or a couple of network switches around the building whose local UPS comes off a non-switched branch of the power system.
Back when I worked for Bigger Blue (late 1970s), on Fabian Way in Palo Alto, we'd kill the mains power at 3PM on the last Friday of every month to ensure that the battery would carry the load long enough for the genset to warm up enough to take over. It ain't exactly rocket science. (In the event of failure, everyone went home early with two hours' pay. It never happened while I worked there.)
Keep in mind that this system ran the entire campus, not just the computers. (OK, nearly the entire campus ... The fine folks in the Faraday Cage over on East Meadow Circle and across Adobe Creek on West Bayshore had their own solutions ...).
A certain software vendor in the vicinity used to do this too... it's one of those switchovers that went wrong spectacularly in that they discovered then that their 'independent power supplies' were independent between their property and the local substation, not independent to two different substations as expected/designed. That was apparently corrected eventually, but it cost them around 2 weeks' work thanks to some badly-designed interior engineering decisions that hadn't survived the unexpected shutdown.
Like everyone else says, if you haven't tested it, it's not fine.
@Charles 9 - Then that crew member has to find a plausible failure mode they can trigger that's not in the plan, resulting in it being investigated and added to the plan. Sounds like a win.
Bit less of a win if they find such a failure mode and decide to save it for the in-laws' next visit...
In reality, the only way to fully test the setup is to actually throw the big switch and turn off the power - while crossing your fingers and praying.
But often people are a bit reluctant to do that - can't imagine why. So as you say, they fire up the genny, run it with no load for a bit, then shut it down. Even if you want to do a manual changeover, many are installed in such a way that you can't do a no-break switch.
Result ? When you need the thing in anger, you find it can't support the load, you've missed essential bits that aren't genny supplied (such as bulk fuel tanks in the basement and the lift pumps aren't run from the genny), the starter battery is knackered but normally it's on a float charger so it can manage to turn the genny over, or the cooling system can't cope under load, or the favourite one ... the fuel has gone off in one of several ways.
If you add syncing and paralleling capability then you can start up the genny and take over the full load without a break (or even export some power) and thus fully load the genny and use up some fuel.
There are actually outfits that will add remote control to your genny (I assume it has to be at least a certain size before they're interested) so that they can pool your capacity and sell it to the grid as STOR (Short Term Operating Reserve) - capacity that can be called upon at very short notice to cover for grid-scale "manure on the fan" situations such as a large station tripping out. As a side effect, it'll use up some of your diesel if called upon, so you have less of a problem with old fuel.
I visited a switchroom where the control panel was covered in pigeon poop. It was deep doh-dah on the top, the front had solidified 'poop-falls'. The floor was crunchy, the air was acidic, the steel enclosure was rotting away. Truly disgusting.
Incredibly, some of the machinery was still operational. Any further fault investigation was going to have to wait for a deep, deep clean or probably shot-blast.
We were never called back. Entry was forbidden due to a bio-hazard.
Many years ago I worked with one person* who parked his car in a barn/outbuilding at his parents' home, which was full of seagulls. The poop accumulated on his car roof to the extent that it looked like an albatross had shat on it & resembled a shark's fin on the roof.
*Whom I describe as the physical embodiment of Mr Bean (except that on screen you're spared the pleasure of his lack of personal hygiene).
At a previous employer, who shall remain unnamed lest they read this site, recognize me, & come gunning for my arse, one of my managers would occasionally park under a certain tree to take advantage of the shade it provided on particularly hot Summer days.
One of my coworkers & I felt the need to express our displeasure over some of his more rage-inducing management habits, so we started looking about for creative, nonviolent, but extremely disgusting forms of potential mischief.
Cue the tree. And a jar of honey. And a bag of birdseed. And an evening/early morning spent climbing said tree, coating the top sides of the branches over his favorite spot in honey, sprinkling it in birdseed, & thus finishing a bird attracting lure to roost over where his car would hopefully be the next day.
At first nothing happened & no birds came to eat, but we kept refreshing the honey+birdseed lure every day (night/morning) for a week until they noticed.
Oh. My. Feathery. Gods! The birds swarmed the tree like it was straight out of Hitchcock's movie. But only on the branches with the lure, leaving it looking like God had pointed a finger & commanded "Thou Shalt Not Perch Here", and thus it was so.
The amount of poop that entire flock generated, then deposited over his car, well let's just say it was mountainous.
Which, if you think about it, explained why he called a tow company to drag his car home rather than drive it himself... something about convertibles with their tops down, a mini mountain of bird shit, & a Summer's day with temperatures around 100 °F (37.8 °C). Can you say "baked-on crunchy shell"? =-)p
His management style didn't change, but he also stopped driving a convertible to work.
I can *easily* envision the destruction unmonitored bird excrement could do to electronics/electrical components/etc., if what it did to that car interior is any indication... *Comical cat gagging noises*
RE: "Fundamental backup gear like generators should be inspected and tested monthly at least."
This is true, it should.
However, there is often a difference between what should happen and what does. You *should* have tested and verified backups, but I've still seen companies that have been badly stung by a major system failure, where the backups they thought were being done every night had stopped working months ago, and no one had thought to check.
One of my old managers was a solutions architect, and dealt with a lot of service-improvement projects. Modernising, scaling up, etc.
One of the first things he tended to do was set up a workshop with appropriate SMEs, managers etc for an application, and ask a few questions. He always thought it was better to talk to the people who dealt with the systems day-to-day, than read some dry documentation.
One of the questions, amongst many was always:
Him: "Does the system have a backup process?"
Them: "Yes, of course."
Him: "Ah, good, when was the last time the recovery process was tested?"
If it was longer than 6 months, or never (which did happen!):
Him: "So that's a 'No' to backups".
This is what astonishes me most. The movie that gloriously enriched English culture with that sketch/joke (and many, many others) is now more than 45 years old, and it's still as funny now as it was then.
I am eternally grateful for having grown up in that time, especially when an alternative group with Rowan Atkinson then also started to take the absolute mick out of them. Beautiful.
Apologies for the digression :).
I've said this before, but sometimes testing doesn't make any difference.
Full online UPS with genny backup. Every week the genny is tested. Every month a full cutover test is run. Every test works perfectly.
So, when the power actually failed, do you think the genny would start? Not a sniff despite the increasingly frantic efforts of facilities as the UPS Death Clock counted down. Say hello to a full DC outage.
Turned out that the genny has an exhaust pipe. Said pipe has a flap on the top, which is blown open by the exhaust and closed by gravity. The whole exhaust arrangement is outdoors and made of mild steel. During the last test, the flap had opened but, due to repeated testing burning off any finish and subsequent corrosion, it had stuck open. Rain had fallen, gone down the pipe and entered the engine via the exhaust manifold, preventing it starting.
That took a while to figure out so, courtesy of that time-honoured process "this must never happen again", before the post mortem was complete a second genny went in.
The next time the power went, both gennies started and the overload blew the power switchgear...
Very important site. Must have power available.
Two diesel generators for back-up to mains supply: auto-start/stop on demand, peak-lopping, self-synchronising, load-sharing, start on mains fail, back-synchronise to grid on mains restoration, back to auto-control.
Day before expiry of the 2-year warranty, the customer's representative (the council) asks: "How does this all work then?". He'd been promoted from traffic-light maestro.
generator [ jen-uh-rey-ter ]
1. A device used to generate back-up power, that will work flawlessly through any testing, but will fail instantly when actually required.
(see also UPS, Spare cables (always the wrong one), and any equipment featuring the word Back-up in the title)
At a previous gig, our Facilities team decided to test the generator and automatic failover by cutting the mains power. So confident they were, that they did this while the systems were live, without informing the IT team.
Of course, a 'bubble in the fuel line' meant that the generator did not start; the UPS only had enough juice for 15 minutes; and they didn't immediately switch the mains back on as they didn't know the UPS was for such a short time.
Power failed; systems crashed; and many brave disks lost their lives that day.
It was also discovered that the A/C was reliant on mains power only, contrary to the IT department's specified requirements. Since the datacentre had recently been moved to an underground bunker with no natural ventilation, this was also a problem...
Oh, definitely plausible.
I'm sure I've told this one before, but it bears repeating. One morning, some time in the mid-1990s, I arrived at a major US DoD computing center to do some software installation and configuration. I found the staff lounging around in the break room, lit only by daylight coming in through the windows. No HVAC either; fortunately it was a fairly cool day.
The facilities people had chosen that day to test their backup generators. They cut mains power and the generators refused to start — eventually the word was that the fuel was contaminated. And then they couldn't get mains power back on either. Electricians had been called but weren't on site yet, and of course bringing power up to a major campus generally isn't a matter of just flicking a switch.
Power came back sometime in the afternoon, and we were able to get through what I had scheduled before heading out for a moderately late dinner. But most of the day was spent loafing about and swapping old IT war stories.
So be prepared for that failover test to fail, and try to avoid doing it when you have people visiting from offsite, I guess.
In the early sixties, when the grid was not as robust as it nowadays is in central Europe, a mishap of this sort happened, only on a much larger scale. It was a very, very cold winter and every inch... ounce... drop... er, SI unit of electricity was needed. In those days it was not uncommon for factories, streetcar operators and the like to run their own insular electrical systems, which could be connected to the mains grid when necessary. All coordination was done by telephone.

It was at the steel works where my grandfather worked that operations received a phone call pleading for their "phase-in" - the steel works' power plant first needed to be phase-synchronized with the grid, which most of the time it was not, before it could be connected as an emergency source of electricity. This phase-in was to be the first one after some refurbishment works on the main transformer. The phase-in would be done using a bulb connected between the steel works' mains and the grid mains: as soon as the bulb went completely cold and dark, the phases matched. Only, this would be done with one bulb only, as the technicians usually saved time and effort by not checking the other phases. There was no way the other two phases would be out of sync when one phase was synchronized. Or so they thought - and you know very well where this is going.

The bulb went dark, another phone call dutifully came in, the senior technician pulled the lever, and the whole steel works together with the whole surrounding area went completely dark and silent, except for the glowing hot steel cooling slowly inside the production line and the aftershocks of the generator trying to reverse-step its waltz, now dancing around the generator hall.
For the next phase-in, after the whole power generation station had been duly rebuilt, three new bulbs were procured, for the old one was pronounced unreliable by the management.
Normally manual syncing would be done with the three-lamp method.
One lamp (the white one IIRC from my uni days) is connected as you say - when in phase it goes out.
The other two (red IIRC) were connected cross-phase - so one lamp between L2 on the genny side and L3 on the supply side, the other one L3 to L2.
When in-sync, the two red lights will be equal brightness, the white one goes out - it's easier to judge when the two lamps are equal brightness than it is to reliably judge when the one lamp has gone out. On the experiment board in our machines lab, there was also a solenoid that popped out and physically prevented closing of the switch should any student decide to try the "so what if we ..." experiment !
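If it helps to see why the white lamp goes dark while the two cross-phase red lamps stay equally bright, here's a rough phasor sketch in plain Python. The 240 V phase-neutral figure and the exact lamp wiring are my assumptions based on the description above, not anything from the lab board:

```python
import cmath
import math

V = 240.0  # assumed phase-neutral voltage, volts RMS

def lamp_voltages(theta_deg):
    """Voltage across each sync lamp when the genny leads the grid by theta degrees."""
    # Generator and supply phase voltages as phasors (L1, L2, L3), 120 degrees apart
    gen = [V * cmath.exp(1j * math.radians(theta_deg - 120 * k)) for k in range(3)]
    sup = [V * cmath.exp(1j * math.radians(-120 * k)) for k in range(3)]
    white = abs(gen[0] - sup[0])   # L1 genny to L1 supply (same phase)
    red_a = abs(gen[1] - sup[2])   # L2 genny to L3 supply (cross-phase)
    red_b = abs(gen[2] - sup[1])   # L3 genny to L2 supply (cross-phase)
    return white, red_a, red_b

# In sync: white lamp sees 0 V (dark); each red lamp sees sqrt(3) * 240 V,
# so the two reds are equally bright.
print(lamp_voltages(0))
```

Sweep theta from 0 to 360 and you can watch the white lamp brighten and the reds diverge, which is exactly the asymmetry that makes "equal red brightness" easier to judge than "white fully out".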
These days it's all done by a small box of electronics.
At a place where I worked as a draughtsman, the synchronizing was performed manually using three voltmeters, one per phase, between our output and the grid incomer. Only when all three meters were reading zero would the contactor be closed, and the lead operator (Sam) was so adept that he could go from engine start to synchronized in less than a minute. When the V12 diesel was running, a trip to the loo was like being on a ship at sea; the whole building used to shake because the engine was in a lean-to built against the wall where the bogs were located.
During the great north-east blackout of 1965 I saw the three light bulbs at the college power plant blinking slowly in sync as the college generator's phase drifted relative to the power company's. Since the entire power grid was unstable, the electric company wanted the college generator off-line, so no effort was spent on synchronizing the generator.
One horrifying experience was watching the street lights pulse as various generators on the power company's grid slipped phase.
The computer center had a magnetic drum that had to be kept spinning or the temperature gradients across the drum would distort its shape resulting in a head crash. Every time the building lost power the drum would blow a fuse in one of the phases but kept spinning by running on a single phase. Investigation showed that the generator and the mains power were rotating in opposite directions.
I saw this in slightly more spectacular fashion on an exploration oil rig. This was a while back (think 80386 days), and we were installing a system which took inputs like anchor tension and wind and then managed the anchor winches to maintain rig position. A fellow provider had nearly finished setting up their system, which was to provide us with the environmental data we needed.
He started up their system, and after the usual disk spinup sounds and boot sequence the relays of the A/D card clicked in - at which point the thing gave a Hollywood-worthy flashbang and started smoking. Whatever blew up must have had some serious power because the case was now bulging and this was in the days when men wore moustaches (yes, guilty) and cases were made from real steel.
Tracing back the wiring showed the electricians had made a mistake: one of the mA loops they were supposed to receive was actually wired for generator voltage sensing. In other words, it got hundreds of volts where it didn't expect them, and responded accordingly.
In all my years in IT I have not seen any other system blow up like that, other than in movies.
Telco satellite ground link. Sudden power loss. Back EMF blew a couple largish transistors. Turned the steel caps inside out, and fired 'em through the (admittedly thin) steel case of the rack mounted equipment, and then through the acoustical tile ceiling. I was in the next room, I jumped about a foot and looked around in time to see a small trickle of dust falling from the hanging ceiling. Went next door and discovered three or four satellite techs with dazed looks on their faces. Nobody hurt, thankfully. The next day I remembered the dust falling, and retrieved one of the caps from above my office. Seems it bounced off the underside of the roof and still had enough force to shift the tile when it landed. I carried it in my pocket for a couple years as a curio, and still have it in my trophy case.
Re missing parts after an eruption... Working at a medium-energy particle physics facility in CH (so not that one...), we were filling a superconducting solenoid with liquid helium, using a newly-built transfer line.
Such lines consist of a small stainless-steel capillary enclosed in a larger casing, which is evacuated of all air so that the capillary won't be heated by gaseous convection -- a vacuum (or Dewar) flask in essence. This particular transfer line, custom made for the unique relationship between the LHe container and the location of the magnet, had a right-angle bend, where the outer section was made of two intersecting tubes, and the larger tube (the horizontal one) was sealed off by a disk (everything was stainless. I forget if the disk was welded or brazed into position).
Midway through the fill process, there was a sudden ear-splitting noise and it quickly became evident that the supposedly sealed disk had gone AWOL. Reconstruction via thought experiment was that there was a leak somewhere in the LHe capillary and the supposed vacuum started to fill with liquid. Which inevitably turned to gas. Which considerably increased the pressure inside the transfer line. Which inevitably led to a failure of the endcap disk. Where it ended up, I never knew, just a small fresh gouge in the concrete radiation shield block opposite.
By the grace of $DEITY, even though we moved around the area whilst filling, both my colleague and I were on opposite sides of the trajectory when the disk went feral. Bit of a hard one to write up in an incident report...
A friend of mine used to work for a large electrical manufacturer of radio and radar equipment. He was working on the horizontal stabilization of the radar platform for the Navy, and had a system breadboarded on the bench in his lab. It was supplied with 400V DC from a motor/generator down in the basement, when he had a miscommutation event. Every capacitor on his breadboard spewed its guts across the desk, a huge cloud of magic smoke erupted, and everything connected to the DC supply went ominously dark. A few minutes later the phone rang: it was the Chief Electrical Engineer, demanding to know who had crowbarred the 400V. Apparently the DC generator had locked up, ripped itself off its pedestal, and was now rotating bodily, supported by the AC motor's shaft, flinging bits of itself in all directions, including towards the control panel, which gave up the ghost and tripped the 415V AC incomer. He was instructed to protect the DC supply by installing water fuses before performing any further development on the platform.
Phoned a mate of mine this morning; he used to work at the same place referred to in my initial post. He explained that a water fuse is an extremely fast-blow type of fuse, where the fuse wire is grossly under-specced for its duty but is immersed in water to stop it from reaching a temperature which would cause it to blow. If the temperature of the wire exceeds 100°C, a skin of insulating gas forms around the wire and allows rapid thermal runaway, leading to an instantaneous rupture, and therefore protects the circuit. For example, a 100A fuse wire will carry over 500A if immersed in water (de-ionised is best), but will blow the moment the insulating layer forms at (say) 510A. The 100A wire is now uncooled, and is carrying over five times its rated current, so - BOOM!
Similar thing happened to me in the UK some decades ago, but to an entire house.
Been a teleworker for most of my career, so I was happily working away at home when the magic smoke suddenly appeared from every TV in the house (they weren't even turned on), everything electrical ceased to function, and the entire house was literally humming.
Cue desperate sprint downstairs to throw the main power switch, thankfully before the whole house wiring had melted down. Narrowly avoided the need for a change of underwear as well.
Turns out that water ingress into a junction box in the road led to next door's live phase of the mains being connected to the neutral of our house. So every circuit in the house must have ramped up towards 415 V, and its arrival via the neutral seemed to bypass any fuses - this was in the days when the house still had an old-fashioned fusebox, none of that ELCB frippery :-)
Had to replace every electrical item in the house if I recall correctly, but at least the house didn't burn down - not sure how long it would have soaked up the over-voltage if I'd not been home.
That's actually a common problem - though your description isn't quite accurate.
What actually happens is that the distribution network has three phases which all share the same neutral wire. If the loads on the 3 phases are all the same, then the neutral carries no current - the worst case* is that only one phase is loaded in which case the neutral carries the same current as that phase.
If that shared neutral connection is broken, then downstream of the break, the relative voltage between any phase and what should be the neutral is determined by the difference in load on the phases. Whichever phase has the largest connected load will pull the neutral its way and see a reduced voltage - while the other phases will see increased voltages. 415V would be rare, but certainly well above the nominal 240V is likely.
If you are struggling to follow, draw yourself an equilateral (3 sides all the same length) triangle. The 3 corners represent the 3 phases, and the sides represent the phase-phase voltage (415V for us). Put a dot in the middle to represent the neutral; the lines from the corners to the dot represent the phase-neutral voltage at each house - nominally 240V. Now imagine that dot ceases to be anchored in the middle, but is instead pulled three ways by the loads on the 3 phases - the phase with the highest load will pull it closer, and that makes it further (= higher voltage) from the other 2 corners to the floating neutral.
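That triangle picture can be put into numbers with a quick phasor sketch (this is Millman's theorem, with the house loads modelled as simple admittances; the 240 V figure and the load ratios are illustrative assumptions, not from the incident described):

```python
import cmath
import math

V = 240.0  # nominal phase-neutral voltage

# The three phase voltages as phasors, 120 degrees apart
PHASES = [V * cmath.exp(1j * math.radians(-120 * k)) for k in range(3)]

def phase_neutral_voltages(admittances):
    """Phase-neutral voltages seen on each phase when the supply neutral is broken.

    With the neutral floating, its voltage settles at the admittance-weighted
    average of the phase voltages (Millman's theorem), so the heavily loaded
    phase "pulls" the neutral towards its corner of the triangle.
    """
    vn = sum(v * y for v, y in zip(PHASES, admittances)) / sum(admittances)
    return [abs(v - vn) for v in PHASES]

print(phase_neutral_voltages([1, 1, 1]))   # balanced: 240 V on every phase
print(phase_neutral_voltages([10, 1, 1]))  # heavy load on L1: L1 sags, L2/L3 soar
```

With a 10:1:1 load split, the heavily loaded phase drops to around 60 V while the other two rise past 360 V, which is exactly the "well above nominal but short of 415 V" behaviour described above.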
Causes include physical failures/damage to a cable or joint - or [insert your own expletive] people stealing the neutral link at the local substation to weigh it in for a few quid, oblivious (and uncaring) of the havoc their action wreaks. Example, example, and think yourself lucky if it only blows up your TV etc.
And there's a closely related issue where everything that should be earthed in your house becomes live, and the resulting currents in earth cables or services like gas pipes can cause fires (probably what happened in the third of those examples above).
* Technically, in the presence of lots of harmonics, the neutral current can exceed the phase currents as 3rd harmonic currents don't cancel in the same way as the fundamental 50/60Hz - but that's not relevant to this post.
I have actually seen a circuit where some moron had come up with the idea to put a fuse in the neutral line of a 3 phase service.
And yes, the reason I know this is because I was called in to work out why radios had blown up, which the above post already explains - other than that in 3 phase the signals are 120º out of phase, so you get approx 380VAC across two phases. An electric heater on one phase and the aforementioned radio on the other - the moment the neutral fuse pops, the radio briefly changes into a breadless toaster and either dies or catches fire. Thankfully it at least opted for Valhalla without the flames.
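The ~380V figure is just the phasor difference between two phases 120º apart; a two-line check (assuming the 220 V phase-neutral supply common at the time):

```python
import cmath
import math

V = 220.0  # assumed phase-neutral voltage of the era
v1 = V * cmath.exp(1j * 0)                      # phase L1
v2 = V * cmath.exp(-1j * math.radians(120))     # phase L2, 120 degrees behind

# Phase-to-phase voltage is sqrt(3) times phase-to-neutral
print(abs(v1 - v2))        # ~381 V
print(V * math.sqrt(3))    # same number, computed directly
```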
I merely pointed at the fuse and told them to get a proper electrician in to go over ALL of it - one bit of stupidity tends to be a warning that there's almost certainly more.
Fair enough, thanks for the details - that was how the guy from the electricity board explained it to me as a member of joe public. Just before they decided that they weren't going to compensate us for all the broken stuff as they had originally promised.
One thing I do know - having your own house do an impression of the Tardis under load while you're inside it is an experience that I wouldn't like to repeat!
In one case, the generator ran exactly as planned during every test run, except there was one small issue that was not detected. The fuel feed was for some reason fitted with a multi-way tap. This allowed the engine to draw fuel from a local header tank or the main tank. The reason for this set up escapes me at the moment. Of course, all those test runs had not exactly refilled the small header tank. When the mains power went on holiday, the backup generator kicked in, ran for a few moments before spluttering to a halt. Diesel engines do not like being run dry of fuel, so even when the problem tap setting was found, it took a while to prime the engine and restore matters.
One crisp winter morning my Trabant expired a minute after leaving town with a boot full of groceries. As luck would have it this happened within coasting distance of a bus stop so I was able to get the car safely off the road. After checking that the fuel tap was open (it was) I popped the bonnet so I could check how much fuel was in the tank.
I unscrewed the filler cap and heard a sound like the one you get when you open a can of pop as air rushed into the tank. The fuel tank on a Trabant is vented through a tiny hole in the filler cap which on inspection was plugged by a tiny piece of ice.
As the fuel tank is mounted above the engine it is subjected to large temperature changes. In winter any condensation in the tank (after melting) will try to escape via the vent hole in the filler cap as the tank warms up. When you start driving cold air is forced under the bonnet and over the filler cap which can turn water in the vent hole back into ice.
I normally have a pin for adjusting windscreen washers stuck in the windscreen rubber on the inside of the driver's windscreen pillar. It turned out to be just the right tool for de-icing Trabant petrol cap vent holes.
"This allowed the engine to draw fuel from a local header tank or the main tank."
At bigger blue, the second smaller tank was to provide enough fuel to run the system should a power failure occur right in the middle of the twice-yearly flush & fill of the main tank. The "small" tank was only 300 gallons, the large tank was in the 25,000 gallon range (both numbers from memory). We had a small fleet of trucks in to pump the old fuel out, and then the new fuel in. Took pretty much all day.
Most modern diesel generators have self-bleeding fuel systems. Thankfully.
This allowed the engine to draw fuel from a local header tank or the main tank. The reason for this set up escapes me at the moment.
It's a backup for when the main tank is drained for cleaning, when water is removed or when it is being filled. When you mess with fuel in any way you have to leave it be for a bit to allow any sediment to settle, after which you bring the main tank back online. The same maintenance also applies to the header tank.
This way you keep the generator online and keep filters clean which adds a few nines to your theoretical uptime (Murphy's Law will strip them again, of course, but it's the thought that counts/certifies :) ).
"Or perhaps the facilities engineer used the antics of his feathered friends to cover up a cock-up of his own."
Yes. The switch gear for this kind of thing is protected from the elements, and birds. The automatic versions don't have any moving parts on the outside, and all access is (supposed to be) sealed. Mine don't even get mud-wasp nests in them. More likely he forgot to wire the automatic bit.
I've installed many manual versions that have a three position switch that can be thrown from the outside, usually labeled "PG&E", "OFF" and "Generator" from top to bottom. Each position has space for a lock, to keep prying hands from killing you. This switch disappears into a weather (and bird) proof box, where all the mechanical bits are contained, safely away from bird poop, insect damage, ice storms, and the like.
Strictly, the claim isn't that birds got into the important switch. The bird poop got into the important switch. The birds were somewhere above, probably right on top of the important box that importantly kept the important switch free of rain and snow, but not of poop. Which, it turns out, was important.
Maybe it was warm. Elsewhere than here, I've seen reporting that Starlink satellite dishes in use are apparently being found cosy by cats, which doesn't improve their function.
Not quite the same thing, but I once worked for a company where the air con in one of the server rooms failed at some point over the weekend and wasn't detected till everyone was in the office on Monday (we had several server rooms and we weren't allowed to know which held which servers, so it might have been developer- or tester-only stuff and thus not important enough for real time monitoring). I do know everything in the room was fried and had to be replaced. Expensive lesson in cost saving/disaster planning.
One financial institution I worked at many years ago decided to do a generator test late in the evening midweek - I was on shift that evening.
The failover goes Mains -> Battery -> Generator
On this test it went something like Mains -> Battery -> Generator (fails to start, everything goes black) -> Back to battery (everything comes on) -> back to generator (everything goes black) -> back to battery everything comes on -> batteries give out -> everything goes black and stays black.
No emergency lighting in the DC where the bridge was, it turns out, and no outside lighting to shine in through the windows - as there were none.
The hilarity, the mainframe batch runs that were in progress at the time, the chaos - fun times.
So issues with the generators and lots of issues with failed batteries, lots of root cause analysis, lots of beefing up batteries, failover procedures etc...
There was fun at one bank where the generator fuel tanks sprung a leak and flooded the basement canteen to a depth of 3 feet. They never did manage to paint over the diesel tide mark, and there was an odour of diesel for the rest of the time I was there which never quite left.
I remember in the late 90's a DR test that was supposed to go Mains -> Battery -> Generator
But instead went Mains -> Battery -> Extremely loud bang in Generator -> fire department and full building evacuation (and of some adjacent buildings, being in an old part of town) -> 2 days to go back online
The company next to ours (Netcracker I think) had a genny in an outbuilding that they tested every week.
This thing was noxious on startup and filled the car park with fumes. If you arrived at the wrong time you'd have a mad dash to get inside without inhaling, or else you'd retch up. It was worse for me arriving with the top down on the car - and nothing at all to do with having had a skinful the previous night of course!
Sometimes if the wind was in the wrong direction we could even smell it INSIDE our building, so after a few months of this we persuaded them to shift their test to 0700 before we all turned up.
The irony is - when we actually DID have a local power grid failure - they didn't even use it - or couldn't start it anyway.
I heard this story from a line technician with a local distribution company. They would register a very peculiar failure on one of their local 22 kV lines around the same date every year. It was a sudden potential drop very early in the morning, not correlated and not looking like any other typical failures they would see. It went on for some years with increasingly more targeted observations every year (and correspondingly more intense expectations). Eventually they got a young hunter on the team, and he agreed to actually go and observe one suspect section of the line at the usual time the potential drop was expected to happen. He would go and camp right next to the line for a week, before he would be presented with the clearest of explanations. The line became a temporary home to a huge flock of migrating birds who spent the night sleeping on the line. At the very instant the sun rose, every bird woke up at exactly the same time and took a dump. The number of those birds along the line was so huge that the, erm, potential drop they collectively undertook actually registered on the line diagnostics.
In 2000, a stork got itself electrocuted and managed to make half of Portugal, including the capital, go dark for a couple of hours because of cascading failures on the power network (allegedly due to a lack of infrastructure and security investment)
Happened at a company I worked at. We went through two rounds of redundancies, dropping from 120 people to about 80. And then two weeks later to 79 when our HR person was made redundant "as the size of the company meant we could no longer support full-time HR". HR was then provided remotely by the HR person at a sister company elsewhere in the country.
After having been told by the head of PR that my lack of experience in SQL meant I wasn't suited for new role A, that my application for a HoD post was aspirational, and that the only role I would be considered for carried a drop of over £17kpa, I sadly waved goodbye to the organisation, pocketing a £12k severance and having used the prolonged restructuring period to go job hunting. I was one of the last out of the door, which did unfortunately mean the leaving party and whip-round amounted to no party and a £5 book token.
Around 8 weeks later I bumped into the former head of PR in a tube station. She asked how I was and if I had found a new post. I told her I had - I was now HoD with special responsibility for repairing and optimising a SQL solution that the external developers had made a total mess of. Plus I had taken a £14kpa uplift in pay and had a 5 year guarantee on my contract. Being polite, I asked how she was. They'd got rid of her post two weeks after I had left; she had been unsuccessful in applying for the new post but didn't mind, as that was a drop of £23kpa, and she was just heading for an interview for another post as a deputy head of HR at the same pay. I didn't see her again and don't know what became of her. And frankly I don't care!
I had a boss who said that my job was safe, but that I had to make all of my team redundant. (My team and I worked in a remote office, with my boss at the head office.)
That was probably the worst morning I have ever experienced, telling each of my team in turn and escorting them to a waiting room! (They were allowed to remove their personal belongings en masse, after all the IT had been removed from the office.)
When I had finished, my boss thanked me for doing that horrible task, and then said that he had only told me my job was safe to make me do the talking to my team and that I was now redundant as well. (That was the whole telephone conversation!)
I could have done with one of these for him ===>
To be fair, cutting your HR team at the same time as the other cuts would be a bad move for everyone, whether leaving or being shuffled. Instead the HR staff get to deliver lots of bad news and get loads of resentment from people that don't see HR being cut, while knowing full well that when the rush is over, they're for the chop.
I worked at a company when they outsourced their HR.
It sucked. I had a confidential "what if" question (related to a medical test) and instead of walking over to talk with someone who knew me and whom I knew would not take unnecessary notes, I had to call a number, identify myself by name and employee ID, and they were required to log the conversation in my file. Since everything was "on the record" there was no honest advice, only avoiding liability.
That gave me a new appreciation for the many HR people who find ways to genuinely help employees (within the limits of their job). And yes, termination is part of their job, just as automating manual tasks is part of mine. The job comes with a sword, and doesn't require that you enjoy swinging it.
Many years ago - pre-IT days - I was working at a local radio broadcasting station. The place had lots and lots of lead-acid glass cells in a huge battery room to maintain DC supplies in the event of mains loss, and a backup mains generator of 30kW or so built onto the back of a Land Rover.
In the event of power loss, first talk to the power supply company to see if it was likely to take some time to fix, and then drive the Landie round to the power inlet and plug it in.
Came the day of a test, and the junior engineer appointed to this task disappeared for some time and returned to announce that neither he, nor any of his immediate colleagues, could start the Land Rover... oh dear, bit of a problem there.
After the post test analysis, a placard was affixed in front of the steering wheel announcing "This vehicle has a choke. Pull out before starting".
I must say I don't miss the days of manual chokes (showing my age) or manually adjusting the carburettor for different weather conditions / temperatures quite a few times each year.
Modern vehicles are so much less hassle to start (though conversely, when they do go wrong it's a "garage job" with all the electronics involved)
One engineering firm I worked for was located in 'spare' farm buildings. This 'farm' was a major agribusiness and had massive grain stores, under which there was a complex network of pipes to blast air up through the floor to dry the grain. The dryer system itself was located in a specially built blockhouse with all sorts of safety features built in, including a large wire cage over the air intake - something like a 2m cube. When the system started up it sounded like a plane taking off. First, auxiliary fans were started up on a standard star-delta time switch, then once the pressure reached a certain level the main fans were started by a massive contactor block - none of this was enclosed, after all nobody could be there on power-up.

Only, one day there was a hell of a racket, then everything went quiet. Panicked farmers came running over to us for help. Our most experienced guy happened to be in the office so he went over to investigate. Apparently the main fan motor was still smoking (but amazingly survived) and the contactor block had one pole completely burned away. There were mouse droppings all over the place, and the guess was that some had dropped between the contacts of that pole, causing it to arc.
Oh, and those farmers were not exactly poor. They paid for a new contactor block to be sent over from Germany by courier within 24 hours.
From my experience with home insurance: They covered everything, except for the thing that broke. If your roof leaks and you fix it before it causes real damage, they don't pay for the roof repair. Happened to me. If a 10 pound part in your bathroom breaks, and water leaks downstairs slowly, causing £5,000 damage before it is noticed, they pay for everything except the £10 part - happened to me as well.
Many years ago one of the main office buildings for the company I worked for at the time was being refurbished. Part of the refurbishment was a major upgrade to the UPS/Generator setup, partly to support the computer room being upgraded to a mini data centre, but also because the building was to house some other critical 24x7 operations. Most of the staff could be sent home in the event of a major power failure but the servers had to keep running and the 24x7 operations staff could not easily be relocated. So a system was designed that couldn't power the entire building but could keep the important stuff running for up to 10 days.
We had done various system tests but there is nothing like a full test with real people and real workloads so soon after the building was commissioned we planned a full 24 hour test of the emergency systems one weekend.
Everything went well - the UPS and then the generator swung into action as designed and the 24x7 staff never even noticed the lights flicker. Every test we did passed with flying colours and both IT and the facilities management team were just beginning to pat ourselves on the back when we got the call that caused the immediate abandonment of the entire test.
It seemed that one little system had been left off the emergency power circuits - the electric flush mechanisms for the toilets!
Oh well back to the drawing board - on the other hand a 2nd weekend of OT a month later.
I gather that the point of a toilet "macerator" is to allow installation with a smaller and/or more permissible drain pipe so that the toilet can be fitted somewhere that a modern toilet wasn't originally planned to be.
A pump is mentioned in https://www.manomano.co.uk/advice/macerator-toilet-buying-guide-3096 as well.
I worked for a while in a 100 year old former school building where something like the macerator pump was evidently fitted in each modern toilet cubicle. ManoMano says "Macerator pumps start up automatically when the flush is pulled." Our ones tended to start up randomly while you were seated, but I don't remember any unfortunate consequences besides surprise. ManoMano also says that one macerator can process multiple inputs, but I don't think we had that. The actual flush was perfectly ordinary, but when we had power cuts, which we did, I think only the non-macerating cubicles on the outside wall were to be used.
I also once used a public toilet where flush was activated by a wall button that was more or less behind my left shoulder blade: I flushed myself several times before I realised what was happening. At first I thought it was the operation of a mechanism to deter long stayers.
We had UPSs for the servers with about an hour or so of battery life, enough to allow for short-term power failures and give us enough time to shut down gracefully.
One day the power went off and stayed off as we watched the % capacity numbers on the UPS spool down, hoping we wouldn't need to shut the servers down because a chip tapeout was imminent and this would put us back several days. Unfortunately that didn't happen; it was several hours before the power came back on.
The problem turned out to be in the local substation (11kV? 33kV?) supplying the industrial estate, where it seems that an inquisitive cat had found some nice warm busbars to sit on (it was January) with inevitable consequences. Leastways they think it was a cat, difficult to tell from what was essentially charcoal...
Place where my parents rented for a while was on the same circuit as Rugby Radio Station, from where the Polaris nuclear submarines were controlled. During the six or so years I called that flat "Home", we never had a power cut. We were so near the transmitters that if you switched off a fluorescent light fitting, it would continue to glow for several minutes, slowly fading away to darkness.
Didn't have a switch at all for anything to crap on. It was broken on delivery of the UPS system, and was *still* broken when the facility went live several months later. The only way the generator could be started was by sticking a screwdriver into the carcass of the switch and wiggling it around.
When the inevitable power failure happened, we had to find our way to the UPS room, breaking several doors to do so as the default was to lock on a failure and some bright spark decided the only access was THROUGH the bloody machine room protected by these doors.
To make life more interesting, this was at night, before the days of a mobile phone with a light, and we only had one very dim torch. Best night-shift I ever had!
Before that, when working for another company, we did generator tests every Friday. The trick was to try NOT to get shut in by the shift-leader once the 12 cylinder marine diesel engine kicked in. Start it up, look each other in the eye, then bolt for the door....
Even more amusing was that although we tested that damned thing every week, the fuel level was determined by using a dip-stick - and the first time we used that damned generator it ran out of fuel!
I did work one place where the power cut, there was the strangled sound of a generator trying to start, and then nothing.
Upon inspection (luckily by some other unfortunate) it was found that a family of rats had nested right in the drive belt area, so the genny tried starting only to jam on the bodies of Mr & Mrs and the little ratlets.
Like I say, that was someone else's discovery to enjoy, not mine..
A building where I used to work is a skyscraper on the banks of the Thames. One of our users told a story of when they first introduced proper server rooms in the company. They built a state of the art (for the time) server room, with state of the art security, comms and more than enough power. In short, it was a good server room. When they refitted the lifts in the building, they put the motor room next to it. They also put the phone and power lines in another room on the same floor.
The significance of this will become clear.
The slight hitch is that they built both in the basement of the building, and apparently cut corners on sealing the floor. Anyone who has a basement will tell you that you can suffer problems with damp if you don't properly seal rooms. These two rooms were less than 20 meters from the Thames. When the river was slightly raised, the basement would flood, sometimes taking out the phones, lifts, electricity and servers.
By the time I worked there, the phones, networking, electricity intake and server room had all been moved elsewhere. The lift motor room was still in the basement, which wasn't ideal, but I suspect that moved when the building was fully refitted.
Still this was the same company that built a really nice new Gents toilet on the first floor of another building. A toilet that was only accessible via a moderately tall staircase.
As a proud affiliate of a data closet that has a shiny new (utility-supplied gas fueled) generator about to be commissioned, a UPS with runtime on the order of hours and a mains-only chiller, I need some new acronyms for this kind of clusterfsck.
PODOGO Power Out Data On Generator Out ?
POGODO Power Out Gas Out Data Out?
POGOF Power Out Gas Out Fscked?
Got their comeuppance
Old friend told me this... They had an air filtration system that sucked a lot of air (and dust) out of the factory
To save money during winter the air was re-circulated via the return fan and ductwork so that the heat didn't get away... and during the summer the vent was changed over to dump hot factory air outside and suck cool air in.
On this day, it had been in winter mode..... and then a bright spark said "Let's cool off a bit", walked to the control panel and hit the changeover switch.
And the vent changed over.
We think the pigeons were roosting there because they had got used to the noise.
10 seconds after changeover it began snowing inside........ then they noticed it wasn't snow.
So whenever you're scraping bird poop off electrical equipment/your windscreen...... remember that factory had its revenge for you
Back in the 70's a small 2000 line telephone exchange served the residents, nay subscribers, of the area.
A gas leak had occurred some weeks earlier under the Main Street. This had gone unnoticed.
The flammable gas instead decided to go on an excursion and managed to infiltrate the adjacent telephone cable ducts and headed towards the single story brick built exchange.
Now normally the ducts are sealed in the cable chamber to prevent gas infiltration, on this occasion a seal was not fitted to a duct.
The gas poured into the unstaffed exchange underground chamber and then found its way to the ground floor, housing literally tons of automatic (electro-mechanical) switching equipment....ah the good old days.
The gas and air mixed to the correct explosive ratio. Aunt Maud (regomised) lifted her handset in the Anne Boleyn Tea Rooms and a tiny spark from the exchange equipment completed the final call.
The whole front of the exchange blew out, the concrete ceiling of the apparatus room caved in on the equipment and all 2000 subscribers were cut off. No 999 calls to warn of a small nuclear sounding explosion could be made and no Dial-A-Disc or up to date Cricket-Line call could be made (though to some of us nowadays that would be a blessing).
It took some 8 weeks to run cables and connect to other exchanges to restore a semblance of service to the Anne Boleyn Tea Rooms and the 1990 or so other subs.
The good thing in all of this was: A: Nobody was hurt. B: The sound of the exchange emergency generator could be heard chugging away, generating power that nobody wanted.
Heh- we have quarterly checks on one of our gennys by the vendor's service tentacle.
Went to run the last test back in September, and was surprised when the genny failed entirely to start.
Turns out the chargers (both of them!) had failed (in different ways, thankfully!), and the battery set had completely died.
Two hours, a fresh set of batteries from the local dealer, and one frankensteined charger later, the vendor got it working again.
Last month, we got new chargers put in, and a week ago we had another quarterly check, which passed with flying colors.
When Santa Cruz Operation ("SCO") Unix and Xenix was a going thing, I was doing software and hardware support. One of our company's clients was running a vertically-integrated app for nursing homes. This app ran on Xenix. The client had an 80386-based PC which was connected to various character-based terminals and printers through serial lines provided by a 16-port serial board.
I observed there was no uninterruptable power supply in the system, and recommended that they get one. They ordered it, it arrived, I installed it, and left it to charge for a week.
After it had charged, I returned, and explained/demonstrated the monthly test routine to the local person-in-charge of the system, a secretary.
"First, go to the system console here." [PC keyboard + monitor]
"Second, make sure everyone else is logged out of the system. You can check this using the 'w' command at the dollar-sign prompt."
"Third, stop Melyx [the app] using its menus."
"Fourth, log out of Xenix."
"Fifth, log back in as 'root'."
"Sixth, at the pound-sign prompt, do a tape backup as shown here on page one of the Operators' Notes notebook."
"Seventh, at the pound-sign prompt, do a system shut-down as shown here on page one of the Operators' Notes notebook."
"Eighth, when you see the "** Safe to Power Off **" message on the screen, get down on the floor like I am right now, grasp the U.P.S.' power plug, and just pull it out, like this:"
I pulled out the plug.
We heard a "CHUNK", saw the PC's monitor and lights all go out, then heard the "WRRNNNNnnnnn...." of a spinning-down hard drive.
I turned to her and said, "And that's why we test these things."
"Ninth, plug it back in, then turn off the PC with the Big Red Switch on the side."
A phone call to the U.P.S. manufacturer and a short conversation got a replacement battery shipped out. After the new battery arrived and I had swapped it for the bad one, gotten the new battery charged up, etc., I had *her* do the test procedure, which I'd written up and placed in the Operators' Notes notebook.
Once upon a decade or so, I was part of a team that managed two hosted telco comms centres in London from, literally, the far side of the planet.
These had been very carefully designed, with a UPS to cover the period between the mains failing and the facility's generators coming on line.
The problem - the Mains Fail alarm was connected to the UPS output, not the UPS input, so on the day the mains supply failed to the hosted site but not the hosting facility (i.e., the generators saw no issue, so did not start) we didn't know anything was wrong until the UPS batteries went flat. Nothing, other than the UPS, was connected to raw mains, and the UPS didn't have its Input Fail alarm connected.
This happened twice and we could never persuade the hosted site owners to have the alarm scheme revised - it would not surprise me if the situation was the same today, many years later.
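The wiring flaw described above comes down to simple logic: the alarm should watch the UPS input (raw mains), not the output the UPS is busy holding up. A toy sketch with hypothetical names, not any real monitoring API:

```python
def mains_alarm(input_ok: bool, output_ok: bool, wired_to: str) -> bool:
    """Return True if the mains-fail alarm fires, given where it is wired."""
    sensed = input_ok if wired_to == "input" else output_ok
    return not sensed

# Mains to the site has failed, but the UPS is still holding its output up:
print(mains_alarm(False, True, wired_to="output"))  # False - no alarm raised
print(mains_alarm(False, True, wired_to="input"))   # True - alarm fires at once
```

Wired to the output, the alarm only fires once the batteries are flat, which is precisely when it is too late to do anything about it.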
My story about a power outage is from the mid 2000s. The company I was working for was the largest distributor of hardback books in North America. (You can use that to figure out who they were; they are long gone.) It was early September in southern California and temperatures were in their typical high 80s, low 90s Fahrenheit. I was walking into the building following the facilities person and watched her step over the non-trivial flow of water coming out of a drainage pipe from the building. Did she care that water was draining from the roof during the hottest and driest part of the year? No she did not. Did I? Well, I should have, but my mind was at that moment focused elsewhere (yes, I am a dog).

The water was coming from a cracked valve in the HVAC system's chilled water loop. Once the loop drained itself sufficiently that afternoon, the cooling system shut down. (The computer room was on a separate closed loop system.) The repair people were there by end of day, diagnosed the problem, and determined that a replacement part had to be shipped in but should be installed by the end of the following day. They were able to manually set the system to blow fresh, if hot, air in, and office work continued. The building is a 6 story sealed glass box (only slightly higher than it is wide) with no windows and only ground floor doors (and one roof door) that open, so those units were the only way to bring fresh air into the building. However it took 2 full days to acquire the part and another half day to replace the broken one. By this point, most every employee had brought in a personal fan.

The repair crew filled and bled the chilled water loop, then reconfigured the system to normal operation, which enabled the individual heat pumps - all of them - to turn on at the same time. Not a single circuit breaker opened due to the load: not the individual lines, not the 300 Amp breakers for each floor.
The building fuses, 2500 Amp on all 3 phases, did however blow with a non-subtle bang.
Our computer room UPSs were constant on, took up their load without issue, and all the computer room machines stayed up without a hitch. The standby generator was heard to start, and just as it had in testing, it warmed itself up for 30 seconds and was heard to lug down as it took the load on. Then it sped back up as it shed the load, waited 15 seconds, and tried to take the load on again, and again, repeating this while we all panicked and tried to diagnose the issue with the generator.

The reason, in hindsight, was simple: in addition to supplying power to the computer room, the generator also supplied power to the order processing floor. (To enable continued operation in the event of a long term power outage.) That floor, in addition to the computers and CRT monitors, also now had almost one fan per employee, and the startup load of all of those motors combined with the CRTs was too much for the generator.

We, the IT department, did not diagnose the issue in time to prevent the UPSs from exhausting their batteries, which instead of supplying the indicated 20 minutes of run time produced closer to 7 minutes when actually being drained. (They did not perform self tests of their battery system; the indicated time was based solely on load and the expected battery capacity, and the batteries were well beyond their expected lifespan.) The batteries on the storage array caches also failed, so we ended up with every single storage system on all of our Digital Equipment Corporation systems being corrupted. It took a half day to get temporary fuses installed and two days to restore databases from backup and roll them forward with transaction logs. The only saving grace was that the power went down on a Thursday and weekly automated resupply orders were calculated based on sales data through Sunday, which was received overnight Sunday-Monday, and by that point those systems were fully functional.
It took a full week to get everything up and functioning but our customers never knew.
So we reviewed the lessons learned. We were thankful for the correctly functioning backup strategy, we replaced all the batteries in the UPSs and storage array caches, documented the schedule for their replacement, reviewed everything the standby generator was supplying and shed about 30% of the non IT load, reviewed where the floor breakers were that the generator supplied, confirmed (with an electrician) it was safe for mere IT mortals to open them if needed, and documented that for all the IT people (we were staffed 7/20 with 4 dark hours). We also used the scheduled power outage to install permanent building fuses 6 weeks after the incident to load test the generator and new UPS batteries (with the systems running but databases shut down.) Long term a system was to be added to stage the load picked up by the standby generator (instead of picking it all up at once) with the ability to leave lower priority loads off to preserve higher priority loads, but the business entered into financial difficulty and went bankrupt before that was done.
The interesting part: every single one of the IT people (myself included) stepped over that flow of water that morning, and none of us questioned it. If we had caught it, it probably wouldn't have helped, but we all ignored it.
Do ensure that all of your essential systems are covered by the UPS, DR processes, backups, even documentation...
I did once hear of heartwarmingly successful UPS tests, building power-cycles, and application recovery tests at a company - BUT when the power to an office building cut out, they discovered one particularly important infrastructure server still sat under somebody's desk, where it had been forgotten about for 15 years. Bringing everything else down had never tested this part because, up until that point, it had stayed operational.
As for loud bangs and flashes, our house was once struck by lightning. The land line phone was blown right up the stairs, as was one neighbour's phone. Clocks blew up, as did the TV. I saw plasma glowing around the bedside lamps. There was a three-foot spark from the phone system to the central heating system, which told me that simply switching off, or even unplugging, probably isn't enough. I even distrust lightning surge protectors now, although to be fair, if someone else's house in the street is hit they may help. Our strike even blew up somebody's expensive valve-based HiFi amplifier which was left on permanently - he lived six houses away. What saved us was the cast iron Victorian guttering to ground. The only building infrastructure to need replacing was a vertical line of roof tiles from ridge to gutter.
Our little village was fed by a single pole transformer in the field along the side road from the main road (A5). One evening there was a thunderstorm in progress, when - BANG - and the whole village went dark. The pole transformer had been struck by lightning and was now hanging by a few strands of wire from its pole. We were without power for over a week while a new transformer was installed and cabled up.
I was with my wife, who was on a ventilator in intensive care, when the power went out. I was impressed with the nurse running to her with a manual mask as the lights came back on. The ward sister explained a few minutes later, having come off the phone, that it had been site management testing the backup generator, and that they are meant to let the ward know in advance. 30 minutes later it happened again as they switched back, again without warning ICU.
Anyone remember the CAPITA/Vodafone (Newbury, UK) data centre outage back in 2005?
A planned 10am UPS failover test for the Apollo House data centre didn't quite go to plan.
(Lights go off on the dot of 10am, lights stay off, lights DON'T come back on, lots of running feet heard downstairs, banging doors, shouting... lots of sniggering upstairs, mostly from the guys running their workstations on their own deskside UPSs and patched into the WAN via their shiny new Vf 3G phones.)
Resulted in everyone non-essential being kicked out to head office up the road or sent home.
Days of datacentre walls being rectified by percussive adjustment to fit 5-inch-thick 3-phase LVAC cables, emergency 40ft UPS diesel generators being delivered to site, Microsoft HQ (Reading) first responders being sent to site, lots of data recovery, heads rolling, and lots and lots of angry phone calls from both management and customers.
Ahhhh... the joys of working for ...... ;p