"I was told that despite being an intern, I should have known that the server named 'Orion' had the label 'NS1'," Wayne told Who, Me?
Sounds like they were a bunch of Wayne Carrs
Returning to work on Monday morning can feel like a mistake, which is why The Register welcomes readers back to their desks with a fresh instalment of Who, Me? It's the reader-contributed column in which we tell your tales of making a mess at work, and somehow surviving. This week, meet a reader we'll Regomize as "Wayne," who …
Doesn't everyone have an on-prem secondary DNS server?
And more than one forwarder?
You learn pretty early on never to trust labels. On most systems you can blink LEDs from the OS or eject the CD caddy.
Hostnames (/bin/hostname) can be suspect too, as having a server and its replacement concurrently sharing the same name isn't unknown - check the IP and OS/software versions.
Just unplugging the network cables from the doomed host, walking around the office, and returning half an hour later to deliver the coup de grâce will usually pick up cases of mistaken identity.
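For the LED trick, here's a minimal sketch of the lowest-common-denominator version, assuming a Linux host whose first disk is /dev/sda (on kit with proper locate LEDs, ledctl from the ledmon package is the tidier option):

    # Hammer the disk with uncached reads so its activity LED flickers
    while true; do
        dd if=/dev/sda of=/dev/null bs=1M count=512 iflag=direct 2>/dev/null
        sleep 2
    done

Then stand in front of the rack and look for the box winking at you.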
The inconvenient truth is that many sites, and particularly those where the developers rule the roost, have little idea of what runs where or indeed why.
All well and good, but if you were explicitly told which hardware to turn off, you don't actually care what services it's running.
Also, KVM switches are notoriously mis-labelled. The chances of a server physically replacing a different server but retaining the labels on both the power cord and the KVM switch... pretty high.
At a place I worked, every tech "knew" that host "steel" was our primary DNS server, and that host "sellwood" was our secondary.
However, we never had hardware interns, only software interns. Hardware techies were never allowed into server rooms or telecom closets until they had proved to the tech lead that they had mastered our institutional tech-knowledge and had consistently displayed a good, non-cowboy-like attitude.
During the good era there, managers were denied access to server rooms, etc.
During the less-good era there ... one day I found myself in a server room with five managers (Why me? My manager's whim) for a server shutdown/photo-op. The server in question was a DEC Alpha, the only one we had. Checking the back, I noticed someone had already removed the network patch cable.
With the five managers' and my index fingers on the power button, it was "Say 'cheese', one, two, three!"-photo-flash-power-button-*click*-polite-applause.
Nothing bad happened, but I still disliked the situation.
I was part of a project to replace a large, obsolete piece of telecoms kit. The picture showing the big-wig cutting the large multicore cable was a total fabrication. It was just a piece of multicore lying around, not connected to anything, as the actual equipment had been removed weeks ago.
Never trust a press release.
You'll be telling us next that the big fancy switch that the various "celebrity" non-entities use to turn on the Christmas lights in some random town is actually connected to anything, rather than just being the cue for some underpaid tech to actually close a breaker somewhere less obvious and gaudy...
> You'll be telling us next that the big fancy switch that the various "celebrity" non-entities use to turn on the Christmas lights in some random town is actually connected to anything, rather than just being the cue for some underpaid tech to actually close a breaker somewhere less obvious and gaudy...
I heard a story, which I cannot find now. Instead of a turn-off it was a turn-on. Dam-size electric generators are often worked in multiples. They HAVE to be very nearly in sync (phase angle and phase sequence) when cutting another generator into the array. A synchroscope (or lamp) tells you when it is safe. As you guessed, Manglement was very proud of their new dynamo and made a celebration of it. The tech extensively coached the VIP on when to throw the lever. You know what happened. If you get it 180° wrong, the machine and grid voltages add rather than cancel, and it is a double maximum fault. The kilo-ton alternator threw itself off its heavy concrete foundation. Months and millions lost.
Apparently there was a good side. The real switch is small and plain. The techs went into the scrap and built a Very Impressive And Shiny dummy pedestal switch for such events. And rented it to adjacent power companies for their cut-in events. Natch the shy grey-shirt tech with his/her hand in the back of the console in the back of the room is the real turn-on.
Ha. We did that for real in 1996 as part of the pre-pre-opening ceremonies for the Summer Olympics. In fact, the fault-detection-and-failover as the bolt cutters did their work took fewer milliseconds during the actual broadcast than it did during the full dress rehearsal.
I was once filmed moving coloured liquid (picked randomly from our stock of culture medium) from one microtitre plate to another with a Gilson pipette for a local tv news spot about HIV testing. It was filmed in a spare bit of lab because the camera crew weren't allowed in the Cat 3 lab. I did wear all the protective gear though.
(and my mum taped the segment off the telly to show to my aunties etc.)
--> me changing out of my howie coat into a green gown and plastic apron -->
I was part of a central government project to get seniors "on-line". The government gave our local club 3 Dell PCs on the understanding that we would keep them after the project finished (2 years). We installed them and got everything working. The Mayor, MP, senior local government officers, etc., showed up for the grand opening - along with a young female photographer from the local paper. I noted that when we were all lined up the "volunteers" were hidden behind the great and the good (which I expected). What I hadn't expected is that the photographer noted that the black Dells "didn't photograph well". She moved everybody so that we were in front of an old Mac mini, Apple keyboard and mouse, with an Acer screen that my wife had donated as she had moved her stuff over to an iPad. Yes, that was the photograph that was printed...
A manager at a company I worked at once yanked a disk out of a client's IBM Netfinity server while demoing to a prospective new client, trying to show how amazing the hardware RAID card was at ensuring the server stayed running while the array was degraded.
The server promptly fell over. RAID array wouldn't rebuild and the server needed restoring.
Yeah, I've had IT support for a large medical services company try to drop me in the deep end for their incompetence.
I don't remember the exact details now, and even back then I was kind of amazed it was possible, but they'd somehow managed to create a non-redundant OS volume in the first part of a bunch of disks and a RAID 5 volume in the remainder (I know, I'm struggling to think how as well).
Pulling any one disk would, obviously to me, result in their DC falling flat on its arse and taking out a hospital network, but their on-call guy insisted it would be fine and I had to do it.
I protested and explained it; he got annoyed and insisted, strongly. He even got his manager on the line (who was not happy, as it was the early hours of the morning).
When it all, as expected, fell flat on its arse, he then tried to blame me.
Fortunately I'd insisted he show me the config first, and had emailed him, his manager, and my manager with a screenshot, his demand to "just do what I tell you", and an explanation of why it was stupid.
I also won a bunch of plaudits because I brought it all back online with zero data loss, plus a recovery plan to fix the non-redundancy, which they implemented across every site that had a DC using the same config.
Sounds like the time we took over a customer from a previous IT support provider. They had an Exchange server with RAID 1 redundancy. Unfortunately, this redundant volume was small, so they put the databases on a USB-attached external drive.
They were backing it up to the same drive as well.
Possibly my most heroic call while on the IBM Netfinity/xSeries post-sales front line: some guy called in a right panic. He knew nothing about computers, let alone servers; his company's IT guy was off on holiday, during which time he had been tasked with replacing the DDS tape in the company's sole server, and he'd messed that up: he had yanked both hard drives out instead.
I told him I couldn't promise we would get it back online, but with both hard drives pulled out in near succession, there was a good chance of success. I sent an email with detailed instructions and asked him to call back so I could guide him through the process. A while later, I was helping him create a DOS boot floppy with the RAID utility, put the first HDD back online, then the second, and resync the RAID 1.
And indeed the server was able to restart after that.
I don't know if he went on to tell the IT guy what had happened after he was back, as I'd instructed him to, but as he'd done the right thing (call support, fess up, follow instructions very carefully) after the initial fuck-up, I had hopes he did. I doubt he's a Register reader, which is a pity, as it would certainly make for a good Who, Me? story.
I'm fairly sure I've told this story before...
At a University (that shall not be named) that I used to work at, we were migrating the student records system from an old Burroughs system to VAX. On the last day before Christmas there was a decommissioning ceremony held in the Computing Centre machine room for the old Burroughs, attended by a number of the users of that system. One of the long-time users was given a large pair of scissors to cut up the 8" floppy boot disk. Another user got the privilege of turning off the main switch to the Burroughs...
Imagine the sounds of various hard drives spinning down... along with lights going out and drives spinning down *across the room* - somebody had forgotten that one of the new VAXen was on the same circuit as the Burroughs :-(
Sadly, when power was restored, the VAX didn't spring back into life, so a service call was required...
"The inconvenient truth is that many sites, and particularly those where the developers rule the roost, have little idea of what runs where or indeed why."
DevOps is great in theory. The problem is most developers don't bother with the "Ops" part as it's not as sexy. So they spin up a load of services which slowly bitrot over time.
And they should not. The Ops part should be handled by "Operations", as the name implies. But most "Operations" people are so used to the old "fire and forget" mantra that they can't, and don't want to, work in a far more dynamic environment.
It's not the developers' job to maintain servers and services, and DevOps is not a methodology to let sysadmins do even less work.
The article case is another specimen of sysadmins' laziness - why bother to update labels when "everyone" knows which server is which?
That was one of the fun things with Linux servers, even early on - they didn't just have a command to open the CD drive tray, but also one to close it again, so it was quite simple to set up a loop. This was before the days of virtualisation, when you just had a herd of boxes to do the job.
Never had the chance to let this loose on all servers at once; I would have loved to see the effect. Missed chance...
:)
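For anyone who never got to try it, a minimal sketch of that loop, assuming a Linux box with the standard eject utility and a motorised tray at /dev/cdrom:

    while true; do
        eject /dev/cdrom      # open the tray
        sleep 2
        eject -t /dev/cdrom   # ...and pull it back in
        sleep 2
    done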
Your servers had optical drives? I suppose most did, decades ago, but it always seemed like a waste to me. A drive that's used once to install the OS, and then never again. (Software installs and even OS upgrades were already network-delivered, at least on our systems, by the mid-1990s.)
So ours were CD-less. Granted, they were mostly 3Com telephony rack systems that couldn't have fit an optical drive anyway. (The control unit had a PCMCIA card slot, its sole concession to removable media.)
What they DID have, tho, was a console command dedicated to manipulating the entire chassis' extensive complement of blinkenlights. (Not only could we flash the lights on whatever blade we needed a tech to pull, but we could even script a chase pattern pointing directly at its release lever. All they had to do was follow the bouncing LEDs.)
"Your servers had optical drives? I suppose most did, decades ago, but it always seemed like a waste to me. A drive that's used once to install the OS, and then never again. (Software installs and even OS upgrades were already network-delivered, at least on our systems, by the mid-1990s.)"
Which is great, until some idiot with a JCB cuts through your network cables in the road outside and your entire site goes 'dark'. Or if your main network controller goes down. Or your telecomms provider has an issue that crashes *their* systems and takes yours out in sympathy. Or any of a dozen nasty things that can happen and render your entire site useless. Even more so when all phones are VOIP and you can't even phone anyone to alert them.
Having everything onsite isn't perfect, but if everything is offsite then we don't really need to keep you, do we? We can run it somewhere else where costs are much lower...
"Doesn't everyone have an on prem. secondary DNS server ?"
No. That's a single point of failure. Putting all your authoritative DNS servers on the same physical/logical network or electricity supply is remarkably stupid. Suppose a PFY unplugs the wrong Cat-5 cables or a JCB cuts the building's power supply or the network switch fails or...
The master and secondary DNS servers are only supposed to be queried by resolving servers. Which cache the answers they get. If an authoritative DNS server goes away for a while, it won't make a difference to anyone making DNS lookups if they have a vaguely sane DNS setup. The resolving servers will be able to return their cached answers for a while, sometimes days or longer.
"And more than one forwarder ?
DNS forwarding is for idiots. And another easily avoided SPoF. Use proper resolving DNS servers.
In my (admittedly limited, in this case) experience, secondary DNS .... doesn't work.
Try it: on your (Linux) machine, set a primary and a secondary nameserver.
Now block the primary nameserver with iptables.
Queries go out over UDP (or maybe TCP - but regardless), time out, and everything dies. There's no attempt to use the secondary nameserver. I honestly can't figure out why it's there. Maybe there's a way to configure nameservers with a shorter time-out than the whole DNS resolution process, I dunno.
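Concretely, the test being described looks something like this (192.0.2.x are documentation addresses standing in for real resolvers; adjust to your setup):

    # /etc/resolv.conf on the test box:
    #   nameserver 192.0.2.1   <- primary
    #   nameserver 192.0.2.2   <- secondary
    # Block the primary, then watch what the stub resolver does:
    iptables -A OUTPUT -d 192.0.2.1 -p udp --dport 53 -j DROP
    iptables -A OUTPUT -d 192.0.2.1 -p tcp --dport 53 -j DROP
    getent hosts example.com   # does it stall on the primary's timeout?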
It works fine across many other OSs.
The timeout from the resolver process trying to contact the primary resolver is definitely noticeable, so it is always best to keep the uptime of the primary resolver up there (dnsdist works great). I.e. don't get into the mindset of "oh, it's just one of many, we'll get to it when we get to it" - some processes will hang while DNS resolution times out.
It wouldn't surprise me if Linux systems never fail over.
"In my (admittedly limited, in this case) experience, secondary DNS .... doesn't work."
You need to get more. Secondary DNS service works just fine. Just ask any of the thousands of DNS providers and ISPs who use it. And have been using it ever since DNS was invented 30+ years ago. And their many millions of customers who rely on secondary DNS service to reach $website-du-jour every second of every day. In fact, for pretty much all of the most popular content providers, web sites and TLDs, ALL of their public-facing authoritative DNS servers are secondaries.
"Try it: on your (Linux) machine, set a primary and a secondary nameserver."
You're supposed to put these on different boxes/VMs, ideally in different nets in different locations. If you put them on the same box and then block incoming traffic to port 53, it shouldn't come as a surprise if neither server answers, because the queries never reach them.
"Maybe there's a way to configure nameservers with a shorter time-out than the whole DNS resolution process, I dunno."
There generally isn't. (Though I suppose all bets are off if you're dealing with Poettering's abominations against humanity.) Those sorts of configuration hooks aren't needed anyway if you set up DNS correctly. You also seem to be confusing the three different components of DNS service - authoritative, resolution, and stub resolver behaviour - without understanding the differences between them and how they interact with each other.
"Queries go through UDP (or maybe TCP - but regardless), time-out, and everything dies. There's no attempt to use the secondary nameserver. I honestly can't figure out why it's there. Maybe there's a way to configure nameservers with a shorter time-out than the whole DNS resolution process, I dunno."
You appear to be referring to nameservers specified in /etc/resolv.conf to be used by a Linux machine.
What you are describing is the GNU glibc resolver's behaviour of using the defined nameservers sequentially; from the resolv.conf manpage: "If there are multiple servers, the resolver library queries them in the order listed".
Contrast this with the musl libc (used in more niche Linux distributions) resolver, which queries all the defined nameservers *in parallel* and uses the first reply received: https://wiki.musl-libc.org/functional-differences-from-glibc.html#Name-Resolver/DNS
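For what it's worth, glibc's stub resolver does have a few knobs in resolv.conf(5) that soften that sequential behaviour - a sketch, with illustrative addresses:

    # /etc/resolv.conf
    nameserver 192.0.2.1
    nameserver 192.0.2.2
    options timeout:1 attempts:2 rotate
    # timeout:1  - wait at most 1 second per server before moving on
    # attempts:2 - cycle through the nameserver list up to twice
    # rotate     - spread queries round-robin across the listed servers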
Label everything!! Front and back! This goes for patch cables too: label each end with the far-end termination name. If it's NS1, that should go on the end at the patch panel along with another descriptor - I used asset tag numbers. Label power cables too, whether they go to mains or UPS.
Funny, I spoke to an ex-colleague recently and he told me about plans to consolidate some switch cabs at the old employer. Last night I had a dream they were having issues with slow ping response times and I was looking for the network cable labels - ha!
Even sitting here in my lone office with my single-user PC and multiple extensions like printer, USB gadgets etc., I have a confusion of cables into my biggest-I-could-get connector - and I have a problem finding what-goes-where when something weird happens. Yes, that labelling idea would even help ME. Definitely the very best resolution to the perpetual mess the computer age has created!
My then employer moved into nice new premises. The network ports scattered around the place were all nicely labeled -- but the Wayne Carrs who had just moved out had taken their patch panel with them, leaving a mess of dangling unlabeled CAT-x cables in the wiring closet.
Somebody had to buzz all of them to figure out which cable went to which wall port -- an O(n^2) task. Fortunately, n was small enough to be manageable.
The things I scrupulously label at home are wall warts -- volts, amps, and if known, which device it came with. (I've just recently discovered one that's actually center negative! Thank you, Brother -- not. That one got a special bold-face label.)
I feel your pain - centre negative barrel connectors!!! And so many times in this case there is no indication of the correct polarity on the equipment. For some reason a lot of pro-audio stuff (guitar pedals, synths, ...) is centre negative - I guess if that is common across the industry you work in, then you know to be careful. For some donated household equipment that I don't have a wall wart for, I have to open it up to find the correct polarity. But while I'm in there I'll swap it to centre positive, and then clearly mark it. Dancing Santa thanks me, although no one else in the household is impressed at Christmas.
My dad once hired a car and went to the pub for lunch, as you did in the 1960s. A copper came in to ask who owned the car with registration FYR 35C. My dad thought it sounded familiar, having only just signed the hire documents. On closer inspection it was found that the front and rear plates didn't match: the front was EYR 35C.
Our security team forbids labelling servers, lest any ne'er-do-wells get in and head straight for the backup server or domain controller.
Process is to look things up in the rack elevation tables, which are kept *pretty* well up to date and have someone flash a light on the thing to be doubly sure. Which sounds ridiculous, but I never trusted labels in the first place so it really just eliminates step one of three in identification with the other two steps above now being canon.
On the eve of Y2K I was told to "Shut everything down" so I did.
Ten minutes later the 'phone rang and a highly agitated voice asked me why the routers were not on line. I told them that they had been shut down as instructed. This did not go over well and I was told to restart them immediately as it was causing chaos on the network. I did so and thought no more about it.
The following day I was summoned to a telephone conference to explain why I had shut the routers down when I should not have. I explained that I had followed the instructions in the e-mail I had received, and kept, which said to shut everything down, no exceptions. A lot of muttering and distant shouts followed, then I was told that the instructions had been wrong and that I was in the clear.
A potentially nasty business and I was glad that I had hung on to that e-mail.
In this day and age, you should NEVER delete any email (that's not spam/junk/etc.). Archive it of course, but hold on to everything. Nothing will save your ass like written instructions...
If you are certain it's something that will come back to bite you in the ass (e.g. you're being asked to do something you know will cause a massive fail, or is potentially illegal/immoral), then also keep hard copies of the entire correspondence chain, especially where you asked for confirmation, and put down your reasons why you think this would be a bad idea to do. Keep it, and keep it safe.
Nothing will be able to protect you like plunking down a few pages of A4 showing the full timeline leading up to the cockup, and your repeated attempts to prevent it in the first place...
User: "Can you give me access to <some random network resource>?"
Me: "Sure, no problem, send an email to IT support and cc your manager, if they confirm it, I can provide access."
(Me is Head of IT and I also drum that into each of my admins, no changes to running systems without written confirmation.)
Absolutely.
Back in my (civil engineering) consulting days, one of our clients hired a new project engineer ("Jeff"), who asked for an odd scenario to be modeled as part of a remediation project mandated by regulatory agencies. I pointed out it was non-trivial, unlikely to be useful for the core deliverables, and wouldn't leave much wiggle room in the budget or timeline set by regulators. I was told "it's fine, I've got approval." There was some extra budget for experimentation, so it was within Jeff's purview.
Second time it happened with an even bigger odd-ball request, clearly out of scope, I cc'd my supervisor & the senior client engineer ("Gary") in the email that explained it was well outside of scope and was going to blow the budget and timeline. Was told by Gary to "do whatever Jeff wants, he's dealing with regulators" and a "yep, what he said" from my supervisor.
When the regulator due date hit, the sewage literally hit the impeller. (Sewer modeling project.) I got called into a very loud conference room with a very angry supervisor and a livid Gary, whose budget was gone and who had no usable deliverables to give the regulators.
When they paused to breathe, I told them the dates & times of the emails where I had pointed out this was going to happen and how they had all approved out-of-scope work and told me not to worry about the timeline. I handed over the hardcopy email chain I'd had sitting at my desk.
Gary read it, went through various shades of red, then sighed, sat down and said "well, (excrement). OK, what's the fastest you can get me the core deliverables?" Jeff was cut completely out of the loop. From that point on, Gary was pretty easy to work with.
My supervisor, however, was still salty, thinking this made us look bad. I took it as a sign that management didn't have my back, and escaped. Given how they waited until the last minute to tell several of my clients that I was leaving, despite having a full month of notice, I was unsurprised when the company started closing offices months after I left.
> In this day and age, you should NEVER delete any email
Figured that one out decades ago (Compuserve sysop). While there have been a few email purges over time (I once had a 2 megabyte quota; these days that's not even a good cat-pic), I still retain work emails from 2007 in case some chit from my final years runs into a fan again. (Sad: two of those oldest emails are from people who have died since; another is from a once rising star who seems to have burnt out.)
Many moons ago in a previous life, the company used MS Communicator as chat. I found a donateware utility that automatically saved all conversations when you closed the window. I said to myself I would throw the guy $5 every time it saved my arse. I was still chucking money at him a decade later, when conversations from the early days of projects were extremely helpful in detailing who said what and when. It is probably the most expensive bit of software I have ever bought/paid for. Worth every penny though!
I learnt the hard way that good server naming requires intelligence and not just a pattern. I learnt this when the name of the 'nix server I was meant to be reinstalling was only one character different from the live production one - and that character was "c" on one/"e" on the other. When your main clue is the server name at the command prompt, well, the human brain can easily see what it expects to see rather than what's actually there...
I also learnt that day the joys of colour-coding your PuTTY sessions, so anything done on the production server was done in bright red text, orange on DR, green on Dev!
At my last place the server names were made up of a mnemonic for the location and a three-digit code (latterly four digits...). And they shuffled them every 12 months or so, changing the numbers. It was hard enough to remember what to log into on a Monday morning, much less where the services ran. The services were named after sea creatures (because they swam in the stream of data...). Halibut is down... err, what does halibut do again?
Long time ago in another place the servers were named after animals. Elephant failing to mount mouse caused great hilarity.
Names are important for comprehension, as I regularly commented during code review when faced with stunningly uninformative variable names (tmp, temp2, myBanana, newBanana).
A few jobs back, and a long time before I was involved, there was this server called dcom. After providing service for a few years it was replaced by newdcom, which was replaced by (this is where my recollection may be wrong) newnewdcom and then by newdcom2.
The names may have been slightly different but the general idea is the same.
Mind you, at my current job we have multiple switches called core-stack-1. You need to read the domain name to verify which site each is installed in.
During my domain migration/acquisition years, I lost count of how many Windows NT domains were named NTDOMAIN, or NT, or DOMAIN, or LAN.
This was extra fun during mergers where you had two with the same name, which happened more than a few times.
"indium" vs. "iridium". The former was a server, the latter my desktop box -- in whose relevant font, the visual difference was very subtle indeed: basically, a pixel's worth of character spacing. (The i's dot was surprisingly unhelpful.)
So obviously colour blind people were not allowed to be server admins at this company.
Green / orange are the same colour to me.
If you are doing that, you need to use colour-blind-safe colours, i.e.:
white text on a black background
yellow on black
white on blue
or, if you want to see them in programs:
the MS-DOS command prompt (not PowerShell, as that has hideous colours)
a standard-ish terminal emulator
WordPerfect 5.1
Just because you aren't colour blind doesn't mean that somebody else isn't.
We had an intern who was colour blind which made for some interesting spreadsheets. That said, the actual work was excellent so the colour did not have too much of an impact.
I have the opposite: I can see colour so well that I once worked in a colour lab making pigment recipes for ABS plastic production. I sometimes have to remind myself that some men are indeed unable to distinguish colours that for me are totally different.
At my dentist's, there was this one ... hygienist? Dental assistant? Not sure what her usual job was, but when it came time to do a colour match for a crown, my dentist would always call her in. I guess she had a particularly good eye for it.
I'm not surprised, for two reasons:
- there are FAR fewer women with colourblindness
- it takes time to learn colour matching. Not only do you have to develop the 'sight' for it, it also depends on what pigments and colourings you have available, so doing it often does make you better.
Some of them are so good it makes your jaw drop, which in your case would have come in handy too :)
Re: females (those with two X chromosomes) having better color resolution.
This is a well-known genetic fact. If you want a good color sense, ask someone who can tell you the difference. I suspect this is the reason that men's clothes are fairly "bland" and similar. Just watch the "red carpet show": who do the commentators point out? It is usually the female, and not the escort.
For server naming, you either choose something random but memorable like "yogi", "booboo" or something locationally and platform descriptive like CA-AIX_04. Which is fine until you have to move your data centre to Alaska.
For shell sessions, I set the prompt colour (zsh helps here) - Red for root, blue for application users, green for personal stuff, magenta for anything else. Fortunately I'm not red/green colourblind.
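For the curious, a minimal sketch of that in ~/.zshrc (only the root/non-root split shown; the application-user and catch-all branches would just test the username the same way):

    if [[ $EUID -eq 0 ]]; then
        PROMPT='%F{red}%n@%m %~ %#%f '    # root gets an unmissable red
    else
        PROMPT='%F{green}%n@%m %~ %#%f '  # personal accounts in green
    fi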
Destroy server E3E33E; the one right above our main production server E3EE3E.
I also did the fancy Putty colors, great minds and all that.
Had a GPO that set the Windows server background colors based on function, so that anyone RDP'ing to them had a very clear picture of what risk they were operating in.
"perform a hard shutdown on a server named 'Orion' that could be found in a certain cabinet."
If that's what he did, he's 100% not at fault.
As an aside, the worst labelling I've ever seen was on a collection of about 50 backup tapes to be rotated around a bunch of servers daily, consisting of hand-scrawled unintelligible scribble, often in pencil.
Runner-up is rapidly fading pencil scribble in patch cabs / wall sockets.
Back in the Beforetimes when I had a real job, we wanted to organise our lab's liquid nitrogen tanks, what with the Human Tissue and Embryology Act incoming and all that. A database plus a load of paper was the way to go, but the problem we had was getting labels that would survive in liquid nitrogen. The black permanent marker pens were fine though (Sharpies weren't really in the UK at that point).
I think these days you just buy pre-printed tubes that come with barcodes already printed on them
"Sharpies weren't really in the UK at that point"
I'm 62, in the UK, and I've still never seen one. And I still have to do a mental juggle to figure out what is being referenced when people call a marker/felt-tip pen a "Sharpie". Using the Sharpie brand name as a generic for a marker pen seems to be a recent US import into the English language. There are plenty of examples in English of this happening, but it feels as if the brand genericisation of Sharpie for a marker pen has arrived in advance of the ubiquity of the brand itself in this case.
Each VLAN has a separate colour assigned to it. Each cable running from a switch or server to a wall port has the appropriate coloured tape wrapped around it (uplinks are a rainbow of fun). Any cable without coloured tape is automatically removed from the switch and patch panel. All non-active switch ports are deactivated and all non-active wall ports are not patched.
They're even worse when building services removes them from office #1, proceeds to demolish office #1 and pulls the cables into new office #2 and reuses the old jacks without any kind of order. Fortunately, getting someone from IT into the weekly site engineering meetings was a godsend and allowed us to stop that kind of torture.
Still not a great solution - old embossed Dymo labels are still legible after several years, but the adhesion is inconsistent, resulting in a pile of perfectly readable labels on the floor (admittedly that might be down to the surface they're being attached to, as some are still in place after decades).
"Runner-up is rapidly fading pencil scribble in patch cabs / wall sockets"
I've switched to UV markers for domestic use. The ink remains invisible, and as the markers are made for theft-prevention marking (Edding 8280), what you mark stays marked, forever. Which is also an excellent argument NOT to use them in some places, btw - handle with care.
It means I can mark things without it being obvious or for someone to feel compelled to remove it - it's nigh invisible on white. You just need a decent UV torch.
Back many moons and orbits around the sun ago, my mom labeled dahlia tubers. The labels were aluminum tags written on with a ballpoint pen. While the ballpoint ink didn't survive being planted, the impressions on the tags did, even after being buried for nine months. I suspect the sun wouldn't fade the impressions either. Something to be said for embossed labels, not thermally printed ones.
I've mentioned before about our engineer who was sent to a site on 'Bradford Road, Manchester'. It was long before mobile phones and satellite navigation. Unable to find the premises, he eventually phoned home to find out that he should have been sent to 'Manchester Road, Bradford'.
When I first moved to London, I had a flatmate who worked for a firm specialising in setting up exhibits at various conferences and expos.
One day, he picked up a load for a conference in Frankfurt, punched the address into the SatNav, and took off. 14 hours later he arrived in Frankfurt and couldn't find any exhibition centre at the address he'd been given. Stopping at a petrol station, he asked the attendant for directions to the exhibition centre. Cue hysterical laughter, until eventually the attendant pointed and said "600km that way".
My mate had been directed to Frankfurt (Oder) on the border with Poland instead of to the slightly larger and more central Frankfurt am Main... Six hours of frantic driving later, and a very last-minute stand being put together, my mate finally managed to get some rest...
Lesson learned: always double-check the address...
When I worked a couple of trade shows at the Frankfurt am Main Messegelände (Exhibition Centre) in the mid-80s, that would have been a difficult mistake to make. The border crossing into the then East Germany would have been one's first clue...
For those unfamiliar, the Main and Oder are the rivers that the two cities are on. The Main is a tributary of the Rhine. The Oder ultimately empties into the Baltic Sea, and along with one of its own tributaries (the Neisse) essentially forms the border between Germany and Poland.
Or taking someone to Stamford Bridge, when they actually wanted to see Chelsea play at home, as happened in 2008.
https://www.thetimes.com/travel/destinations/uk-travel/england/london-travel/sat-nav-cabbie-takes-earl-spencers-daughter-to-wrong-stamford-bridge-rv6k6mjt30n
Any old iron? Any old iron?
Any, any, any old iron?
You look neat. Talk about a treat!
You look so dapper from your napper to your feet.
Dressed in style, brand-new tile,
And your father's old green tie on.
But I wouldn't give you tuppence for your old watch and chain,
Old iron, old iron.
When I create a new machine I give it a name that never changes, ever. The hardware name.
Said machine will then acquire jobs, and with those jobs come names, e.g. NameServer1, www, ... one name for each service. A machine can be known by many names.
When those jobs are moved to other machines those names (eg www) move with those jobs.
The machine's hardware name never changes, it is immutable.
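In DNS terms, a minimal sketch of that scheme (names and addresses purely illustrative):

    ; hardware names: immutable for the life of the box
    bertha   IN A      192.0.2.10
    doris    IN A      192.0.2.11
    ; service names: CNAMEs that follow the job, not the box
    www      IN CNAME  bertha
    wiki     IN CNAME  doris
    ; when the web job moves to doris, only one record changes:
    ; www    IN CNAME  doris

Clients only ever talk to the service names, so shuffling jobs between machines never breaks anyone's configs.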
Many years ago I played for a scratch rugby team (i.e. invited to play; when you arrive, you greet the other team members, often for the first time).
Rugby (for our left-pondian readers) occasionally involves throwing the ball in from the side of the pitch between two lines of players, one line from each team. The skill is for the jumpers in the line to jump in unison with the throw from the Hooker. As we had no time to learn any details, it was decided that a call from the Hooker of a city name on the island of Ireland meant the ball was being thrown to the front of the line, and a Welsh city name meant it was being thrown to the back.
First lineout, the call was "Bangor" - a city name that matches both criteria.
That reminds me of a campground I once stayed at in Chattanooga, Tennessee, which plays up its location on an (American) Civil War battlefield. As you enter from the east, the side roads to your right have names like Grant, Sherman, Mead, etc. To your left: Lee, Jackson, Bragg... Civil War generals all -- Union ones on the north side, Confederate ones on the south.
This is why you always get more than one identifying label.
Check for the correct name label, then confirm it is the correct server by verifying the service tag as well since they can't be accidentally placed on the wrong server.
The client should always provide both the server name and the server service tag when creating a ticket; otherwise they can't fault the tech for unplugging the right name but the wrong server.
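On most x86 kit you can also read the service tag/serial from the running OS, which makes the cross-check cheap (a sketch, assuming a Linux box with dmidecode installed):

    dmidecode -s system-serial-number   # run as root; compare against the ticket

If what the OS reports doesn't match the sticker on the tin, stop and ask questions before pulling anything.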
One of my REALLY OLD but super-GREAT managers way back in the day RESERVED one day of the year for our admin team to go through our entire VAX-cluster (100+ machines) in Calgary and inspect and ensure every cable and port was labelled with accurate port numbers, sorted by function and place, and a readable English function description on high-contrast laser-printed labels! AND we also put fancy metal flashlights into holders bolted to each cabinet, along with TWO sets of spare batteries and a pencil/paper/pen/notepad and clipboard, so we could write notes and SEE the labels during an emergency!
That inspection and repair/maintenance session usually took around 12 to 14 hours and was quite exhausting, BUT we did save our soft under-exercised tushes MANY TIMES with that bit of emergency prep! We also ensured that all fire extinguishers, power and cooling systems, networking, communications and storage systems and other sundries were well updated and maintained! During the early 1990s there was once a MASSIVE SNOWSTORM with -45 Celsius temperatures that pretty much brought Calgary to a standstill for a whole week, BUT those emergency preparations ensured SIX SIGMA uptime (99.9999%) for that week with no issues for our remote petroleum industry customers located around the world!
We were well prepared and even had emergency food, water and bedding for the SysAdmins, who pretty much had to stay the entire week in a reserved "bunking area" in the big commercial tower where the VAX-cluster was! We all got a sweeeeeeeet bonus on our paycheques for keeping everything up and running that week, and even the locally-made Italian dinners, the well-aged whiskey and the port wine were company supplied! Our off-time was spent playing our library of Atari 2600 and Intellivision games on the 120-inch projector TV in the meeting room during that week of rotating emergency shifts! Hard work but very fun for us young-uns at the time!
Prior Preparation and Proper Planning Prevents Pitiful Performance!
We kept that mantra and it worked out to save us many times!
Nope! My stuff ain't bull crappola! Wait until you see the downloadable PDF file for the Systems Architecture Technical Document for the 256-bits wide combined-CPU/GPU/DSP/Vector Array processor coming out I think this coming Tuesday, August 4, 2025! Kind of a shocker from North Canadian Aerospace since the tape-out design is even being released for a PUBLIC download as worldwide, completely free and open source under various open source licence terms!
Just because YOU think it's B.S. does NOT make it so! Read 'em and weep, kiddo!
We got a LOT of people with degrees and experience that are waaaaaaaaay beyond yours doing all this! It also helps I did the design, coding and even the hardware description language encode for the 2D/3D line/curve drawing, RGBA/HSLA/CMYKA/YCbCrA/CIE-XYZA/LABA pixel processing, various hardware-accelerated SOBEL/CANNY edge detection filters, various other 2D-XY and 3D-XYZ hardware 3x3/3x3x3, 5x5/5x5x5, 7x7/7x7x7 and 9x9/9x9x9 convolution Kernels including lo-pass, hi-pass, notch, exclusion/inclusion, and 50 more pre-done convolution kernels for integer, fixed point and floating point values at 8, 16, 32, 64, 128 and 256 bits wide for this super-chip! And I also did the design and coding of synthetic vision systems and Fractal AV CODECs and real-time AV filters at up to 10,000 fps at DCI-4K/8K/16K/32K/64K/128K and megapixel squared resolutions for aerospace imaging!
Not to brag too much kiddo, but I am literally one of the Top-5 graphic programmers IN THE WORLD!
This is exactly what God did.
The server that runs Creation was supposed to be updated (we're still on the v1.0 release candidate) a long time ago, but due to a mislabelled server cabinet, the beta server receives all the updates and DLC. It's now on v80.4, the "everyone now has a six pack" expansion: its humans live forever, can fly, summon snacks at will, and you don't need to find your spectacles to locate Your Goddamn Lost Spectacles.
When down in the shitty dungeon office with no signal, one of the prick perm guys comes down: "Who's turned the server off?" We were all contractors. "This server is to never be turned off." Yeah, that's fine, but you're all unfriendly cunts and never bothered to come and tell any of us that your shitty but "important" server was in our room and needed to always be on.
Cocks.
The culture in IT at the NHS is generally shit, with a bunch of back-stabbing, blame-everyone-else cunts.
It happened to me - the console in the secure server room had been left logged in, plus the host names were short and very similar.
I went up to the correct rack, hit the spacebar to wake up the console, found it logged in as root, then happily typed shutdown -r 0 ....
How was I to know that, because we were one console short, it had been connected to the server next door?
Cue a lot of unhappy testers. But thank goodness it was just testing. The software came up fine afterwards.
The principle of "faking it" with a "show switch" has a long history, and sometimes the backstage operators fix problems as well. This goes back over a century and a half. When the golden spike (yes, it is actually solid gold, I've seen it!) was driven to complete the US transcontinental railroad, the dignitary who was supposed to hit the spike missed, but the telegrapher saw the problem and closed the circuit (which the hammer hitting the spike was supposed to do) anyway.
So we all need to accommodate the foibles of those who (think they) are important. History is on our side.
The dignitary who missed was none other than Leland Stanford ... Probably the best and most historically accurate version of this story was put together by the late Huell Howser in the 8th season of his TV show California's Gold, archived here:
https://blogs.chapman.edu/huell-howser-archives/2006/01/09/the-lost-golden-spike-californias-gold-8010/
For those who are unaware of Howser's work: despite his somewhat unusual presentation style, he's kinda addictive, and his ~2,000 videos can become a bit of a rabbit hole. Don't say I didn't warn you.