How can you tell when a user is lying to you?
Easy, their lips are moving.
Friends of friends can be a pain in the nuts when you're in IT: you find yourself invited to someone's house “for a beer” and then somehow end up kneeling in a dark, dusty under-stair cupboard trying to coax a six-year-old router back into life. The only thing potentially more daunting is former employees of suppliers of …
Yes, I emailed The Cloud yesterday to let them know that a user had clicked a link to their instructions for connecting but thought it was a spoof site: there was a spelling error, and the screen grabs of How to Connect a Mac under OS X showed something very different to what she saw on her Mac. I could see the images were from around the OS X 10.3/10.4 era - when PowerBooks had separate AirPort cards - and got her connected anyway.
The reply from The Cloud helpdesk: it's OK, it's the genuine site, but a recent server glitch meant that some really old pages from backup had been made live by accident.
Really? I thought. I wonder what the newer instructions are. So a quick visit to the WayBackMachine... and the content of the pages hadn't changed since 2011, which is as far back as it went, although some of the window dressing had.
If you're going to bullshit someone, do it to someone who isn't going to check on your story and call out your bullshittery.
This came in useful yesterday.
Bit of personal help for a colleague with her Outlook.com. I do the company IT that I don't outsource (i.e. I fix and de-louse the laptops), but this is personal and I've never used Outlook.com. 'The spam folder is eating everything.'
OK, she admitted to me that she'd turned the setting back on that puts everything not in the address book into Spam, even though I turned that off last week. She'd googled it and apparently decided to try it, but that was admitted upfront.
So I said, have you been right-clicking on the good stuff in the spam box to tell it that it's not spam? "Oh yes." A quick Google from me later, and it appears that Outlook.com may not recognise moving stuff from Spam to inbox as a "this ain't spam", so when done on the mobile there'll be no effect. Checked: have you done this on desktop as well as phone? Oh yes.
OK, then she says, how do you mark it as not spam again? Right click. I don't seem to be able to do that. So I show her; for some reason the menu disappears really quickly. So she's done it. Then I ask, you know you've been saying you've been doing this - but you've just demonstrated you didn't know how to? Have you been marking stuff as not spam?
Yes. Honest. I must have just forgotten how to do it. Really? Oh yes. I pause... Really? Are you sure? Oh yes. I think it took 5 minutes to get an admission - and now the problem I couldn't understand is solved. It was amazing how many bemused queries it took from me though.
I believe in labelling everything as soon as it's signed off as working, because it can be hard to figure out what anything is after any period of time. I usually put tape over unused sockets so it's easier not to mistakenly use the wrong one.
I also draw a diagram of the panel and note which cable goes where and how that cable is labelled. Bung it in a "documents enclosed" sticky-backed envelope and slap that on the unit so it won't get lost and is still likely to be readable in years to come.
Great idea until...
some over-zealous person removes the documents 'because they are a fire hazard'. Probably, as a non-techie, they file them straight into the wastepaper basket/shredder.
Then the inevitable call comes...
'But I left the documents taped to the top of the Server'.
'What do you mean they aren't there?'
As you look at your filing cabinet and hope that the duplicates you took all those years ago have not been turfed out in one of your periodic spring cleans...
'I may have some copies of them', you say just to buy time.
Two hours later you call back, copies of said docs in hand, but as the footie/GP is about to start you can't be arsed to spend the rest of the day being a nice guy.
'Sorry, I don't have the copies. They must have been shredded.'
As you put the phone down, you hesitate for a second...
Then you walk out of your office thrusting the docs into the shredder on the way.
You think to yourself, 'At least they are now.'
Time for a pint methinks.
"Probably as a non techie, they are filed straight into the watepaper basket/shredder."
At which point you get to charge like a wounded elephant. When they complain, you tell 'em to take it up with whoever threw the documentation out.
I've done this on a couple of occasions. After the second time around that person no longer worked for the company.
I used to be as punctilious as you (I sometimes taped the 'envelope' inside the machine - depending on space, airflow and likely operating temperature) but I got ground down over the years by being forced to support other, less careful, *bastards'* installations/set-ups/godawful splatfests.
I was young and innocent. I miss that.
<sigh>
When I was a young lad, I used to even include a boot floppy I knew worked and had the drivers I needed. All nicely tucked inside a clear plastic folder with the notes I had for the server build, taped to the inside of the case (talking PC towers here).
Every single customer I showed this to gave me an extra few quid for my efforts.
I miss those days.
Labels? A shop I worked for ~8 years ago had that beat.
Each end of the cable was labeled. Cables followed a specific color code, maintained across the data center. And there was a database that not only listed each specific cable, but what rack, slot, and interface it went to, plus color and length, searchable by ANY of those parameters.
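(For the curious: the bones of such a cable register are simple enough. A minimal sketch, assuming SQLite and made-up table and column names - the shop's actual system isn't described here - might look like this:

    # Minimal sketch of a cable register, searchable by any field.
    # Assumes the sqlite3 command-line tool; all names are hypothetical.
    sqlite3 cables.db <<'SQL'
    CREATE TABLE IF NOT EXISTS cable (
      label     TEXT PRIMARY KEY,   -- ID printed on both ends of the cable
      rack      TEXT,
      slot      TEXT,
      interface TEXT,
      color     TEXT,
      length_m  REAL
    );
    INSERT OR REPLACE INTO cable VALUES ('C-00042', 'R12', 'U07', 'Gi1/0/3', 'blue', 3.0);
    -- Query by any parameter, e.g. every blue cable in rack R12:
    SELECT * FROM cable WHERE color = 'blue' AND rack = 'R12';
    SQL

The hard part, of course, is keeping it updated - which is the rest of this story.)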
Then the guy who maintained both the cable plant AND the database got laid off (I got laid off in that round as well. . . . ). And things slowly degraded from there.
A year later, they called ALL of us, asking if we wanted our jobs back. We smiled, and quoted a figure. They balked. We continued on in our new jobs.
Two weeks later, they called back. They'd pay what we asked. Too bad, that price expired, and the new price was 10% more. They balked again.
Week after that, call three. I let it go to voicemail. As did pretty much everyone else.
No idea how it turned out. . .
Keith Glass wrote, "Each end of the cable was labeled. Cables followed a specific color code, maintained across the data center. And there was a database that not only listed each specific cable, but what rack, slot, and interface it went to, plus color and length, searchable by ANY of those parameters."
I had a similar setup at a place I worked a few years back. The problem there was more direct, though: our boss would come in and attack the switches like a howler monkey on amphetamines that had just seen its young threatened by the wiring closet. Every label now documented the way it used to be; we had no idea of the actual state of affairs as our boss updating documentation would have been too much like right. I left soon after I observed him pulling a clock down onto some fiber, breaking a handful off at the switch, and then walking away without letting anyone know things were about to go south.
Also Sprach Robert Helpmann: I had a similar setup at a place I worked a few years back. The problem there was more direct, though: our boss would come in and attack the switches like a howler monkey on amphetamines that had just seen its young threatened by the wiring closet. Every label now documented the way it used to be; we had no idea of the actual state of affairs as our boss updating documentation would have been too much like right. I left soon after I observed him pulling a clock down onto some fiber, breaking a handful off at the switch, and then walking away without letting anyone know things were about to go south.
I was lucky. We were pretty much as mission critical as it got, so we were configuration-controlled to an almost anal-retentive level, and our Physical Plant guy had to PERSONALLY sign off on any changes to wiring, etc. We built out that facility from scratch, and actually put in extra work to make SURE it was done right, the first time.
Then our entire crew got laid off (apparently, we were too expensive, and yet not old enough to be protected by the consent order that kept them from laying off anyone 52 or older. . .)
The PFYs who replaced us. . . were apparently like your boss. Which is why they wanted the team back: they were paying massive fines for not maintaining uptime.
Too bad, so sad. . . .
The 'expensive' IT staff always get laid off because they don't seem to be doing anything, and "nothing ever goes wrong anyway".
Then a couple of years later, it is discovered that they were "not doing anything" because "nothing goes wrong", and nothing went wrong because they were good, and this is also why they were "expensive". However the cheaper replacements were not as good, and a lot more is now going wrong...
I have a boss who would walk into our shop and want to know why no one was out working. I'd tell him that if we aren't in the shop, then that means stuff is broken and we are out spending the company's money.
Last week he came into the shop and said he was glad to see us in the shop not spending the company's money.
You can teach an old dog new tricks.
Since we all carry a camera with us everywhere we go, a few pictures of the back of each server, each switch, etc. in the "before" state in the old data center can be worth their weight in gold when it comes to the "after" data center, given the propensity of labels not to be updated when things change, or to fall off due to the heat inherent in a server cabinet or in transit, etc.
I first recommended this 15 years ago, before cell phones had cameras but digital cameras were readily available. It took a little cajoling, but the guys who executed the move thanked me profusely afterwards. I've since always brought this up, and modern smartphones mean there is no excuse not to do it.
Also, if you label the front and back of the physical kit - hostname, device ID and serial number - it makes working in the racks a lot easier, especially if you are never likely to actually visit the datacentre the kit is hosted in (had to do this with a refresh of new kit last year, and a Brother label printer was ideal for this.)
It also makes the datacentre engineer's job a lot easier if they ever get an early-hours call to check on or power cycle an unresponsive piece of kit.
Labelling cables is all very well and good, but most people don't/won't bother.
However, if you're about to disconnect everything, to re-assemble somewhere else, perhaps that might be a good time to at least take a photo of the back of the servers?
Otherwise you have to rely on the old methods: which ports are free of dust, and where the kinks in the cables line up.
A client I had had been Dymo'ing labels in the "flag" style with a cheap "home and small business" Dymo. After a couple of years wafting in the outflow of the various servers and arrays they'd dry up and fall off. I persuaded him to buy a bit more of an industrial wrap-around type and had the dubious pleasure of replacing the labels in two data centres. It took weeks and about 20 rolls of tags to do the several thousand cables. Did not enjoy, but the client was happy.
The problem there is that the Brady wrap-around cable labels we sourced, made especially for this job, fade with time and server-room conditions, so in a few years they can become illegible. We labelled every 3 ft and at each end of a cable run, so usually we could pick up the next label along, say, 3 years down the line and read it, but it's still not ideal. I'd still like to find a better solution one day. Like you say, the Dymo tapes spring apart over time and ping off when the adhesive goes bad. Something that lasts 10 years reliably, say: in 10 years it'll be well past tech-refresh time, the cost long since amortised and time to redo, but it should survive that long at least.
We also made cable schedules detailing source port, source device and position in rack, and the same for the destination. We printed these out and stuck them inside the door of each cabinet, and kept a copy electronically. I also used to run cables so they could be traced by eye, securing them into the tray without crossing where possible, not just some crappy spaghetti bundle disappearing into an overhead cable tray. They were all custom length, with colour schemas, and checked on a TDM. Once we had a new manager who, in his keenness to sweep clean and cut costs in a "modern way", questioned the repeatability of my fingers, a decent set of crimps and a well-used Cyclops cutter, and bought in a batch of cables. My average failure rate was about 1%, caught on the TDM; we tested the bought-in batch and they ran about 12%, and quite a lot were under length and didn't fit. Back to buying bulk cable and making it in situ...
I got called back to the last place I helped build out like this three times, to correct the work of others who came along after me and didn't keep it up, so the cable schedules went to crap and failed quality audit. They valued their uptime - there were massive SLA clauses if it went bad - and one of the big sells they had was taking clients round the server room to see the workmanship there.
Yes, I was anal about our structured cabling, and it and racking weren't really my area - my IT background then was dev and security - although by education I'm a mechanical engineer, so there's some logic about doing physical things with care, I suppose. It seemed half the staff thought it was beneath them somehow, but it did give a massive sense of achievement to stand back and see built-out floors in a DC all done with precision and pride and working properly. I've been on many projects since that just slopped things into place with no care at all, with loads of problems as a result, and it makes me a little bit sad inside. Maybe it was my unicorn project - we all have them, the one you compare every other project to afterwards and find they don't measure up.
My diagnostic systems, mostly microwave stochastic cooling chains, probably have long since been recycled at CERN, but I guess the cable guy will still know about all my cables. He had a small engraver/printer that slipped over the cable and permanently stamped them. He then wrote it in a book. All my cables were in the 9-figures, PS-AA & LEP, LEAR. Guess with LHC they'll be up to 15 digits!
"He had a small engraver/printer that slipped over the cable and permanently stamped them."
Those aren't exactly cheap - 500+ squid - which is why they're not widespread. And the ones which print onto heatshrink generally mean the tubing needs to be fitted before plugs are crimped on. That's fine on the structured side but not so useful in the patch frame or behind the servers.
"like you say the dymo tapes spring apart over time and ping off when the adhesive goes bad."
The standard Brother label adhesive does too, but they make three other types of TZ label (laminated lettering, so they _can't_ fade) which are bloody useful:
1: Flexible labelling tape (does what it says, good for sticking directly on fatter cables)
2: Strong adhesive: great for cable flags
3: Security tape: makes it bloody obvious when someone's been fiddling ("Someone" kept pulling ID labels off the front of kit because "it looked untidy" - it looked a hell of a lot less untidy than a big white checkerboard pattern when these ones got pulled off.)
Staedtler lumocolour pens work pretty well on most cat5/6/fibre, as (surprisingly) does a ballpoint pen.
Some time back, as part of a large PBX changeover, people were trying to identify private circuits and unmarked exchange lines on an ancient and messy frame.
Our 'manager' paid a visit with his boss in tow - and moaned about the time being spent on identifying circuits over a multi-building site.
His boss actually suggested cutting cables to see who complained.
After his ears had stopped ringing from several of us grunts telling him he was a total fuckwit, we asked him which circuits in the hospital (UCH) were going to what critical equipment, and which might be crash lines.
"Something's happened to the digitizer ... it's not working any more and it was fine last week!"
"Have you been playing with anything?"
"No, nothing! And I need to digitize some maps urgently."
<90 mins later on customer premises>
"So, when you said you hadn't been playing, how did the digitizer end up plugged into the mini-DIN audio port instead of the mini-DIN peripheral port next to it?"
"Oh .. err .. umm .. I was reading the manual and it said it supported MIDI so I might have plugged my keyboard in while I was having a beer after work on Friday."
* sigh *
......has got to be the single most annoying phrase to hear in IT.
1. Of Course it was working yesterday, otherwise you would have called me yesterday - you moron.
2. Something working at a given point in time is absolutely zero guarantee that it will continue to work for the remainder of time itself.
Users - it's why we can't have nice things!
1. Of Course it was working yesterday, otherwise you would have called me yesterday - you moron.
Really? It had already been "broken" (misconnected, powered off, forgotten about by the system drivers, whatever) for a week or two, but all the jobs since then were totally non-urgent (and thus postponed because, well, non-urgent), and now this one crucial job needs to be done by, oh, yesterday noon, and you have to drop whatever it is you're doing, no matter its priority, and come over right fucking now to fix this shit.
1. Of Course it was working yesterday, otherwise you would have called me yesterday - you moron.
Really? It had already been "broken" ... for a week or two, but...
Not just computers. I used to work at a radio station and once got a call about 10pm from the guy whose show was just starting, "half the desk isn't working". He flat refused to transfer to another studio, so I had to go in and try to fix it "live".
Turned out that the twit on air before him had tipped half a pint of cider in the desk, but had soldiered-on rather than calling me out when it happened. Yes, there was a very strict no-food-or-drink policy, yes, it was that person's last live show before leaving.
As it happened, that was the very newest desk in the station and the control surface was just a control surface - a pot, a couple of switches, a fader and a connector. No electronics. All the electronics were in a rack, well away from the cider, which had flowed through the faders and switches, out of the drain holes and all over the "talent"'s trousers.
Conductive plastic faders wash very well under the tap, and I only had to replace a few wipers.
Back on topic, labelling was paramount at the radio station, helped by the copious use of multicore cables that were numbered and coloured as manufactured. I have brought that culture of labelling with me and I rarely travel anywhere without a marker pen :-)
M.
Sorry, but if you don't know what plug goes in where, or if you don't think to take a photo or at least sketch it down at some point, how the hell are you trusted to be in charge of a business-critical SQL cluster?
I mean, seriously. That was verging on the "This will corrupt the array, do you want to continue" questions and it takes one keypress from there to lose everything (especially the "Do you want to pull the config back off the array"! Wow!).
Hell, I have everything from photos of access control panel internals to ethernet cables in walls, to sketchy notes about red being 12v and brown being 5v and whatever else all over my phone. Let the engineer do their job, photograph it, keep it somewhere safe (i.e. with the rest of your documentation of that system).
I once had an access control system on a door, and it had an emergency exit button and - when I took the front off to see why the door wasn't functioning - realised that the cables were set up completely wrong. Someone had changed something on the system at one point to pinch a cable for an open button and not wired it back correctly. But I had photos from before and after, so it was easy to prove what had happened.
The best user lies:
- "It wasn't me" (yes it was)
- "It was like that when I found it" (no it wasn't).
- "I didn't touch anything" (yes you did).
- "I didn't know that" (yes you did and/or you should have).
- "I don't know what happened" (yes you do).
- "I think I saw person X doing something on it" (no, it was you, and you're trying to spread the blame).
- "I did put in a ticket" (no you didn't).
- "I plugged it all back in" (except for all the critical cables)
- "No, it is definitely switched on" (no, it wasn't)
- "I already tried that" (no you didn't).
- "Well, that's not part of my job to do that" (yes it is, you just don't want it to be).
From experience I have more or less done this and lived to tell the tale...
So I got sent as a junior Unix SA to Milan for a Friday/weekend. The job was to move the two Solaris NFS servers from one row to the next. A lovely jaunt, and off I go.
I checked and rechecked the labels on the DR server. I even took my packs of WHSmith dot labels in 8 colours to mark the cables and ports on the server. Server at the top of the cab and disks below in a long chain of SCSI chaos. All beautifully labelled.
Shut down the DR server remotely but being old Sun kit it wouldn't power off remotely, so I powered the labelled DR server off. Disassembled it all into a neat pile in the DC.... At which point I rechecked and found that the prod server was dead.
Small panic as I realised that the production NFS server for an investment bank trading floor was ... on the floor during the trading day. You have never seen an SA move so fast to rebuild a server (at least it was labelled).
Powered it all on. Came up 1st time despite it being red buttoned with live filesystems. Not even a fsck. Went upstairs to check the trading floor. Nothing had even missed a beat. Every trader was out to lunch and their terminals locked. Should have realised that this was Italy, so I did the same.
Post mortem showed that someone had swapped the CPUs of the prod and DR and not the labels, because the prod box wouldn't reboot reliably. Marvellous. Never trust the labels.
Well done !!!
Been there done that.
Amazing how quickly the 'Company Standards' go out the window when someone wants to fix a problem with senior manager(s) breathing down their neck. :)
I have walked into a 'Rats nest' of cables that have not been labelled since 'Adam was a lad'.
The only option was to go through the lot overnight and work out the 'Real' configuration.
Un-paid overtime, but better than trying to fix a problem when all systems were live during office hours.
As usual No Thanks for saving their necks in the long run. :)
Eventually, this sort of thing made me leave the ranks and go the Consultancy route.
Not the same sort of fun but at least you got paid for what you did and stood or fell by 'your' actions.
Still like playing with the kit though !!! :)
Ohh... that reminds me of a client who had some lovely racked servers (don't remember the brand) which had a nice round button on the front of the server which you could press and it would illuminate an LED on the back ... this was clever as you could make sure you were plugging/unplugging from the right machine in the cabinet.
What was not so clever was, that the nice round button on the front was right next to another nice round button which was the power.
Ooops!
Introduced to this 20 years ago in my 1st job...
Sales team, urgent call - keyboard not working and contracts have to be sent in the next hour.
"What seems to be the problem?"
"Keyboard not working, some keysnot responding.."
"Was it working yesterday/this morning?"
"Yes"
"Has anything happened to the keyboard, sat on, bashed, snack spill?"
"Of course not!, why are you asking these questions, just get it fixed..."
I sighed and went to the sales office with a spare keyboard...
"Can you show me to the broken machine?"
"This one in the corner..." (No significant activity in the whole area - certainly nothing suggesting near immediate loss of revenue/customers etc)
I pick up the keyboard, it makes a SSHHHH noise as it rises at an angle..
"Are you sure nothing ha been spilled?" I asked while plugging in the spare?
I was sent packing by the "busy" team for asking too many questions...
Back at base I decided to look at the keyboard, undid the screws and...
About half a jar of instant coffee floods out onto my bench - not just a little spill...
A few months ago... Bunch of people having a meeting. One wants to plug the meeting room TV into his laptop and I get a call that the supplied VGA cable is wrecked. When I get there, I'm shown how one pin in the laptop end is completely mashed flat. He explained carefully to me that the mashed pin was the reason the colours on the screen were all wrong. Remembering these are important people, I kept my mouth shut and dug up another cable, but it was pretty obvious that he'd wrecked the cable himself, jamming it in and shoving harder because it wouldn't fit.
I think they believe that we're gullible idiots. Obviously if the pin was mashed flat in the cable before he got to it, I probably would've got a call from the last person to use it.
Labelling everything is a good idea. Including command line prompts for multi-user systems.
I remember many years back when I was working with SCO Xenix. We had several servers (386 PCs) and from time to time they'd get a bit sluggish and then fall over. So us admins were expected to shut them down cleanly at the first sign of trouble rather than having to rely on fsck. One of the problems, though, was actually getting logged in, because sometimes they didn't even have the resources left to do that.
So this one time we all noticed a problem with one of the servers, so I shot off my chair and dived into the server room. As luck would have it, I discovered that the offending machine already had a root prompt sat waiting. So I quickly typed the shutdown command and breathed a sigh of relief. It was short-lived though, because the next thing I saw was a message 'Connection terminated by remote host', and from the machine behind me came the unmistakable sounds of a server winding down..
After that day we made sure that every root user included the machine name in its prompt.
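(For anyone wanting to do the same today: on a Bourne-style shell it's just a couple of lines in root's profile. A minimal sketch - the exact file and syntax on SCO Xenix would have differed, and the file path here is only an example:

    # Sketch: put the user and machine name in every prompt so root always
    # knows which box it is about to shut down. Typically set in root's
    # profile (e.g. .profile); location and syntax vary by system.
    PS1="$(whoami)@$(uname -n)# "
    export PS1

Cheap insurance against typing shutdown into the wrong window.)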
Bwhahaha....
I only laugh because I too have been there
Remember years ago being asked to diagnose a twitchy Solaris machine + software. The system was running some mission-critical financial app, so I thought to myself: hey, I have a brand spanking new identical machine sitting right next to it that wasn't in use just yet (and wouldn't even get an install for another week), I'll restore the backup to THAT machine and start poking around to see why ${mission_critical_server} was having the odd bad hair day...
Nearly had a heart attack when, during a reboot of the new "test" machine, I saw the host name of said mission-critical server fly past on the serial console.... A kind of "oh shit, which machine did I actually reboot?" moment. Lucky for me it was the test machine, but still nearly shat myself :)
Take a page out of Dell's (old) book and color code your ports. Purple, green etc. - much easier to ask the user if the purple wire is in the purple socket than if the SFP is in the Gi/0/3 port. Just make sure to avoid color-blind combinations like red/green.
Photos are your friend. And do what old film photographers used to do - take multiples, from multiple angles, because you only get one chance to capture the moment.
Light. The RJ45 connectors with the built in lights on the plugs at each end look like they could be my new best friend. Hopefully the price will drop...
Because you will be the one who ends up needing it. Whether wires or programs, in 6 months you will most likely forget EVERYTHING you knew about whatever system you have. Labels are nice, and be sure to label components as well; then you can ask the flunky nice user to flip the switch on the (no, not that one) proper box.
There is a nice article in the Wall Street Journal about family tech support: mom & dad calling offspring to assist in their problems.
We do so much for so little (*SIGH*).
Yes, we call the feature "This Damn War" ... the expression being that of a weary soldier on the battlefield (in this case, the battlefield of life in IT). Bonus points if anyone can remember an IT publication of the 1990s that had a similarly titled column (written by some of the same people).
To be fair, why were the SCSI cards left in there as a bear trap? Because consultants like to breeze in and out, leaving loose ends to ensure future work.
Likewise, why wasn't it documented/labelled when it was first set up, rather than blaming the sucker who ends up having to move something they don't quite know about?
We have a policy here:
We make clear to the users that we have no interest in belittling them or getting them into trouble; we want to get the problem fixed as quickly as we can, and we need user honesty to do that. If you call us and, when asked, tell us truthfully what happened/what you did, then you will be treated nicely and offered honest advice. In most cases, this takes disciplinary action off the table. If you disrespect us by ignoring that advice, or waste our time by hiding facts you should have known might be relevant, we will be far less understanding and may seek disciplinary resolution for what you did.