
Brian may not have been the messiah ...
but he certainly wasn't the naughty boy in this case
Sorry, couldn't resist
Few among us are faultless, but more often than not it's users and managers who are in the wrong when IT goes awry. Which is why each Friday The Register offers a fresh instalment of On Call – the reader-contributed column in which you share stories of being asked to bail out bores, brats and blockheads. This week, meet a …
That's because the users and managers keep their jobs and are free to mangle things again.
An IT guy who fouls up is more often than not fired from the company, so he doesn't get much chance to foul up again.
Back in the day, as they say, slow WAN links could be a pain in the proverbial.
I inherited a 2Mb dedicated internet connection which was supposed to be sufficient for Citrix traffic from two remote locations, all our internet usage, and replicating backups off-site. It was sufficient, just, until the backups kicked in late afternoon, so my first job was to shift that traffic off to the failover ADSL connection.
One day I was suddenly inundated with calls about the system going very slow at the remote locations, so I logged in to the main firewall and, sure enough, the WAN connection was being completely swamped. Further investigation showed the excess traffic was coming from an Akamai server. Once I located the recipient PC and had strong words with the user concerned, we isolated the problem.
He was trying, not unreasonably, to update his TomTom SatNav, but there was an issue with the transmissions from the server. The update was split into packets; if the server didn't receive a response for a packet it would send it again. Due to the slow connection very few packets were ever acknowledged in time, so the server just kept sending more and more data until the line was totally overloaded. Having killed the update at the user end, and told him not to try again, I blocked the relevant servers on the firewall and eventually the packet flow stopped.
The same user also decided to have the Sky website as his browser homepage so he could keep up with the football news. Its multiple videos caused serious congestion on our connection too (something which also occurred on our Parallels RD servers thanks to Microsoft Edge's default home page - easily tweaked with a GPO).
In that scenario I'd say: everything which is not Shitrix, i.e. anything outside the target IP/subnet, gets offloaded to the ADSL line by default. Just a simple change of the routing table, with some adjusted metric settings if you want the normal internet to fall back to the 2Mb line on an ADSL outage.
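Something like this, as a minimal sketch - assuming a Linux box is doing the routing (in practice it would more likely live on the firewall as policy rules), and with every address, subnet and interface name invented purely for illustration:

# Hypothetical setup: wan0 = the 2Mb leased line, adsl0 = the failover ADSL.
# Pin the Citrix/remote-site traffic (192.0.2.0/24 here) to the leased line...
ip route add 192.0.2.0/24 via 10.0.0.1 dev wan0
# ...push everything else out via the ADSL by default...
ip route add default via 10.0.1.1 dev adsl0 metric 100
# ...and keep a higher-metric default via the leased line, so general internet
# traffic falls back to the 2Mb link if the ADSL goes down.
ip route add default via 10.0.0.1 dev wan0 metric 200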
Ah, that takes me back.
Our company bought a smaller company which had an office and another office+factory darn sarf. It was past Preston, so in the very deep south to us.
We wanted them to be using our ERP system (SCO OpenServer based, so that dates it), and installed 64k Kilostream lines to each site. Since our ERP was all text based on "green screen" terminals, it was more than adequate. As a side effect, this also linked the Netware 2 servers they had at each office. It wasn't an intended result, just one of those bonus features. Then the manager down there had the cheek to complain that it was slow accessing files from the other site!
The other thing the users got was access to our email server - which, given I was careful to "educate" anyone whose signature settings flipped their messages from plain text to rich text by default, worked well, mostly. One day our email "stopped" - we weren't getting inbound emails. Remember Demon Internet and their dial-up service? Yup, that's all we had available to us, over - you guessed it - dial-up ISDN, so 64k again. But Demon had a neat feature: you could send a finger request to the mail server and it would send back a list of queued emails.
It included a 10MByte email and lots of small emails we were waiting for. It also had another 10M email, and another. Users had got so used to emails arriving fairly quickly that when one didn't arrive they asked the sender to resend it, then again, and again. Some of us will remember just how fast transferring 10MBytes of anything was over 64k - for those who aren't used to such things, that's 8kBytes/second, or 480kBytes/minute, or 28.8MBytes/hour, but that's the raw line rate; add all the IP, TCP and SMTP overhead and you get down to perhaps half an hour per 10M email. Of course, as these were for a remote user, it would also take another half hour to get from our server to their client - hence why they declared it "not arrived". But at least I could delete the duplicates from our server once they'd got in and save clogging up the Kilostream line all day.
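For anyone who wants to check the arithmetic, a quick back-of-the-envelope in the shell - the quarter-ish overhead figure is just an assumption picked to land near the "perhaps half an hour" estimate:

# 64 kbit/s = 8 kBytes/s of raw line rate; lose roughly a quarter to IP, TCP
# and SMTP overhead and you're left with about 6 kBytes/s of useful throughput.
echo "64 / 8" | bc                      # 8 kBytes/s raw
echo "10 * 1024 / 6 / 60" | bc          # ~28 minutes per 10 MByte email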
Long ago I worked for an organisation which had three issues: a slow WAN, a junior techie who thought he knew everything, and ill-disciplined users (academics).
WAN was slow; sometimes stopped because junior tech "fixed" it. But I got it stable, and locked junior out of the router, which connected us to head office and through them to the world.
Wonderful - break for xmas. Come back to work to a flurry of messages from head office - why so much data use on xmas day? Quick investigation, look at IP numbers and I knew it was "Ulrich's" machine. What was going on? He decided that on xmas day the WAN would be all his, so he went in to the office and spent all xmas day downloading pr0n. I found 100s of gigabytes of it on server and workstation drives.
He was irreplaceable apparently - so he stayed. Under very strict conditions about hours of access.
Months later I discovered that junior tech who was told to clean up the server drives that Ulrich had used had made many copies of really bad stuff for his mates. He was not irreplaceable in the slightest.
A long while back, I had a user on my network doing office temp work over the summer while a student. He reckoned he was an IT-Whiz, which he may well have been, but he also reckoned he could get up to shenanigans on my network without me knowing about it.
One evening, two weeks into his job, I nuked the GBs of god-knows-I-don't-want-to-know that he'd accumulated off some dodgy P2P service. He'd put it somewhere he thought I wouldn't notice it. I also wiped the software off his PC (which he'd taken to leaving on overnight, I wonder why) and put in some extra, just-for-him, limitations on his account.
He never said a word, and neither did I. But he knew I knew.
The old school way to max out a link for testing/troubleshooting (back in the days when a bonded pair of T1s was hot stuff) was to TFTP a router binary image between your router and your upstream ISP's router. That method eliminated any chance of the LAN or a local workstation slowing things down.
I got some nifty software upgrades for our Cisco 2501 and 3640 routers that way, C&W's routers becoming a de facto proto-warez site.
Nonsense.
It was a C* downloading pR0n. Every time I have to break out a sniffer[0] to analyze slow traffic, it always turns out to be a C* downloading pR0n.
[0] Not a real antique Network General Sniffer, but a laptop with a pile of code I have built/adapted to my needs over the years.
IT guy in Gaul was routinely downloading copies of Linux binaries
I mean .. how many times does a person have to do this? Enough to cause a sustained traffic issue in the office for weeks?
You Linux people are obsessed with downloading it, putting it on a stick (with the other ones), and installing it. I get the impression once you've done that the fun's over and it's time to do the same again with a different flavour.
Reminds me of a company I contracted for.
Their internet was always 'slow'. Well, yeah, you've got three morons in Sales running file-sharing programs. Go have a talk with them.
The talks never happened and he kept complaining, so eventually I offered him a choice.
1. He could go deal with the morons upstairs.
2. He could stop fucking complaining to me and have the ISP upgrade their service.
3. I could show the CEO the pictures I took that time he misunderstood 'restarting a service' and unplugged everything in the network closet to fix his personal label printer.
He went with #1.
Years ago, I worked at a small main street office with apartments on the floor above. Our PFY rented one of the apartments, and as a perk, we ran a CAT5 cable to his apartment for free Internet access. One night, my pager went off complaining about access issues. MRTG showed our uplink was saturated. Did some digging and found a corresponding amount of traffic originating at the apartment.
I disabled the port, and followed up with the PFY the next morning. Found out that he had LimeWire or one of the other P2P programs and hadn't turned off the option to share his files (or throttle the outgoing data). Much apologizing and promises to fix it followed, so I re-enabled the port.
A couple of weeks later the uplink was again saturated, and again the culprit was the drop to the apartment. I applied my favorite bandwidth limiting tool: a pair of sidecutters.
I left the ends of the CAT5 dangling. PFY didn't ask me to reconnect. I don't remember if we ever resumed offering the freebie.
I am not a network admin, but for a brief and trying stretch I was a network admin's boss. A very powerful department at work was unhappy with the network performance, and eventually it turned out that a card at a network center some ways away was flaky. While the ISP was working its way from denial to acceptance, the network admin suggested that we acquire a packet shaper. We did, and marveled at the bandwidth consumed by an iTunes user we knew. And in those days, the Weather Bug was found on many PCs, and tended to be greedy. A word to the iTunes user helped, and we staggered on until the ISP figured it out.
Next time you know a card is flaky but the ISP won't deal with it, here's what you do. Buy an electrical extension cable. Cut the female end off and strip the wires back so you have bare conductors, making sure they can't ever touch each other. When you're by yourself, pull the offending card out, plug the extension cord in, and run the bare ends over the card pins until something goes poof. Unplug the extension cord, put the card back in, and place a service call. Make a small stink about how you've complained for weeks and it finally died. They will come out and replace the card and everything should be good.
Now for the important part. TELL NOBODY!!! Do not brag, own up, offer advice, or anything else that might tell someone you're intentionally destroying equipment. That's how people get caught. If you want to brag, wait a few years and send an On Call to The Register. Or wait till you're retired and no longer have to worry about being fired.
The telecom company will not look into it, they'll just send it back to the vendor for repairs or will chuck it into the trash, and the vendor won't do anything more than repair or discard UNLESS someone gives them a reason to look deeper. Looking into things costs money, you see
Haha I used to work for an Apple reseller, and we had a customer who had a mac that had an intermittent fault on the mainboard. When the part was sent off to Apple, they would send it back saying no fault found. Until I brought the Tesla coil in to work one day and fried it properly. No visible scorching or evidence, but it was enough to make the fault reproducible :)
I had several slowness issues at my first job in a college. I wasn't allowed to look at the switches; I had to wait for a "Server Man" from the other site to come and have a look.
Time and time again I heard "Oh lookee, no wonder, it's all set to 10Mb half duplex. It'll be OK now."
Gee, thanks. The folks struggling to get connected for weeks will be glad to hear that. Again.
Users in Singapore were demanding the provider of their 'business critical' SharePoint collaboration site stop throttling the service.
Turns out the whole of Singapore was affected by three of the five sub-sea cables serving Vietnam being damaged....
Any diagnostics performed? Nah, just point a finger and scream loudly....
So you're complaining about having to do your job? A user's only responsibility is to reboot their equipment before placing a call, but that never happens until you test clean through to the demarc or your equipment on site shows it to be their gear. Anything else is your responsibility to investigate.
I initially had the impression Brian (look on the bright side...) was in the wilds of somewhere northwest of Hy-Brasil, but the reference to Gaul probably places him in Albion (or Provincia Britannia).
Although "Arrêter de faire ça" ("stop doing that") is probably an anachronism, as I don't think the Gauls spoke a lot of French much before the Circus Maximus went down the Cloaca Maxima, as it were.
[Shudder] Actually reminded me of the bandwidth×latency product and avoiding TCP over long pipes when TCP window scaling was pretty new and commonly unimplemented. All pretty horrible really. :(
The acquiring company thought it was a brilliant idea to backhaul all the traffic from the west coast to the east coast. Turns out they had a variable speed link. PING's -l option sets the send buffer size, and I just might have run a few command windows on a bunch of machines, banging on the Internet gateway across the country. That successfully caused the speed to ratchet up and everyone's Internet got faster.
That was until the day I left the machines running overnight. IT noticed the load had not dropped in the off hours and the next morning I got called in to HR to explain.
HR, being non-techies, insisted I was running something malicious. To which I responded, "If PING is such a dangerous command, why is it loaded on all the computers by default?" They took that question to IT, and my disciplinary action changed to 'Well just don't do it again!'
IT did realize the users' frustration was real and turned up the settings on the WAN link.
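For the curious, banging on the gateway needs nothing more exotic than the stock ping client in a few command windows - a sketch with an invented hostname; on the Windows boxes of the day it was the -t and -l switches, the Linux equivalents being shown here:

# Keep the cross-country gateway busy with large ICMP payloads, repeated
# from several machines at once (hostname is hypothetical).
ping -s 1400 -i 0.2 gateway.example.com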
I'm an embedded type, but I have occasionally been ordered to support web types, and it never goes well.
Back in the early 2000s, when a customer's site was reduced to a crawl, "my" back end was blamed for an astronomical increase in data usage. The web team (a different vendor) didn't bother to ask me to look at it, they went directly to the customer and reported that "the crappy database" was causing problems.
Since (a) it had been running fine previously, and (b) I hadn't even logged into the site, let alone made any updates in over two weeks, I suspected otherwise. When I contacted the web vendor to talk about it, my contact there (a decent enough chap, for a web type) didn't answer. I was informed by the customer that he had in fact left the company, and that his duties were taken over by a new hire.
In a surprising coincidence, his replacement had started only a day or two before the performance problem was reported.
Imagine that.
So before digging into the server side database, I decided, on a whim, to look at the client side HTML for the site.
The old site had workable, but very inefficient code, of the form:
[a href="D:\Site\Images\20MB_Image.jpg" target="_blank"][img src="D:\Site\Images\8kb_Image_thumbnail.jpg" width="128" height="64" alt="Image Text" /][/a]
(pseudo-HTML so el Reg's editor will allow it)
The new web designer was aghast that every single image was duplicated, one being the image itself, and another being a thumbnail.
Why keep two copies, he reasoned, when all you have to do is set the "width" and "height" parameters? So, he recoded the thumbnail page accordingly:
[a href="D:\Site\Images\20MB_Image.jpg" target="_blank"][img src="D:\Site\Images\20MB_Image.jpg" width="128" height="64" alt="Image Text" /][/a]
This way, all of those silly "8kb_xxxxx_thumbnail.jpg" files could be deleted. And were.
This resulted in 1100 image files being reduced to 550, with the added benefit that there was no need to keep all those href and img tags in sync. Point both at a single file, and there was no chance of them getting out of sync!
What the web designer clearly didn't understand was that the raw "20MB" files were named that way because the images were, well, 20MB in size. That's why there was a downsized and compressed thumbnail that the previous web designer had tried (but not always succeeded) to keep under 8KB in size. The site had 500+ such images, split over something like 10 pages, so each page had 50 such thumbnails on average. That was about 400KB of data. That doesn't sound like much in 2024, but in 2002, when most people still had 56kb (or even 28kb) modems, downloading those thumbnails alone took 3-5 seconds. Downloading one of the 20MB files took more than 5 minutes.
The web developer seemed to think that the "width" and "height" parameters magically compressed the images, rather than merely telling the browser to downscale them. So his "optimization" resulted in 400KB of thumbnail data being replaced with something like 384MB (the raw pictures ranged in size from 500KB to 20MB). And, of course, since he tested it on a local network with a 100Mb connection rather than a 56k modem, performance was not an issue for him.
Since the web team had gone directly to the customer with the issue, it was only proper that I do the same. Fortunately for me, if not the web designers, the basic point of "the database didn't change, the web site did" was not lost on him.
Unsurprisingly, when the web designers' contract ended, I found myself working with a new web design company on the front end.
It still happens. On many of the forums I use, the avatar image is served at 18 inches wide and displayed one inch wide. I used to wonder why on earth the "recent posts" page took ages to display, until one of the 18-inch images happened to be the first on the page and was briefly displayed 18 inches wide before enough had been downloaded to display it one inch wide.
I keep trying to explain that people should upload their images at the size to be displayed, but it just doesn't get through. "Oh, it's fine on my 4G system with 900M wired data line with no line sharing 200m from the exchange."
I have a display consisting of three projectors side-by-side, letterboxed. The pixel size of the image required is 3,840 x 528 pixels and when external bodies need to provide me with images, that's what I tell them. On the rare occasion my instruction finds someone who vaguely knows what they are doing, I often get an image returned of exactly the correct aspect ratio but a vastly inflated pixel count. 16,000 x 2,200 is the sort of thing.
I think I've worked out what is happening. Users are putting 3,840 "pixels" into Photoshop (or whatever) and it is converting that - at 72ppi, a typical figure for screen resolution - to a canvas size of 53⅓". On exporting the image, Photoshop defaults to "print" rather than "screen" and converts 53⅓" at 300dpi to a file of 16,000 pixels.
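A quick sanity check of that suspected round trip - pure arithmetic, no Photoshop internals assumed:

# 3,840 px read as 72 ppi gives a canvas just over 53 inches wide...
echo "scale=2; 3840 / 72" | bc          # 53.33 inches
# ...which, exported again at 300 dpi, balloons back out to 16,000 px
# (and 528 px becomes 2,200 px by the same route).
echo "3840 * 300 / 72" | bc             # 16000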
Easily sorted, but a needless and wasteful practice.
M.
The latest is people using PNG images instead of JPG... about a 10:1 size ratio for images that look almost identical
And don't get me started on the idiot who discovered there was an animated GIF of the company logo and added it to his email signature... or the one in the shipping dept who modified that and added a ship, truck and plane that sailed/drove/flew across the bottom of the email at intervals...
Heh. I actually remember creating photo gallery pages like that, with thumbnails as clickable links to full-sized images! Back in the days of dial-up, these things mattered. Especially if you were an impoverished activist creating websites for other impoverished activists, in the vain hope of eventually getting noticed by someone who would be willing to offer actual money for typing a few more-than and less-than signs in the right places .....
If you put the thumbnails into a sub-folder of their own, with each one having the same filename within that folder as its full-sized counterpart in the parent, it was even just about manageable -- as long as you didn't find the concept of there being two different files with ostensibly the same name (but in different folders, such as gfx/DCIM0018.jpg and gfx/mini/DCIM0018.jpg) too jarring. And as my dial-up ISP of the time was offering shell access, I was able to do stuff like this:
$ for F in *jpg; do convert -resize 160x120 "$F" "mini/$F" && echo "[a href=\"gfx/$F\"][img src=\"gfx/mini/$F\"][/a]"; done
(with actual more-than and less-than signs instead of square brackets, obviously) and save a bunch of fart-arsing around with Paint Shop Pro.
Great days!
40 years ago when telephones were expensive, each office had one telephone per two people.
You were expected to put money in the honesty box to pay for your personal calls.
Every month a listing came round of the calls made from each phone. There was one number phoned nearly every day, for about 20 minutes. The person I shared an office with and I both swore it was not us (the calls were made while I was at lunch).
To resolve this, we phoned the number, and the xyz building society answered it. My colleague said "Oh hello mum - sorry... I misdialed"
He put a lot of money in the honesty box, and from then on, he did not phone his mother from our office.
> put money in the honesty box to pay for your personal calls.
In days of dial-up, CompuServe had many-many neighborhood modem pools so that it would usually be a local "free" call.
I needed to download half a CD's worth of files, for a personal need, and did not want to tie up my home phone, so I did it from my office. Because of extensive 'tie-lines' and other telephonic tricks, I had never seen a bill for my work phone.
There WAS a bill, but mine rarely amounted to a whole buck. This one raised a few eyebrows. I offered to pay the charges but the admin would not take my money, just told me never to do that again. (Part of what I was downloading was network related, which would eventually obviate the need to suck CI$ libraries all night, but too much to explain to a bookkeeper.)
A cleaner discovered that, while most plebs had international call barring on their phones, managers' phones did not. He started off simply, with calls to his family back in the mother country, but it ended up with him acting as a 'telephone exchange' for others, holding two handsets together to make the connection.
Around 2002: in a school of 10,000 computers, one of "my" users was cited as a Top Ten data consumer. Not a high-performance number-cruncher, just one little PS/2 PC in a small office. NetOps asked me "whazzup?" I said 'Bear' maintains computer labs, probably downloading updates. Yes, even unix updates. NetOps sent me a couple of URLs. I said "OMG!! Wow!!" I hadn't realized how naughty-pictures had improved in recent years. (Full color, full action, large frame was novel at the time.)
We noticed periodically slow WAN performance.
The provider repeatedly said there was no issue on their end, just that the link was saturated.
After a while I noticed there was a pattern to when it occurred and SCCM updates never seemed to arrive on my PC.
Turns out the team who set up SCCM had made a balls-up and SCCM was telling all the PCs to download from Windows Update....
Only stupid thing I've come across was three days where entire networks were as slow as a three-legged blind whippet; everything was just mentally slow. Turns out the all-too-arrogant network admin had taken a week off after he had been "traffic shaping"! He'd routed four offices over the internet backup routers instead of the high-speed leased lines. What's worse was that every six hours it was like wading through treacle as the offsite backups kicked in for two to three hours!
The CTO eventually got hold of the arrogant network bod at home and told him a taxi would be there in 15 mins to bring him to the office, where he would fix the problem. The fix took 15 mins and everything was back to normal; the arrogant network bod was then given a written warning and told that if any changes were ever made again without paperwork, his feet wouldn't touch the ground on his way out.
Lots of airlines sort out their own WAN links into airports they operate from, and for historical reasons (terminal-based applications, minimal web traffic) they used to be low-speed (512kbps-2Mbps typically) high-SLA connections, especially at outstations where airlines only ran a couple of flights a day.
Midway through last decade a major global carrier (who may or may not rhyme with "shittish airways") - who made the genius decision to sack most of their techies and offshore to the subcontinent's favourite TLA outsourcer - decided to replace their ancient but very stable airport application suite and back-end with something entirely new.
Now this programme as a whole made the news due to several high-profile outages that grounded the entire airline. But even outside of those widely reported issues, it was a complete disaster - the maestros of Mumbai had completely neglected to understand the limited bandwidth of the vast majority of the airline's outstations, and had allegedly tried to rectify terrible application performance by having local caches of reference data which updated on application launch.
IIRC, each instance of the application was initially trying to download 150MB+ of data every time it launched. So when an airline agent shows up to start processing passengers it could take upwards of 15 minutes before the application loaded. If multiple users launched the application at the same time, that number headed ever upwards. If someone launched another instance while passengers were being checked in, system performance fell through the floor, as there was no way of QoSing transactional traffic vs startup traffic.
It got so bad at the outstation I worked at that they had to roster a member of staff on an hour and a half before check-in was due to open, solely to launch the application at each check-in desk...
The inevitable initial response from Mumbai was that the connection speed should be increased. After having it pointed out that some of these telcos couldn't provide an upgrade for at least 6 months, and the cost of doing so was going to wipe out a healthy chunk of profit margin on routes to those outstations, said TLA got their arses in gear and managed to somewhat reduce the startup bandwidth load...
Yes, I have. To both. A number of times, surprisingly.
The most extreme example was when a VIP executive was nearly in tears because his laptop didn't work on site. It worked fine in the office, but at the customer site (several flight hours away), it just would not connect to the mothership database, which was necessary for his job function. This was in the early 1990s, before the consumer internet was a thing; we're talking about a high end IBM laptop with a top of the line (for the time) modem.
The IT department tested it, blessed it, and he went to field with it, only to have it fail to connect when in the onsite meeting with the customer. He came back, yelled at IT, they tested, again, decreed it good, again, and he went to the field, only to have it fail in front of the customer. Again.
So, he went back to IT and escalated to the top of the IT food chain. He explained that he had a last-chance meeting with the customer on Monday, and he was flying out on the weekend, so it *had* to be fixed by end of day (this was Friday). The head of IT declared that the problem was the hard drive, and that it would be replaced. He gave it to a subordinate, and the VIP went away.
At 3pm on Friday, he called IT and got dead silence. He called my boss, who told me to look at it. It turned out that the IT dweeb had simply put the laptop on a shelf with a sticky note that said "look at on Monday, first thing", and left early.
Basically, VIP was being hung out to dry.
So, I, uh, "liberated" the laptop (in violation of company policy) to look at it. The idea that it was a hard drive issue made NO sense, since everything worked in the office, just not on site. I could call the database no problem. Since I wasn't on site I couldn't test the failure directly, so I called one of our offices overseas and had them set up a phone redirect to the database in my office. I dialled that number, the call came back in from overseas - and sure enough, it failed.
I did some digging, and found the issue was the dialing prefix. Basically, it was using the local profile for everything. I set up a remote profile, tried again, and tested it successfully. I called VIP, explained it to him, showed him how to toggle profiles, and gave him my home number.
On Monday morning the IT dweeb noticed the missing laptop, reported it stolen, and filed a formal complaint against me. He reported me to the head of IT, who also filed a second complaint against both me and my manager.
When the VIP returned from his successful trip, with a signed multi-million dollar contract because he'd finally had a working laptop in the field, he was shocked to learn that the IT screwups who'd failed to fix his PC - twice - had formally complained about the guys who had actually fixed his laptop.
The VIP was the boss of the boss of the boss of the department head who was the boss of my VP *and* the IT department VP. As it turned out, one of our two departments was scheduled to be moved to the sub-basement in the coming re-org, and at the time, the odds were 70-30 that it would be my group. After this little escapade, the IT group was sent to the dungeon, instead of us.
I'd call that triumphant.
HR were moaning that e-mails in their inbox had "gone missing". The other engineer started looking at the logs in Exchange. During this the HR manager walked over saying "We're not sure what the issue is. I've checked and it's not my staff; my staff wouldn't just randomly delete e-mails". The other engineer carried on looking and then she found it. The HR manager had walked away by that point. The logs showed one of his staff members had not even read the e-mails, just deleted them. The 365 Exchange logs are quite useful for seeing that.
I suggested we point this out, but was overruled by our manager. He was a very good manager, but this was one time I disagreed with him; sadly we couldn't have the pleasure of telling the HR manager "No, it's not IT, Jane has been deleting them without even reading them, as per these logs". The other engineer agreed with me; she also said we should have pointed this out, as that HR department is not only shit, but Jane is fucking awful. Our manager said "We know what happened, we have to leave it at that". He knew very well fuck all would happen to Jane, so it was pointless pointing out her constant fuck ups. The other engineer also said "Jane has told me in the past 'I don't read the e-mails, I just delete them'".
I was an internet connection salesperson, handling SME accounts. The broadband packages sold were usually 20/50/100Mbps fibre. Got a call from an angry new customer demanding a refund and threatening to file a report with the authorities for false advertising. We don't make house calls, even for the signing of the contract.
But I happened to be nearby that morning and visited the customer. Shortly after I arrived, the whole office went out for lunch, leaving just the pissed-off manager and me. I started with my laptop connected by LAN to the ONT, and it showed >95% of the bandwidth signed up for. Then the router showed similar readings.
Naturally, with his permission, we went around the various desktops that weren't screen-locked. Somebody had been downloading 10+ torrents of full episodes of Korean drama in the background, and it had started just before 9 am.
Well. I mentioned that this was "illegal stuff downloaded on the corporate pipe. I am obligated to report this to the authorities and the ISP under the fair use policy". He offered to immediately withdraw the complaint in writing and send a good comment (which never came), etc etc.
Heheheheh. Not sure if anyone was taken to task for it.