I'm curious...
Exactly what crucial financial parameter needs to be read over GPIB? That's more about reading volts and amps from electronic test gear isn't it?
Many a Friday arrives with a feeling that the previous four days of toil occupied more than 96 hours, which is why The Register always marks the day with a new instalment of On Call, our reader-contributed tales of fun times delivering tech support. This week, meet a reader we will Regomize as "Rik" who shared the story of how …
You joke, but if they were trading on the electricity generation market, then measuring the mains frequency could actually be useful. It is the same across the whole grid, and corresponds directly to the difference between generated supply and consumption at that instant in time. Slightly high if there's too much generation, and slightly low if there's not enough.
Yes, it can vary, slightly. Even the biggest generators slow down a bit under heavier load until the control circuitry speeds them back up. The larger the generator, the higher the load difference has to be before the frequency varies.
Old electric clocks used to use synchronous motors, and many power plants still have devices meant to make sure the "average frequency difference from nominal" adds up to 0 over a long enough time period (generally 1 day), so the clocks don't lose or gain time.
Based on the difference between power generation and power usage, yes. The difference comes out of the kinetic energy of the spinning generators (a deficit slows them down, a surplus speeds them up).
There are control systems that try to keep them as close to 50Hz or 60Hz as possible, but in the short term you get wobbles.
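The maths behind that is simple enough to sketch. A toy model in Python - all constants here are purely illustrative assumptions, and real grid dynamics add governors, load damping and much else:

```python
# Toy "swing equation" sketch: frequency drifts with the
# generation/consumption imbalance, as described above.
NOMINAL_HZ = 50.0
INERTIA_S = 5.0   # assumed lumped inertia constant, seconds (illustrative)

def drift(freq_hz, gen_mw, load_mw, dt=0.1):
    """Surplus generation spins the machines up; a deficit slows them."""
    imbalance = (gen_mw - load_mw) / load_mw          # per-unit imbalance
    return freq_hz + NOMINAL_HZ * imbalance / (2 * INERTIA_S) * dt

freq = NOMINAL_HZ
for _ in range(10):                  # one second at 1% over-generation
    freq = drift(freq, 1010.0, 1000.0)
print(f"{freq:.3f} Hz")              # ~50.050 Hz: slightly high, as described

freq = NOMINAL_HZ
for _ in range(10):                  # one second at 1% under-generation
    freq = drift(freq, 990.0, 1000.0)
print(f"{freq:.3f} Hz")              # ~49.950 Hz: slightly low
```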
It's not a joke. The stock market is based on amplification of swings so measuring the power consumption of the traders is, in fact, an extremely accurate measure of "market volatility".
Now when we have AI trading for real on the stock exchange things might get better but at the moment it's just "follow the lemmings."
"Exactly what crucial financial parameter needs to be read over GPIB?"
Dunno. Probably something coming off the ticker. Could it be something as simple as the DOW, perhaps?
"That's more about reading volts and amps from electronic test gear isn't it?"
The GPIB, originally known as the HP-IB (Hewlett Packard Interface Bus), was invented to tie automated test equipment together. Later, it was bastardized into a peripheral bus for small computer systems. I've seen it used on everything from Mainframes to the Commodore PET.
When I saw GPIB (aka IEEE 488[1]) mentioned, I was sad that they hadn't found a PET sitting there - or even a C64 with the plug-in adapter!
Maybe another On Call will turn up such a delight.
[1] also aka some European standard, an IEC something? Could Google, but that is cheating when it is old farts remembering stuff!
A particular factory took delivery of peanuts. A sample was taken and poured into a hopper. This queued the peanuts to drop onto the end of an inclined conveyor belt. The belt had holes just the right size to hold individual peanuts. The belt tipped the peanuts into a chute, that dropped them onto an electronic weighing machine. This had a serial interface to a Commodore Pet. The Pet calculated the weight of each peanut and printed a weight distribution graph. It's probably still working, if they can get spares for the conveyor. It was never re-engineered as it was already perfect. And entertaining.
Wow - that takes me back. My first real job included writing a bunch of HP-IB drivers (including the base HPIB card driver) for different pieces of test equipment at the old (long since bulldozered) BAe/Sperry Gyroscope factory in Bracknell. It was all quite cool ... to test laser gyro Inertial Navigation Systems.
I seem to remember trying not to kill Michael Heseltine who was defence secretary at the time. For some reason the powers that be decided it was a good idea to seat him on our 3 axis test table and pretend that we were moving him. There were so many ways it could have gone wrong ... a large test table able to move in multiple directions at high speed, fairly high power lasers, all driven by software that was still a bit flakey. For safety, we decided it was better to unplug everything and just lie.
Dow Jones used some modified X.25 feed IIRC. All I remember about it is that the library used to convert it into something usable ran on Windows NT, which is the only time I've had to develop on the infernal platform. That particular project is also the source of some embarrassing memories which I think I'll keep to myself.
GPIB can be used for all sort of things - as its name implies, it is a General Purpose Interface Bus.
Printers, plotters, disk drives, laboratory instruments, machine control pretty much anything.
For this application I suspect they were using it to read real time sensor data from a wind vane and anemometer.
Many of their financial decisions being critically dependent upon which way the wind is blowing and how strong.
A computerised version of "holding up a wet finger".
It was reading the water level in an octopus tank, measuring how excited the beastie was.
If an octopus can pick football teams it can manage shares easily[1].
[1] I'm pretty sure there were experiments done with using a monkey and, separately, an octopus to pick shares, which demonstrated that they were actually better than the average human trader, but trying to search on "octopus share picking" gets nowhere useful these days! Can we make it a rule that, when picking a stupid name for your company, you can't use names of real things?
(Icon - sort of looks like an octopus, hiding two of its arms so you can't see which stock names it is typing in)
You see that sort of thing quite a lot, for a couple of reasons.
First, no reputable trader without immense backing - the resources of a financial trading house - would suggest they can outperform random selections in the short term. Second, disreputable traders - the kind who just want to take commissions on trading your money, not caring about your profits, and will use popular news articles to recruit marks - know that they are in a situation where the downside of losing all the imaginary money in the test is no downside at all, because it can be laughed off, whereas the upside (if they 'win') is a big boost to their marketing efforts, so will take relatively high-risk, potentially lucrative positions rather than spreading risk like they should. If they do come out ahead, they get a popular news article 'showing' how good they are.
What things are called doesn't really matter except to remind someone of an already understood conceptual and functional breakdown.
Lacking a conceptual and functional breakdown within the understanding of all parties involved, they're just triggering buzzwords used to avoid Discovery of relevant specifics of the situation.
Ethics are just social trends given false distinction for the convenience of social manipulation.
They change over the years. And across differing mindsets. Their foundation for existence is shared imagination and belief without anything physically tangible to give them authority.
Form of fast talk.
Don't name your new product ${ExistingProductName}${GenericWordUsedInIT}, you bastards. It's not that you can't find it using Google/Bing Search, but they both return hits for all of your other products that contain those two words, you bastards.
Also, as a hint to every company out there, make sure you search the slang dictionaries in all of your markets' languages before you announce a new product line.
In the mid 80s I worked on a system, running on a Compaq PC (512K RAM and a 30 MB hard disk), that monitored the water intake from a river. One of the inputs was a "fish monitor", which had a tank (more like a cold water tank in a loft) and output a 2-bit value:
OK
Dead
Hyperactive
Avoiding
I have no idea how it was supposed to work, we just read it.
Or, now I come to think of it, whose job it was to feed the fish.
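Reading a 2-bit status like that is about as simple as interfacing gets. A minimal sketch of the decode - though which bit pattern mapped to which state is anyone's guess at this point, so the ordering below is an assumption:

```python
# Hypothetical decode of that 2-bit fish monitor output. The real
# bit-to-state mapping is long lost; this ordering is invented.
FISH_STATES = {
    0b00: "OK",
    0b01: "Dead",
    0b10: "Hyperactive",
    0b11: "Avoiding",
}

def fish_status(raw_byte):
    """Mask off everything but the low two bits and name the state."""
    return FISH_STATES[raw_byte & 0b11]

print(fish_status(0b10))   # -> "Hyperactive"
```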
Many years ago, I did some documentation for a program used by a financial services company. One of their employees, long since gone, had got a copy of Borland C++ Builder and cobbled together a 'proof of concept' which then turned into a £13M per year business line. He had a lot of domain knowledge but no software skills.
There are a lot of *really* nasty things you can do in C++, if you know what you are doing. Thankfully, his only major sin was a function which was 4000 lines long. Not particularly complicated, just long. It took me three weeks to work out what was going on & then document it.
In the end, we produced documentation which no one read but ticked the box for the regulator's audit.
his only major sin was a function which was 4000 lines long.
I remember looking at C code from a supplier and finding a 1500-line switch{} statement.
Scary, but I have to admit that the code was really rock solid, we made a lot of money from it over the years.
We have quite a large amount of code that is technically perfect:
Short functions
Lambda expressions
<80 character lines
No statement exceeds five lines
Packaged into reusable libraries
It’s a nightmare. Atrocious. Terrible. Unmaintainable. It does mystery stuff that you need to step through, dozens of layers deep, to figure out what it’s doing.
Please, just for once, give me a 15 line function instead of 8 x 2-line functions and properties.
> Packaged into reusable libraries
"Reusable", for a function you only need once. And if you only need it once you don't need to function it. Main cause of functionitis.
> Short functions
causes functionitis too, a known sickness among programmers to pack two lines of code into a five line function just to have a function and save a line in another function.
> <80 character lines
Outdated. Today 160 chars is somewhat the minimum norm. Or is anybody still coding in 640x350?
> No statement exceeds five lines
causes functionitis too. Which results in code which is several times larger than needed.
> It’s a nightmare. Atrocious. Terrible. Unmaintainable.
You forgot to mention recursions where iterative methods would be better. I see that more often than you think, but don't change them unless they bloat.
What is functionitis? Watch this small refactoring example.
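An invented Python illustration: in the "before" version you have to hop through four one-liners to follow three lines of logic; the "after" version is one short function you can read top to bottom. Both behave identically.

```python
# Functionitis, before: every couple of lines gets its own function.
def _strip(s): return s.strip()
def _lower(s): return s.lower()
def _split(s): return s.split(",")
def _tidy(xs): return [x.strip() for x in xs if x.strip()]

def parse_tags_fragmented(raw):
    return sorted(set(_tidy(_split(_lower(_strip(raw))))))

# After the cure: one readable function, no hopping about.
def parse_tags(raw):
    """Normalise a comma-separated tag string into sorted unique tags."""
    tags = raw.strip().lower().split(",")
    return sorted({t.strip() for t in tags if t.strip()})

print(parse_tags("  Foo, bar,,foo "))   # -> ['bar', 'foo']
```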
Completely. There is an art to it, sometimes it makes sense to break a chunk of code into function blocks just so each thematic block can be managed separately, which can aid development and testing. However, converting every single block into a function is moronic at best. If nothing else the code will be less efficient (the usual "efficiency doesn't matter" idiots will interject here, but efficient code should be a goal, not something that other people do by buying faster hardware) thanks to all the unnecessary function calls and the compiled boilerplate needed to manage them. Also, if it's just an algorithmic block that's not used elsewhere, what benefit is there in moving it out to somewhere else? Does a single object really need to consist of 200 private functions just because some "trendy" developer thinks that this is useful? Not really.
To add... brainlessly refactoring code so it no longer handles expected errors and instead just kicks exceptions up the exception handler tree is one of the most useless "modern" implementations that failing/failed developers make. There are quasi-religious wars about this, but exceptions were never designed to just kick unhandled nonsense up the call stack; they were designed to handle exceptional conditions. The key is in the name: "exceptional". Regular failures are to be expected and handled gracefully; irregular failures are exceptions to this and can legitimately be cascaded up the call stack.
Moronic developers who do no error handling and instead rely on exceptions are why a user input of "X" into a field expecting a number (0-9) causes an exception and therefore a debug screen for the end user. That's not useful for anyone, is often not logged or just discarded (after all, the application executes faster if nothing is ever logged), and is why users get frustrated: rather than a clear, definitive reason why something isn't working, they might, if they are lucky, get a message popping up saying "an error has happened" and have to work out for themselves what may possibly have gone wrong. For example, recklessly including spaces in a credit card number because that's how they are presented on the physical card and it makes it easier to type them out more accurately.
There's nothing wrong with refactoring and using exceptions to deal with errors in user input or other incoming data. What is wrong is not putting enough information in the exception for the higher level code to produce a meaningful message. And, let's face it, you can get that sort of behaviour easily enough without needing to use exceptions.
And as for the alternative approach: your actual code ends up indented 5 levels, and the caller has to deal with magic return values that indicate there's been a problem. Which they don't - they just ignore them and assume all has worked fine.
I much prefer exceptions thank you.
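A minimal sketch of the position being argued, with all names invented: expected bad input is handled as a regular case, and anything that does raise carries enough context for a meaningful message rather than a bare stack trace.

```python
# Invented example: treat expected input problems gracefully, and give
# any exception that escapes enough context to explain itself.
class ValidationError(Exception):
    """Carries a user-presentable field and reason, not just a trace."""
    def __init__(self, field, reason):
        self.field, self.reason = field, reason
        super().__init__(f"{field}: {reason}")

def parse_card_number(raw):
    digits = raw.replace(" ", "")   # spaces are expected, not an error
    if not digits.isdigit():
        raise ValidationError("card number", "must contain only digits")
    if len(digits) not in (13, 15, 16):
        raise ValidationError("card number", f"unexpected length {len(digits)}")
    return digits

try:
    print(parse_card_number("4111 1111 1111 1111"))   # fine: spaces stripped
    parse_card_number("41x1")                          # raises, with a reason
except ValidationError as e:
    print(f"Please check the {e.field}: {e.reason}")
```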
It could be worse…
Macro abuse.
Better still, nested macro abuse!
Trumps function abuse any time, not only has it left me scarred but I’m still acquiring new scars from a particularly enthusiastic practitioner of it right now in spite of the fact that he left the company almost a decade ago…
C++ is not to blame; bad programming is independent of the language used.
I have seen some incredibly bad examples in all languages including PASCAL, JAVA, ALGOL and OLD (decades old) COBOL. One of the problems is it might have started as good code, but over time various developers of different skill levels have patched it, and because they may not have understood the original code they end up logically commenting out large blocks of it.
Little things like variable names. I worked on one suite of code where one developer loved using a single letter followed by a number. The worst combination was l1 (lowercase L) versus I1 (uppercase i). Why, I have no idea! It may have been that the original font made these look different.
That box is easier to tick than you'd think. One time, one of my customers had to get some kind of certification from a third-party. That included the user manual for my software. I mistakenly sent them the manual of a different program I make. Nobody noticed. Not the users, not the customer, not the certificator, not the inspector. I only noticed it a couple years later, because I was in their folders troubleshooting something, and the version number I had put in the PDF title looked weird to me.
Other boxes are harder to tick. Another time, I made a bit of software for a device that had to be certified as a whole, hardware, software and all. They had only contacted me extremely late in the cycle, so I had to work my ass off to make the certification deadline. And they failed it because the box was white, while the documentation said it was black.
> his only major sin was a function which was 4000 lines long. Not particularly complicated, just long
In the early '80s, my first job, my first project, I joined a team who had been writing an interpreter for a home-grown Expert System language, in C for the IBM PC[1]. The team had managed to score an early XT, with its massive hard drive, shipped from the US and running off a big old 240V to 110V transformer, safely kept in its cardboard box, plus five dual-floppy units. The compiler and one copy of the full sources were on the XT and after editing on floppy we each took it in turn to copy our changes onto The XT to compile and link.
Anyway, the original authors moved on to pastures new and eventually, after a happy time creating a new text-based UI, I was asked to take a look at the run-time interpreter and make an adjustment to it. This bit of code did all you would expect an XPS language to do, bar the actual parsing of the (simple) text form into an internal binary format: lots of lovely linked lists to represent all the clauses, probability weightings, variables and their associated text prompts. The run-time walked this structure, determining the most useful datum to ask for at each stage, prompting the user for their input and recalculating all the probabilities based upon their answers, backtracking as required and building up another linked list as a trace of progress, so the user can see why he is being asked this question and so on and so forth.
You know exactly where this is going.
The interpreter turned out to be one single function. Oh, it would call into some subroutines to do car()/cdr() on the lists, print out text and read the user's reply, but all the meaty stuff was done inside one HUGE 'while' loop, with nested loops of all kinds, breaks in interesting places and switch statements as required.
I honestly have no idea how many lines it was, but in order to figure it out, I took a printout on fanfold and spread it across all the desks I could (working late to grab a fourth desk, IIRC, when that chap sensibly went home), armed with a yard-long ruler, a pencil and a packet of multicoloured pens. After drawing boxes around the various loops, to highlight how they nested, I went back over it to colour the boxes based upon what that code was doing (e.g. binding arguments into parameters when a clause was triggered, unbinding same when that was backtracked over, cascading updates to outcome probabilities, all the usual!).
The modifications were made (I have no recollection now of *what* they were, although I must have understood it at the time) and that printout was covered in many, many scribbles and glorious crossings out as similarly coloured blocks were gathered into their own modules and that function shrank and shrank into something manageable. Thankfully, without breaking anything - well, not *too* badly. Although I never chucked that printout, until I left that job, *just* in case.
[1] wonder if this is now unique enough to identify me and if anyone recalls working with the chap who had a superb head of hair and gorgeously coloured beard! Ah, memories.
We wanted to build a mobile data interface to a big, national and highly secure mainframe system. It worked through terminals or (for the cool kids) a PC with a terminal emulator. Not very mobile. So we built a server that could run multiple terminal emulator sessions but render the results as html. This was fed to the mobile units via an encrypted browser session with session cookies etc. We ran the idea past the lords of the mainframe and they said yes. Turns out they had been looking for a way to let their data fly free for years. Basically, each certified user appeared to log in from a terminal at work and all the previous security protocols applied. But I dearly hope this thing has long been replaced. It proved it could be done securely, but I do hope the lords took the whole interface thing to their hearts and moved it into the data centre and under proper control.
Many years ago, before the LHC, a certain particle lab in Switzerland/France had a machine called the LEP... this friend wrote monitoring & display software (and possibly designed the monitoring hardware, knowing him) for certain machine characteristics. This information was displayed in the control room on big screens. It still is.
/me waves at him if he is reading this, possible but unlikely.
FORTRAN or C? And no update to the graphics API?
Since LEP was physically dismantled to make way for LHC, the hardware side has definitely been replaced. (Unless of course it was actually monitoring the Booster, the PS or the SPS.)
Anyway that was but yesterday, LEP started up as recently as 1989.
LEP was an interesting machine, because it proved that the answer was 3 (not 42) - three families of neutrinos.
I remember seeing one bit of equipment in CERN for which no details of the monitoring API were available. Monitoring from the control lab was via a video camera aimed at the LCD screen on the box, displayed in a window on the management workstation.
Access control system - unusual manufacturer (South Africa based company), heavy investment on it on site.
Had the need for a firelist. By this time, every member of staff is using the system to tag-in, tag-out, and it's being (somewhat) misused on occasion to prove time and attendance.
We look at the official fire module. It costs a fortune, takes forever to run, has to be manually triggered, and produces print output. Useless to us, especially in a fire.
So I realise that underlying it is an antique Firebird database (which for those who don't know is a bit like SQLite in that it just logs to ordinary filesystem files that you can query with just ordinary file locking).
So I write some SQL and I write some monitor scripts, and it basically watches out for the fire alerts (which do trigger a table on the system), builds the list of everyone on-site, and then sends it to a thermal receipt printer that churns out the whole list in seconds.
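The rough shape of such a watcher, as a sketch only - table and column names are invented, SQLite stands in for the Firebird driver, and the real scripts drove a receipt printer rather than a file-like object:

```python
# Sketch: poll for unhandled fire alerts, work out who is still tagged
# in, and churn the list out immediately. All schema names are invented.
import sqlite3
import time

def on_site(conn):
    """Latest event per person today; keep those whose last event was a tag-in."""
    return conn.execute("""
        SELECT e.name, e.event_time AS last_seen
        FROM access_events e
        JOIN (SELECT name, MAX(event_time) AS t
              FROM access_events
              WHERE DATE(event_time) = DATE('now')
              GROUP BY name) latest
          ON latest.name = e.name AND latest.t = e.event_time
        WHERE e.direction = 'in'
    """).fetchall()

def watch(conn, printer, poll_s=2):
    handled = set()
    while True:
        for (alert_id,) in conn.execute(
                "SELECT id FROM fire_alerts WHERE handled = 0"):
            if alert_id not in handled:
                handled.add(alert_id)
                for name, last_seen in on_site(conn):
                    printer.write(f"{name}  (last seen {last_seen})\n")
        time.sleep(poll_s)
```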
It passes initial testing, solves our problem (which wasn't a CRITICAL problem, but it's certainly very useful to know that Jim actually tagged out of the site and so is unlikely to be burning to death in the building).
As with everything - feature creep sets in.
Within a year, the script is running 24/7, the printout separates people by area and creates perforations on the receipt so that each area can be torn off and given to the person with responsibility for that area to check they have everyone, the output includes a "last seen" time for when people tagged in once in the morning but haven't been seen since and cause confusion over whether they actually are in today or not, it stores the logs plus emails the output to a distribution list, lockdown functionality is added, and there's a second identical redundant system set up at the other end of the site to facilitate quick access to it from there, as well as a backup if one fails. The schedule for replacing the paper is incorporated into consumable replacements, etc. etc. And with some tightening, the first people know of a fire alert is actually the printer being half-way through a receipt printout because it actually outperforms the alerts from the system itself (so audio alarms happen AFTER printing starts!).
The system is now so integrated in processes that it basically is the fire rollcall system, and the suppliers who fit our access control try to buy it off me - because their customers are all asking for "this thing I heard about that this other customer of yours has in place".
Then I leave. All dues to the guy who took over from me, he can keep it running. But he's told them a thousand times that when it stops working, it's dead, simple as that. He has no interest in maintaining or supporting it (and I can quite understand why!). They go back to the company who tell them the price of the official firelist module - still got all the same problems (do you want to wait for a laser printer to warm up to print 20+ pages of A4 while the building is burning? Or would you rather grab a till-receipt from a machine that actually BEATS THE FIRE ALARM in churning it out), and it has tripled in price.
Also, they now need to "convert to the web-based version" which means replacing half the controllers and losing all such access to make your own reports (and quite a few other tweaks we used to do as well). So a firelist from the system is now basically impossible, unless you pay for a module that emails it in a single standard format (a big list of names in an A4 PDF) on its own schedule (cloud, remember) and no customisation whatsoever.
To my knowledge, it's still churning along and a vital part of the system. It was about 2-3 days of collective coding, plus two cheap receipt printers off Amazon. Oh, and there are plans to move it to a Raspberry Pi to keep the old desktops it ran on going. Hope that there are no architecture incompatibilities in my code!
Agree a receipt printer is probably the best solution: they can be run off batteries, being small they don't take up much desktop/wall space, and whilst they could have multiple uses they are unlikely to be used for other purposes, so the fire list doesn't get queue-blocked because that rarely used printer in reception is currently printing someone's magnum opus of a report.
Indeed, I have a bluetooth, battery, portable one in the car for my own purposes.
But these ones were hardwired USB, cheap, simple, fast, plain-text, standard, and can print out hundreds of receipts before they need paper changing.
And not long after I built the system, I bought a box of receipt rolls off eBay so they have enough receipts to last them 10 years (since I made the system), and another 10 years on top.
I'm with you. Even a cheap thermal label printer has basically zero print time from when it's given its data, so I'm hardly surprised that it's a few pages in by the time the fire alarm has been told there is a fire in zone 1 and then sets the sounders in zone 1 off. A laser printer will still be warming up by the time you've done two yards of continuous thermal paper, plus inevitably people will nick the A4 out of a laser printer instead of going to get some from wherever it's stored. Nobody is going to nick a roll of continuous thermal paper.
Excellent job btw. ;)
The latency on a thermal printer is really good. The only thing I think could possibly keep up is a line dot matrix printer? They go very fast and I believe have negligible start up time. https://youtu.be/KnPBWru2Ecg
I think thermal printers are almost definitely going to be the winner here because they are mass manufactured and hence cheap. If you had throughput problems keeping up with the amount of sheets you need to print, you could just get several thermal printers and send different sheets to each one in parallel.
My IBM 1403 does about 23 pages (~1400 lines) of 11X14 (132 columns) per minute. Can crank up to over 6 feet per second if the printout contains a lot of blank lines. It can empty a box of fan-fold faster than you can kill the print process. That's a joke (kinda), the print buffer is only 140 characters (of core), so killing the process, if you can, stops the printer pretty quickly.
I don't have much use for it these days, mostly banner printing and old Fortran/COBOL code. I'm going to keep it around anyway, just for the shock value. The silly thing is LOUD at full-chat.
The system is now so integrated in processes that it basically is the fire rollcall system, and the suppliers who fit our access control try to buy it off me - because their customers are all asking for "this thing I heard about that this other customer of yours has in place".
If I were your replacement and your company, I'd have a nice chat with that supplier about getting a contract. They can have what you developed (for a nice, ongoing fee) and if they do a good job supporting and developing it further (meeting and/or exceeding your use case) the use fee for the supplier will be waived and your company would get to use the system, including support from the supplier.
I'm thinking Lee D's company must be in the UK? Because I work for a biggish US employer with close ties to health and public safety, and when we have a fire, there's no list taking. It's just a blistering alarm, some flashing lights, and everyone to the far end of the parking lot to look at each other and be reminded of high school.
A point regarding Firebird, at the filesystem level it operates in the same way as other SQL databases such as Microsoft SQL Server. Access to the data in a Firebird database is through an SQL interpreter, just like MS-SQL.
Were you thinking of the likes of dBase which had a file per table (plus indexes and lock files)? An SQL interpreter could be overlaid on top of these as well, but file access was implemented through co-operative locking of the files.
You still use the home page?
It's easier to use the "Latest News" page. Just scroll down to wherever the last "read" link is, then start scrolling back up looking for and middle-clicking anything of interest till you reach the top. That way you get all sections, properly marked, in chronological order :-)
At my former job we had a nagios monitoring system, and one guy had written some perl script in 2003 so it could send an SMS alert to the site manager's phone in the event of overheating, as the A/C was flaky and underpowered. Later a 2nd A/C was fitted and all was good, as either could cope with the heat on its own, except maybe peak summer heat on the old one alone. All quiet on the alert front for many years.
That facility was shut down in 2019, and the site manager and I resurrected the facility at another site a year or two later rather than scrapping it. Eventually fired up the old nagios server and got maybe half of the equipment running to check stuff out. Then came a hot day, and he got texted!
So script still worked, the bulk SMS company had not buggered their API in 20 years, and somehow, somewhere, there was still enough funds in an account to pay for it!
Later the account was found and updated, and the perl script also bug-fixed to limit any very long messages (as API rejected them) but good to see a job that lasted!
I think my flaky server room temperature monitor failed a while ago - I really should check it and get it working again!
It involved an old laptop with a cheap USB temperature probe, exporting to a file which could then be read and checked by a scheduled job - not exactly elegant but it's saved us a few times and only cost £10.
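For anyone wanting the same on the cheap, the scheduled-job half really is only a few lines. A sketch, with the path, file format and threshold all invented stand-ins for whatever the probe's own software writes:

```python
# Minimal check: read the newest probe reading from a log file and
# complain if the server room is too warm.
from pathlib import Path

LOG = Path("/var/log/tempmon/readings.csv")   # assumed "celsius,timestamp" lines
LIMIT_C = 30.0

last_line = LOG.read_text().strip().splitlines()[-1]
temp_c = float(last_line.split(",")[0])
if temp_c > LIMIT_C:
    print(f"ALERT: server room at {temp_c:.1f} C")   # the real job sent email
```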
I'm still having to maintain code that I wrote anything up to seventeen years ago, and some of it is very much of the "why did I do that?" variety. It's still preferable to maintaining code written by other people who are no longer around, though, and at least I've always known what comments are for.
I recently had to fix a bug introduced by my former boss in 2001. His coding style was definitely not friendly to future developers, involving "do while true" blocks with exit statements in them used for flow control, comments that only made sense in the context of other comments elsewhere, and methods containing several thousand lines of code. Modularisation and encapsulation were definitely foreign words to him...
Anon, as this may reveal enough about my identity to those who might be watching to turn around and tell me to get back to work.
> The one that gets me is when you find and fix a long-standing bug in some code and wonder how in hell it managed to keep going in its original state for so many years!
You have to tread carefully around those sorts of bugs, just in case it turns out to be a Schroedinbug - and you have just observed that, unfixed, it can't possibly work. Which collapses its wave function and, through spooky action at a distance, every running copy of that program will suddenly stop working!
I once saw a bug that had gone un-noticed for 40 years.
It was in code reporting sales on a financial product, and basically if it went more than three business days without any sales, it crashed and dropped all transactions for the current year. It just took 40 years for the product to lose popularity, and when it happened, it got debugged by one of our handful of near/post retirement COBOL guys - who'd been trained thirty-five years prior by the author of the bugged code.
I can't claim credit for this, but a friend can. I spent a week once at a holiday camp. On site, there was a dusty old monitor hanging from the ceiling displaying some slightly useful information - the queue length at the swimming pool or something like that. It was just a text window in what looked very much like Windows 3.11. A few weeks later I told my friend who had worked there for a summer job whilst at school 20 years earlier. They said they probably wrote that programme. Part of me wanted to believe that it had been running continuously ever since.
I made a powerpoint schematic of our system in the first weeks I was working here as I kept running into having to communicate things with clients and not having an easy diagram/visual. Over 10 years later variations of that single slide are still used everywhere both in my own company and that of the client that I was communicating with. Nobody has bothered coming up with something more professional looking because invariably they were less easily readable.
I'm writing a documentation wiki and I abandoned all the previous documentation except for reference.
The fibre maps are literal scans of scrappy pencil scribblings over the top of an ancient map (which was made for another purpose).
I took the best vector map I could find, tore it apart with Inkscape, rebuilt it (with the doors where they REALLY are, and things like that), named everything properly, grouped it into individual buildings and produced a bunch of SVG maps - one for each floor, one for each building, whole-site overviews, etc.
Then I took the whole-site overviews, made it a fixed layer at the back of an SVG with its opacity turned down, and started to overlay CCTV, access control, networking, etc. over the top, one file per system. Every time I find another cupboard that the map says doesn't exist, or another doorway that's just entirely wrong and was bricked up decades ago, I redo the building map, copy it into the overview map, then update the overview map layers in Inkscape on all the others as and when I need to.
Already I've had marketing and the site departments ask for copies of it, because it's the only vaguely-accurate map they have seen. Hell, it's being used to show parking on the visitor sign-in system.
I am now an expert at manipulating SVG with Inkscape, putting them into the documentation, and solving problems with the original mapping and SVG file (P.S. if you want to publish an SVG on a website... remove all clippaths from the XML... you can just delete the tags. Then cleanup the file by adjusting the nodes of lines rather than using clippaths... you can use Inkscape CLIPPING just fine, but purge all clippaths from the SVG... you'll thank me later when it actually renders properly in anything using rSVG, including things like Chrome and most WIki and image-library software).
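The purge itself can be scripted with nothing but the standard library. A minimal sketch - work on a copy, since real-world SVGs can also hide clip-path references inside style attributes, which this doesn't touch:

```python
# Strip <clipPath> definitions and clip-path attributes from an SVG so
# it renders cleanly in rSVG-based viewers, as described above.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

def purge_clippaths(in_path, out_path):
    tree = ET.parse(in_path)
    root = tree.getroot()
    for parent in list(root.iter()):           # remove clipPath elements
        for child in list(parent):
            if child.tag == f"{{{SVG_NS}}}clipPath":
                parent.remove(child)
    for el in root.iter():                     # drop clip-path attributes
        el.attrib.pop("clip-path", None)
    tree.write(out_path, xml_declaration=True, encoding="utf-8")

# Example usage (hypothetical filenames):
purge_clippaths("site-overview.svg", "site-overview-clean.svg")
```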
So much so that I diagrammed out my home solar install in the same fashion, and made something so good that I'm currently looking for a frame to put it in.
First hand experience of why the plans for all the services under our roads shall be labelled “for guidance only”.
The wry laugh I’ve had is that whilst my house is as per the plan and located at its correct real world geolocation (confirmed by OS surveyors), the road isn’t (think 12 feet), resulting in several boundary disputes between neighbours of the houses between mine and the road, as the builders simply juggled things (eg. detached houses are now semi detached) so the right number of houses were built, but no one updated the plans which went to the Land Registry. Personally, I had little time for my neighbours, as if they had looked at the boundary plan included in their legal purchase agreement and compared it to the real world, the differences were obvious…
One of the boundaries of our land is explicitly marked as "uncertain" by the Land Registry. Once I cleared out the huge pile of rubbish, we seem to have about six feet more land than guessed when we bought the place.
I should probably say hi to the people on the other side of that fence, except it's not entirely clear which house it is. The bindweed seems to have rather taken over.
I have encountered similar discrepancies in both of the houses I have owned. The solution to the first was to get all four interested parties, my neighbour, me, and our two solicitors, together on site and physically measure the position of the boundaries involved. This process held up the sale of my property and the purchase of the next from January to May.
Once having successfully purchased the second property, I went to repair a broken fence, only to find that the measured distance between the two houses was less than that shown on my plans (and also those of the neighbouring property) by about a foot. The two plots overlapped on the ground but were adjacent on the plans. Again, the only solution was to get all four parties, including two different solicitors from the previous time, together and physically measure the distances. These revised figures then had to be reported back to the Land Registry before I was given permission to rebuild the fence in the right place.
Mrs Diogenes' aunt used to own a house in San Francisco. When she went to sell, the neighbour raised a boundary dispute, hoping to blackmail her into a quick settlement as they had a buyer, and the boundary dispute was a massive spanner in the works. One characteristic of San Francisco is that boundaries move due to tremors and movements and survey pegs no longer match their real locations, so the neighbour thought she was on a winner. The aunt and her buyer were in no rush, so the aunt decided to fight it. A year later the neighbour ended up losing and being responsible for paying the 250k in survey and legal fees.
When I started in my current job (just over a quarter of a century ago) I collated the relevant bits of the various PowerPoint presentations that got chucked my way as part of my training into something a bit more cohesive (and a little less repetitive).
Then people saw it and wanted copies, and at last check it seems to be the standard training material for at least one of our platforms, used across 4 continents...
> Rik .. was also asked to look at "the platform hosting a real-time graph used to inform the shift traders." The graph wasn't critical, but sometimes revealed market intelligence that proved extremely profitable and impactful.
Doesn't surprise me, IT is still considered just as important as the catering /s
If you are reading this, you're probably using the hack that I put together in 4.1BSD (now called 4.1aBSD) for part of the TCP/IP stack, to be included in 4.2BSD[0]. It was supposed to be one of those "Just get us through the demo, dammit!" hacks. I got 'er done in a couple days over Christmas/NewYears break in 1981. Virtually every version of TCP/IP since has used it. Not too bad for a quick hack.
[0] Just to cut the usual pack of idiots putting words into my mouth off at the socks, no, I didn't write the whole stack. That's why I said "part of". It is only about 120 lines of C in total.
> “It was supposed to be one of those "Just get us through the demo, dammit!" hacks.”
This specifically and the comments on this article amuse me, having just read the comments about Excel on the article https://www.theregister.com/2023/10/12/excel_anesthetist_recruitment_blunder/
The current code in the one greenhouse that hasn't been updated to AtMega 328 yet is still running software that I started writing about 50 years ago, and finished about 40 years ago[0]. I think I got my money's worth out of the original Z80 & S-100 bus ... that system is now being emulated on a headless Slackware laptop. The code is pretty ugly, especially the very early stuff. Looks like it was written by a kid ... but it works.
[0] Yes. Software that is finished. Really. Running untouched, with no troubles, for around 40 years. It can be done.
Sounds awesome… but also illustrates how broken modern development is. How often do you see things described as “not under active development”? Oh, then we should fork it and rewrite it in Rust and add an Electron front end and an email client!
Or maybe it hasn’t been updated in six years because it’s well-written and complete.
How broken the expectations of modern software users are.
I get fed up with coming across libraries where people are calling it "abandonware" and warning people not to use it, because it must be broken and a security risk - solely on the grounds that it has not been changed in a while. But there are no outstanding bug reports in the repo, it is written in plain and simple C of the style that every extant (compliant) compiler can eat and it Just Works!
One that sticks in my mind was for the Expat XML parser, which was in just that state at the time. And it is not as if Expat is without any tests and is only used by half a dozen people who are no good at reporting bugs!
Somewhat related, the bandwagon jump to IJG JPEG library, version 7 "because it must be better than v6.1b". No. No, it really wasn't, ditto for v8 (Tom Lane's code was solid, then the project was effectively hijacked by another person, who ran JpegClub and had some weird ideas about what should go into IJG - a story for another comment, if you don't know it).
Many years ago, I spent an afternoon writing a program in Visual BASIC for DOS (a horrible programming language, by comparison with the BBC BASIC I had learned, but it turned on a light in my mind as to why so many people seemed to hate on BASIC) to automate a task that would have taken about half an hour to do by hand.
And even although I would have to do that job rather more than eight times in my career, I still got a bollocking for taking too long.
But my program worked well. Very well. I shew it to my colleagues, and they started using it. They were still using it after I left the company; and for all I know, they might still be using it, if they are still using the same awful proprietary software they used to use, and if it hasn't changed its data format too much in the meantime.
I believe a major reputable organisation was still running some code a colleague and I wrote in a hurry one night on a server room floor right up to 2019; we wrote the code in 1995. It was interfacing a data stream from RS485 and sending it to an AS400. I think we revised it once in that time, when the data stream was moved from the RS485 network to ethernet.
During its lifecycle it was P2V'd (physical to virtual) and continued to run, in Windows 2000 in its last incarnation, 24/7/52.
It was I think mostly VB6...
I have some tools and stuff I wrote in the mid 90s. Some needed a bit of tweaking for newer machines, others worked fine.
And I'm glad I comment and use sane variable names, as I found a bug in something a few years back and decided to rummage around in the code to see if I could patch it, hoping it wasn't too messy or mangled. First thing I see is a familiar looking header and... my copyright message. Nearly sprayed tea across the room. I wrote this? When?
Got the bug fixed, but I swear to any deity you care to name that I have zero recollection of writing it in the first place. Granted, it probably only took a weekend, but still. I can offer various quotes from "My So-Called Life" (contemporary with the program) so my memory does still (mostly) function...
A couple of years ago, at my previous employer, our customer service manager called me to come look at a problem she was having with some software. She shows me the problem in this software, and I ask bluntly "what the heck is this?". I was about to give her some grief over running unknown/unsupported software on our company network when she told me "you should know, you wrote it".
For the life of me, I didn't remember writing it. I went back to my office, and started looking through a folder with a bunch of really old projects, and sure enough, I found the source code for it. I didn't recognize it at all, even after looking through all of the source.
I've had that experience with a touch of the other comment below, too.
Looking at some old code in a project, I came across something that worked, but could use a touch of modernization, which I began. In the course of doing that I came to a part that was pretty arcane and found myself muttering things like "Who the hell wrote this, anyway?" and "This guy knew his stuff, but his comments are awful".
Needless to say, at some point I came across a copyright block identifying my younger self as the author. I apparently knew things back then which would probably fox me today, and I was also much worse at writing code. :P
1991 - some DOS software to control antennas. Still working and occasionally maintained*, but now running inside dosemu on Linux.
[*] - from time to time some obscure bug is found; the most recent was that it could not cope with '#' in names, as that was stripped out of the config file as a start-of-line comment, but until a few years back nobody had tried a name with that symbol in it.
If we are counting library routines then there is some code from the early 1990s that is still part of "the random collection of stuff that gets used everywhere" and is in at least a couple of programs that were contracted to still be running for a few more years (I've left that job, so they *could* have ripped it out of all the client installs the moment I vanished... but that would be so costly I'm sure someone would have let me know, just to spread the blame!).
2012ish, I think. Baby's first ETL tool that deals with the fact that my employer at the time agreed to take data from half the UK mortgage lending industry - but - did not mandate a standard format for said data.
I have older code that still works, but it's for managing assets in EVE Online and I quit playing that a good while back!
My oldest published Speccy game would be 1986. Very much in use via emulation. Lots of Google hits for it, all I'll tell you is it's the one with the XR3 in it.
OMG, WAIT! I just found a playable copy of a 1983 ZX81 game I wrote. 40 YEARS!!!!
In my current gig the oldest code from my fair hand is 17 years old, and the system probably has another 3-5 years of life left in it despite being scheduled for demolition, I mean upgrade, for the last 10 or so. It's a bit like Trigger's Broom, having been through quite a few iterations.
hm. I wrote a checking app that fit my usage peculiarities for my Palm III in 1999 or so, and it's been ported to Nokia N770 and now my Android phone. I now have an unbroken register of all my financial transactions in a SQLite database back to said 1999.
Does that count?
For code I wrote for companies, no idea. I retired a decade ago. However, I have a convention registration system that I wrote in 2002 (to run on a Dual Opteron 240 running SuSE 9.2) that I'm still maintaining and using for con reg. Now running on a Pi4B-4GB running PiOS. I'll probably update the hardware next year. Not because there's any problem with the current hardware, but some of the features and add-ons for the Pi5 will make for a neater package.
I suspect the authors of the software still in use by GE Canada can trump everyone here: "Nuke plants to rely on PDP-11 code UNTIL 2050!"
There is tons of PDP-11 code out there driving all kinds of equipment. It's still in use because it works.
Besides, that's not old, it's just 1970s technology. I've got card decks with code I wrote in the '60s that I still use occasionally ... and I've got a couple system test decks that my Father wrote in the early 1950s that have been used occasionally this century. (Iron from that era that actually works is about as rare as hen's teeth ... ).
1991 - only decommissioned in 2018.
A real quick and dirty "it will only need to be used for 6 months" job. It generated scripts to be used with an exchange (as in telephone exchange) terminal. A year later I had to change it so that it would run properly in protected mode, and discovered lots of null pointers, and a really interesting buffer corruption problem that took months to find (had to replace a macro call with a new line of code), the hunting down of which exposed lots of small memory leaks.
I designed a proof-of-concept system to graphically display the running status of various machines in a print works complex. It was done entirely in Pygame.
I/O was handled by a bunch of (original 2nd gen.) Arduinos communicating with the computer over USB serial. I don't know if it's still the case but older Arduinos could freeze for no apparent reason, so the printer port on the computer was used to force a reboot if one stopped communicating.
The factory people liked it so much they wanted no further development and paid my employer for it as it was. It was still running some 10 years later when I retired.
I wrote a couple official unofficial apps once upon a time.
Was working in a department of 6 people total, only 2.5 (one part-timer) of us handling the day-to-day workload of supporting a couple hundred engineers and chemists at a life sciences company, processing SAP change requests that could range anywhere from a few dozen objects to well over a hundred, per change request. Part of the process involved sending reports to other departments about what was changing. Naturally, each department cared about different things, which involved a lot of copying and pasting between spreadsheets into emails.
Naturally, the company just expected us to be able to do this all manually. I was the most recent full headcount, and from what I gather, it was an effort of a couple years to get approval for that, so any request for proper reporting tools was basically met with hysterical laughter, and if we asked the IT department for an estimate for coming up with a solution they came back with some ridiculous 6-figure quote. The systems were so locked down that you couldn't really install anything.
So, I started cobbling together some makeshift solutions using VBA. Started off with a real basic macro for Outlook that'd ask yes/no questions and create a simple email template based on your answers. Eventually, I added more and more functionality to it. Then I started hitting limits of what you can do with VBA in Outlook thanks to the era of the email worm in the 90s, so I shifted over to Excel.
Long story short, it dawned on me one day that I had basically recreated a very simple database, in Excel, using VBA. The company had a site license for Office, but you had to specifically request Access be installed, which is why I initially went with Excel. Eventually, I just forced everyone to download Access where I could dump all but a couple dozen lines of VBA code to create emails and attach copies of reports.
Only very occasionally still talk to any of my coworkers from that time, but so far as I know, that official unofficial app was still in use at least a couple years after I left. The process may have changed since then, and the only other person in the department who had any hope of reverse engineering what I did isn't there any longer.
At my previous employer we had a database view a previous longtime employee had written to allow looking up accounts in a test database, it was to help testers find accounts easily for whatever test case they had to run.
It was a good view, but performance got very poor over time as test environments grew larger, and it was a no-go on the anonymised copy of production.
So I set about rewriting it as a process that could rebuild a custom table mapping out test accounts. What started out as a 1-day hack turned into a 4-week side project.
Inevitable feature creep kicked in and more flags were added, then I also put in some automated steps to allocate accounts to different teams, with configurable rules others could manage, so people don't trip over using the same data during test cycles.
Performance was phenomenal: 2-3 mins to rebuild and mark up data in a regular test environment and less than 4 hours to summarise a production sized environment. And I bet those timings got quicker when it moved onto newer, faster hardware.
That process is a young un compared to other examples here. I handed that code over to other colleagues to maintain when I switched teams, but they still used it for the 5 years I was still there, and it's been a year since I left. But the code works so well it could be used for at least another 5-10 years.
It's not hard to put in extra rules for flagging accounts, and there were plenty of custom columns added for retrieving additional data in future.
Or maybe they scrapped it as soon as I left and haven't told me!
Around then I was contracting for BT City Business Products, writing code to copy data from a single Reuters feed to multiple terminals across a trading floor.
As it was in C and used sockets I wouldn't be surprised if it was still in use today, in some form.
I always wondered if it was strictly legal, copyright-wise.....
Honestly is anyone surprised by any of this?
Remember that little thing called the Y2K bug that was only fixed because companies had no choice?
The two things that come to mind:
1) There is no such thing as a temporary fix, only a permanent fix that's replaced by another permanent fix.
2) Proof of concepts are final solutions until they need to be expanded with more features.
"Remember that little thing called the Y2K bug that was only fixed because companies had no choice?"
It was only a bug in the minds of management.
In the 2 years leading up to 2000, I got paid an awful lot of money re-certifying stuff that I had already certified to be Y2K compliant some 10-20 years earlier. Same for the embedded guys & gals. By the time 2000 came around, most of the hard work was close to a decade in the past ... the re-certification was pure management bullshit, so they could be seen as doing something ... anything! ... useful during the beginning of the dot-bomb bubble.
Look for similar bullshit/misdirection during the end of the first UNIX epoch in 2038 ... despite the fact that all of the important systems that would be affected either already have been, or can easily be, modified, making the so-called "problem" non-existent.
May be true of some orgs, but I spent most of 1998 working at a finance org fixing 1970s code with 2-digit years that was breaking every day. Along with 70 contractors, a software house in India and an automated tool scanning the code. There were a lot of dates to fix.
AC because they wouldn't want to be identified.
Surprisingly, though, 32-bit systems could continue to be a big issue here.
I remember reading a few years back how stuff had been put in the Linux kernel in the late 1990s to handle clock rollover (i.e. attempt to handle clock rollover in 2038)... then when it was reviewed 5 or so years ago, it turned out these fixes were totally ineffective. (This is at a low level: as the clock got within that last moment before rollover, the CPU scheduler, timer-driven stuff, etc. would all crap out - like "run this a half second from now", where "a half second from now" would never arrive because the clock rolled back over to 0.)
Having some kernel system calls return 64-bit times instead of 32-bit? And glibc support for userspace to use this? Only put in around the 2018-2020 timeframe. I was a bit surprised by this one; LFS ("Large File Support") was put in a long time ago, allowing access to >2GB files on 32-bit systems by having file system calls with 64-bit size types. I just assumed they did the same for time back then.
I don't expect my desktop/notebook computers to crap out. But I won't be surprised if there isn't all sorts of stuff with Raspberry Pis, 32-bit microcontrollers, etc., that doesn't just freak right out and at least have to be power cycled when the rollover time comes around. I'm thinking also DSL modems, cable modems, wireless access points, a lot of that stuff still uses wheezy old (but fine for what it's doing...) 32-bit CPUs. And often ridiculously out of date kernels ("BSP" -- "Board Support Package", where the vendor takes some specific kernel version and fully customizes it to run on the board's hardware, so even if you felt up to compiling an up-to-date kernel for your embedded kit, you don't really have the option to without huge amounts of work porting patches etc., if it's possible at all.)
Surprisingly, though, 32-bit systems could continue to be a big issue here.
One might have imagined that there is no way a 32-bit OS would still be used by then, but for some current hardware it will be the same OS in 25 years' time. Also there are oddities, like code I have in dosemu. It needs interrupts, but they are only passed through on the 32-bit version, as it uses the VM86 instruction, which is not available in 64-bit mode. I checked my code, DOS and 64-bit, and it was fine to 2050, but in reality either I need to find an interrupt-passing fix for dosemu on 64-bit by then, undertake a massive re-write to be Linux native (for negligible other advantages, but the disadvantage of having to fix problems caused by the latest systemd or GNOME stupidity), or have been so far retired I won't care.
When I moved in to my flat, 25 years seemed an awfully long period of time, but that has whooshed by, and I might still have to be working by then if I want to afford to live :(
2038 isn't when UNIX 32-bit time rolls over - that (unsigned) rollover isn't until 2106. 2038 is when it overflows a signed 32-bit integer and becomes negative. Some code which checks for differences between clock values will still cope, but plenty of other code is expected to have issues, ranging from displaying dates before 1970 to failing completely.
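The wrap is easy to demonstrate by forcing a timestamp through a signed 32-bit integer - a Python sketch that does the epoch arithmetic by hand to sidestep platform time_t quirks:

```python
# 2038 vs 2106: what happens one second after the signed 32-bit maximum,
# and where the unsigned interpretation finally dies.
import struct
from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)
T_MAX = 2**31 - 1                                   # signed 32-bit max

print(EPOCH + timedelta(seconds=T_MAX))             # 2038-01-19 03:14:07
# One second later, reinterpreted as a signed 32-bit value:
wrapped = struct.unpack("<i", struct.pack("<I", T_MAX + 1))[0]
print(wrapped)                                      # -2147483648
print(EPOCH + timedelta(seconds=wrapped))           # 1901-12-13 20:45:52
print(EPOCH + timedelta(seconds=2**32 - 1))         # 2106-02-07 06:28:15 (unsigned)
```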
I am *currently* supporting a system that will fail, and fail hard, if it's still in place in the 2030s, is the absolute #1 moneymaker for the company, and has been for the 20 years it's been running. There's a project going to replace it. The third iteration of the project, actually, with the first over 10 years ago. They keep getting delayed by far-upper management. It's looking like it's going to be delayed at least another year...
(Anon for obvious reasons!)
There's a system I wrote components for in the early 1980s. Then in the 1990s I helped write the specs for its replacement. Recently I heard things that suggest that maybe, just maybe, rather than being a complete rewrite, some of those earlier components might have been ported to that newer system.
Also recently, I hear that a massive contract is being awarded for replacement of that later system. I'm awfully tempted to put some very specific canary terms relating to my 1980s work into my LinkedIn profile, just to see if anyone bites.
AC because I couldn't possibly comment...
That's not how you modernise an old solution running on ancient hardware. As an intern?
You find somebody who can do it outside the company. You agree on a price, get them to submit for the job, split the proceeds, then both of you disappear so there are no comebacks.
You might have to change your name.
Who knows what mysterious parameter was being monitored?
I'm reminded of one of Asimov's Multivac stories. The three controllers get together after Multivac "won the war" to compare notes. One admits that the incoming data was so bad he fudged it a lot, even adding random numbers sometimes. The second says that he had his suspicions about the programming, so he would adjust the output to fix it according to his thoughts. The third, being the top guy, admits that he himself had many doubts about the whole thing and occasionally resorted to the most ancient of computers.
"Heads or tails, gentlemen?"
A place I was at in the early 00s gave everyone MS Access access. They were allowed to write little apps for their departments as well as doing their day job. So one person did, for the crem. It was still being used when I left in 2017.
The issue, which they never learnt from: there was no documentation for any of it. The person wasn't trained and had their own work to do, so nothing was documented. They had long since left, so the in-house app devs had to maintain it. They were all getting close to retirement, and they were good, but they didn't fully understand it.
Then they bought in a new "low code" system that they said would "allow each department to make their own apps". It was overly complicated and shit. And again, each department had their own work to do, so yet again people were creating apps with no documentation.
I'm long gone, but the same cycle will happen all over again.
Re low code (and no code)
“ Low-code or no-code are methods of designing and developing apps using intuitive drag and drop tools that reduce or eliminate the need for traditional developers who write code.”
[ https://www.sap.com/uk/products/technology-platform/low-code/what-is-low-code-no-code.html - source chosen because of the various comments made about ERP systems and SAP specifically. ]
Wasn’t part of the sales pitch for VisiCalc, 1-2-3, Multiplan etc., and more recently Access, Excel and Word macros/VBA, their no-code and/or low-code abilities?
Additionally, you could say the entire home computer and IBM PC market was based on little or no engagement with traditional IT, hence even tools like Turbo Pascal fall into the low-code category…
Whenever I’ve encountered low-code/no-code as part of an IT project (it’s quite common in workflow systems and business-rule-driven applications in general), I have tended to functionally box it and insist the relevant business team/department assign the role of business-rule maintainer (whatever title you wish to give it) to member(s) of their team. So, for example, IT becomes responsible for the platform software, the user department for their processes, etc.
My boring story is of the login sniffer program I wrote in college in the 90s. During my computer course we were learning Pascal. I noticed people would turn the PC on, not change to the network drive in DOS, and would type login at the C: prompt. So my version of login.exe, sat in the root of C:, would grab the user's name and password and save them to assignment.doc, because people often left their assignments in the root of C:
I'd come back later and grab the assignment.doc
I was amazed it worked, as I was never good at programming. I hadn't worked out how to stop the password displaying as they typed it. But most people didn't spot that, as they'd be looking at the keyboard rather than the screen. I still remember one of the passwords now, decades later: masterofpuppets
It worked: I was logged in as that user. I smiled and logged out again. I warned my friends it was just a test, that it was not to be abused, and that if they got caught with it they knew nothing and it was their problem. Thankfully I was bunking off sick the day they did get caught with it.
It all kicked off. It highlighted the poor security the IT team were running. But one of the "friends" had also, very weirdly, been creating cartoons with a cartoon app on the PCs, taking the piss out of the lecturers (he was that sort of guy).
So all 3 of them got suspended, and they were called in for interviews one by one. 2 of them got expelled and one got just a 2-week suspension. They said they were questioned about the login.exe program: what a great piece of coding it was, and who wrote it? They must know, as it was in Pascal. They denied all knowledge, thankfully. And I pointed out that "the great bit of coding" line was just an attempt to flatter them into confessing. It wasn't that great. Why? I'd got it from the help file in Pascal: how to read text from the screen and save it to a file :) which we hadn't been taught. We all got called into the hall, and a warning went out about the program found on the college computers and the 2 students who had been expelled, and that it could have been a police matter. I kept my head down after that. But if their security hadn't been so shit, it would never have happened.
Eventually we all moved to Win95, so it was no longer relevant. However, my cousin was at Leeds uni training to be a doctor. He said they had the same DOS login: could he have it? I said sure. During that time his brother and I were sending each other a floppy disk each month. One month his floppy had a hacker magazine on it; it might have been 2600. In it was a very simple encryption routine in Pascal. My coding is/was shit and so is my maths, but it was really simple, so I used it for the login.exe.
All it did was encrypt the assignment.doc, so if IT found it, it would look scrambled even if opened in Word or WordPerfect, unlike my earlier version, where it was obvious what the assignment.doc was for. It just took, for example, the letter A that you'd typed, subtracted 50 (or whatever number I used) from its ASCII value, and wrote that back to assignment.doc. My separate decrypt program just reversed this.
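For anyone curious, the scheme described amounts to a Caesar-style byte shift. A minimal sketch in C rather than the original Pascal (the offset of 50 is the half-remembered value from the story, so treat it as a guess):

    /* Toy "encryption" of the kind described: shift every byte by a fixed
     * offset so the captured file looks like garbage in Word or WordPerfect.
     * Trivially reversible: run with any argument to decrypt. */
    #include <stdio.h>

    #define SHIFT 50 /* half-remembered offset from the story */

    int main(int argc, char **argv) {
        int decrypt = argc > 1; /* any argument means "reverse the shift" */
        int c;
        while ((c = getchar()) != EOF)
            putchar((unsigned char)(decrypt ? c - SHIFT : c + SHIFT)); /* wraps mod 256 */
        return 0;
    }

Something like "shift < assignment.doc > scrambled.doc" would scramble, and "shift d < scrambled.doc" would get it back; the unsigned-char wrap makes the two passes exact inverses.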
Never found out if he ever used it at Leeds uni or not. It was the late 90s.
You may hear I went on to be a leet programmer. Nope, I'm still shit. I got bored of programming and went on to be an IT engineer instead. Now I wish I'd stuck with the programming; I could have made big bucks converting from COBOL, as we were also being taught that.
I found my CD from those days with some of my college work and old floppy contents on it. Sadly not the login.exe program or code. I did, however, find the lottery program I'd written in Visual Basic, and the Pascal version. It had a bug in the installer that I fixed 19 years later. God knows why I hadn't fixed it back then; it was a really simple fix. Odd.
In Win95, you just hit "Cancel" at the login prompt, and it let you in. I'm told that in XP, the login screen was essentially a different desktop, and that a specific person in IT (who told me these things) had a thumbdrive with a program that would autorun and switch windows, neatly bypassing the login prompt if the computer was locked.
Win7 and Win10 seem to be taking security at least a little bit more seriously.
Once upon a time at a little-known aircraft manufacturer, the shop floor techs were having trouble retrieving the correct versions of the engineering documents needed to perform certain work. Getting access involved looking up the correct document version applicable to the particular airplane on one of several different indexes (depending on airplane ID, system name and model), each of which was hosted in a different database (anything from a CSV file to an Oracle database to various IBM products), then taking this document ID and searching for it on any of a number of different brands of server (Netware, UNIX or other flavors), all without fat-fingering the document ID into a search box. The situation resulted in much consternation on the part of the techs performing the work (not necessarily IT-savvy types) as well as the FAA, who had concerns about the proper performance of the process as well as the numerous unofficial cheat sheets people used to navigate the system.
Having a boss who encouraged us engineers to spend a little time investigating new technologies, I had been messing around with this new thing called "The Web". In a few weeks of spare time, I had managed to cobble together a web interface to all of the various indexes and document repositories, plus all the rules needed to consistently fetch the proper document version. At a periodic meeting with shop floor management, my boss showed them my interface and its ease of use: just enter the airplane ID and you were presented with a list of systems (power, flight controls, galley, etc.). Click on the desired one (no more fat-fingers) and the proper document popped up in the browser. Management's response:
"We want this in production in two weeks."
Fortunately, all of the development had been done on my Linux desktop, which ported quite easily to a Sun system (NCSA HTTPd).
"That code was a proof of concept that ran on the student's desktop. Nonetheless, traders had seen this work, profited from it, and begun to rely on it."
I have been in this industry for a very long time and I have seen this so many times (including with my own shit code). In many cases the original source is long gone and there is no documentation.
The shit hits the fan when:
1/ The only H/W that runs the code dies.
2/ The OS the 'code' runs on is no longer supported.
3/ The only person who knows how to use the stuff retires or leaves (normally retires).
My small company used to run an ISP business in an Asian country. That business eventually had to be shut down around 2009, as local telcos took the market, first with DSL and then with fiber, killing off independent players.
Fast forward to Sept 2023... we were re-cabling our small server room and removing old non-running routers, servers and various cables. As I removed an old non-running Cisco router from the bottom of the last rack, I saw a grey appliance under it, which turned out to be an old PortMaster. Though unplugged from the network, it was still plugged into the power socket and, yep, turned on! It had been running 14 years longer than it should have!