Re: Surprise v. reality
> In the old days, the code had to be tight, efficient and it was thoroughly tested, often by hand (dry-runs), because actually running the code on a mini or mainframe cost a lot of money. That, combined with limited resources, meant that the code was, generally, fairly well written and well tested.
Not entirely sure I'd agree with that.
I mean, I agree to some degree. There's the old article from Joel Spolsky about how Netscape screwed the pooch by deciding to rewrite their codebase, failing to realise that much of the "cruft" was actually there for a good reason...
https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/
But beyond that...
In the first instance, the era of "running code on a mini or mainframe" was arguably back in the 70s and 80s. But when we got to the 90s - which is over thirty years ago, don't forget[*] - code was being written on Pentium desktops and rolled out to Dell, HP, SunOS and Irix servers. There was even this upstart called Linux, though you had to download a lot of floppies to get it running on your home PC...
And that's the era this code comes from. Disk space was no longer as much of an issue - at least for source code files - but memory and CPU time were still expensive, and with the original dotcom bubble there were a lot of people flooding into the industry who were perhaps more focused on the financial rewards than on coding excellence.
(I doubt that's the case with this stuff, but there's a reason the term "code monkey" was invented...)
Beyond that, old code can be efficient and well tested - after all, it's had years for all the edge cases to be found and fixed. On the other hand, it's far more likely to look like "checksummed line noise" (to quote an old description of Perl code), far less likely to be commented or documented, more likely to rely on implicit or obtuse features of the language - and prone to taking shortcuts in the name of performance.
E.g. see the old Story of Mel, in which the eponymous programmer hand-crafted incredible code that took maximum advantage of the physical hardware, but which was virtually impossible to maintain.
http://www.catb.org/jargon/html/story-of-mel.html
Beyond that, we're in a very different world these days, even without considering the many security concerns. Data payloads are bigger (I've recently been having fun running out of memory while decoding JSON files that are several hundred megabytes in size; sadly, the "streaming" JSON decoders I've found and tested are several orders of magnitude slower than the standard system library), and we now need to deal with multi-byte UTF-8 characters and the like rather than assuming that everything is plain seven-bit ASCII. Etc etc etc.
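(For the curious, the trade-off looks roughly like this in Python - a minimal sketch, assuming a top-level JSON array, the third-party ijson library on the streaming side, and a made-up filename and process() helper:)

    import json

    import ijson  # third-party streaming parser: "pip install ijson"

    def process(record):
        """Hypothetical placeholder for whatever you do with each record."""
        pass

    # Stdlib approach: the decoder is fast, but the whole document and
    # the resulting object tree have to fit in memory at the same time.
    with open("huge.json", "rb") as f:
        for record in json.load(f):
            process(record)

    # Streaming approach: memory use stays roughly constant, because
    # records are yielded one at a time as the file is parsed - but the
    # event-driven machinery is typically far slower per byte.
    with open("huge.json", "rb") as f:
        for record in ijson.items(f, "item"):
            process(record)

Same result either way; you're just choosing which resource to burn.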
[*] As much as I'd like to believe that it's actually only around 2008 or so...