Never worked on an Amiga, but I thought the "Guru Meditation" error messages had a lot more style than the bland "segmentation fault" messages that drive so many of our students to distraction.
New year, new bug – rivalry between devs led to a deep-code disaster
Welcome, gentle reader, and rejoice, for with the new year comes a new instalment of Who, Me? in which Reg readers recount tales of tech trouble for your edification. This particular Monday, meet a programmer we'll Regomize as "Jack," who had a rival we'll call "Irving." The two of them were programmers at a small concern that …
COMMENTS
-
-
Tuesday 9th January 2024 06:37 GMT David 132
Part of the Amiga quirkiness. Early models had various B52s song titles etched onto the motherboards - Rock Lobster and so on - because the designers were fans of that band.
The Guru Meditation was essentially the equivalent of a Windows BSOD - once you saw it, that was it, game over, nothing you could do other than reboot - but yes, it was arguably more stylish.
And due to the lack of hardware memory protection on the Amiga, you'd see the Guru quite frequently.
Later versions of the Kickstart ROM introduced the yellow Guru Meditation, which in theory was recoverable, but in my experience - nah. Just a different tint to accompany your reboot :)
-
Monday 8th January 2024 19:46 GMT David 132
Re: Jack and Irving
But did you also spot the blooper in the article, which erroneously describes the Amiga 2000 as having a 68020?
Probably the author meant A1200. The B2000 had, as standard, the same 68000 as the 1500, 500 and 1000 before it, thanks to Commodore’s glacial pace of hardware development.
-
Tuesday 9th January 2024 21:06 GMT Anonymous Coward
Mea culpa
"But did you also spot the blooper in the article, which erroneously describes the Amiga 2000 as having a 68020?"
I'm the original submitter.
You're right - I misremembered that detail. It must have been machines with A2620 or A2630 accelerator cards, presumably including A2500s, that were able to run the application - but not vanilla A2000s.
In my defence, I haven't touched an Amiga in over 30 years - I had to look up the above details on Wikipedia. Still, I'm guilty, in a very small way, of the same error "Jack" was guilty of - failing to verify assumptions. *sigh*
-
Monday 8th January 2024 09:32 GMT Bebu
The real lesson...
Don't (re)write C etc. code in assembly. While you are smarter than the compiler, the compiler is a plodder (like Irving :) and usually doesn't make alignment and other foolish mistakes. Get a better compiler, or pass the suboptimal assembler code emitted by the existing compiler through your own optimisation stage.
Compilers back then did sfa* optimisation so even adding a peep-hole optimiser was a win.
*classical technical term rendered by Terry Pratchett as adamus con flabello+ dulci [Jingo]
+Sir Terry was a gentleman.
-
Monday 8th January 2024 09:43 GMT MiguelC
Re: The real lesson...
There are times that the difference in performance matters.
In my graphic computation project in uni (30 years ago....God, I'm old!), I used some embedded assembler code in my Visual C++ project to load a polygon vector file. While most of my colleagues' code took several minutes to load the largest files, mine did it in a couple of seconds - it loaded the first file so quickly the teacher first thought my code just didn't work - he only realised it had done its thing when I told him to press the menu button to show the 3D object, and all worked :)
-
Monday 8th January 2024 14:33 GMT heyrick
Re: The real lesson...
C is better in that there's an awful lot of crap that you can dispense with.
You don't need to keep the stack balanced. You don't need to waste time wondering about the best way to optimise branches. You don't need to keep track of registers and, if using named definitions, make sure you're not using the same one for different reasons at the same time. You don't need to worry about stack frames so crash traces make sense. You don't...
Maybe in the '80s and early '90s with slow processors assembler was the best choice.
But these days? Let the compiler deal with all that crap, it's what it's there for.
-
Tuesday 9th January 2024 10:23 GMT Anonymous Coward
Re: The real lesson...
It's impossible for C to be better than assembler for any specific case. When you run C through the compiler, the output IS assembler. So if a human was skilled enough (extremely unlikely now) they could produce assembly code that's just as good. (Or indeed if you're skilled enough with your butterfly - xkcd 378)
HOWEVER
The overarching benefit of C (or any other high level language) is that it's processor agnostic. Assembly needs to be written for each target architecture, whereas in C the compiler deals with that (mostly)
-
Monday 8th January 2024 21:15 GMT david 12
Re: The real lesson...
Any compliant C compiler will emit crap arithmetic code because the standard demands it: 32-bit arithmetic emits a 32-bit number. If you don't see the problem with that, it's not your area.
And modern C compilers are almost all pretty crap with 8-bit processors: everything gets promoted to 16 bits. That's not a problem an optimising compiler can fix.
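For anyone who hasn't run into it, the promotion being complained about is C's integer promotion rule: operands narrower than int are widened to int before any arithmetic happens. A minimal, self-contained illustration (plain standard C, nothing compiler- or Amiga-specific):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 200, b = 100;

    /* Both operands are promoted to int (at least 16 bits wide) before the
       addition, so the intermediate result is 300, not (200 + 100) mod 256. */
    if (a + b > 255)
        puts("a + b was computed in int: the result is 300");

    /* Only the assignment back into a uint8_t truncates. On an 8-bit target
       the compiler has to behave as if the wider maths happened, unless it
       can prove the narrow result would be identical. */
    uint8_t c = (uint8_t)(a + b);
    printf("c = %u\n", c);   /* prints 44 */

    return 0;
}
```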
-
Tuesday 9th January 2024 04:33 GMT An_Old_Dog
Re: The real lesson...
1. If your program is too slow, don't tweak it with calls to assembler code. Get a better algorithm! (If it's I/O bound, well, you're stuck. Or are you?)
2. Never replace C (or any other higher-level language) with assembler before you've profiled your code. Your "hot spots" rarely are where you think they are (a rough timing sketch follows below).
3a. What sort of lousy assembler lets you accidentally begin assembling an instruction on an illegal address?
3b. If you don't know how to intentionally create an instruction on an illegal address, you should not be professionally programming in assembly language.
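On point 2, profiling doesn't have to mean a fancy tool; even a crude timing harness will usually show that the hot spot isn't where intuition says it is. A rough sketch in portable C - the two "stages" here are made-up stand-ins, not anyone's real code:

```c
#include <stdio.h>
#include <time.h>

/* Two made-up stages of a program; in practice these would be the real
   functions you suspect (or don't suspect) of being slow. */
static void parse_input(void)   { for (volatile long i = 0; i <  5000000; i++) ; }
static void render_output(void) { for (volatile long i = 0; i < 50000000; i++) ; }

/* Time a single call using the standard clock() function. */
static double seconds(void (*fn)(void))
{
    clock_t start = clock();
    fn();
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    printf("parse_input  : %.3f s\n", seconds(parse_input));
    printf("render_output: %.3f s\n", seconds(render_output));
    return 0;
}
```

In a real program you would point seconds() at the routines you suspect, and only then decide which of them, if any, is worth hand-tuning.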
-
Tuesday 9th January 2024 15:11 GMT robinsonb5
Re: The real lesson...
> 3a. What sort of lousy assembler lets you accidentally begin assembling an instruction on an illegal address?
Actually neither the 68000 nor the 68020 can execute instructions from an odd address - so I suspect what actually happened in the story was that an instruction's word- or longword-sized *operand* was at an odd address - which would cause an address error on the 68000 but be OK on the 68020. (Probably using move.l to copy the credit string as quickly as possible!)
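Not the code from the story, obviously, but a rough C sketch of the trap being described: casting a byte pointer to a wider type and dereferencing it is exactly the kind of access a 68000 punishes with an address-error exception (and which the C standard calls undefined behaviour), while a 68020 merely eats the extra bus cycles. The names and the string below are made up for illustration; memcpy is the portable way to express the same copy.

```c
#include <stdint.h>
#include <string.h>

/* Copy a 4-byte chunk of a credit string starting at an arbitrary offset. */
void copy_chunk(char *dst, const char *credits, size_t offset)
{
    /* The tempting "fast" version: treat the bytes as one longword.
       If (credits + offset) is odd, this is an address-error exception on a
       68000 and undefined behaviour in C, even though a 68020 tolerates it:

       *(uint32_t *)dst = *(const uint32_t *)(credits + offset);
    */

    /* The safe version: memcpy carries no alignment requirement, and a
       decent compiler will still emit a single load/store when it can
       prove the pointers are suitably aligned. */
    memcpy(dst, credits + offset, sizeof(uint32_t));
}

int main(void)
{
    static const char credits[] = "Code by Jack, design by Irving";
    char out[4];

    copy_chunk(out, credits, 1);   /* offset 1: misaligned relative to the string */
    return 0;
}
```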
-
Monday 8th January 2024 14:16 GMT NXM
Re: The real lesson...
No no no and no again.
I write in assembler because C is too slow and hard to handle, no good at all for real-time stuff. I did an addressable LED driver on one of the cheapest processors available, which has about 4 instructions to do something, and C would take too long.
-
-
Monday 8th January 2024 18:25 GMT Rich 2
Re: The real lesson...
I once wrote an entire flight simulator control in assembler.
And by “flight simulator”, I mean a real one - you sit (sat) in it, it had hydraulics, and (literally) cost millions
It didn’t feel the least bit masochistic - one of the very best and most fun jobs I’ve ever done
Oh and it worked perfectly. Never failed
-
Monday 8th January 2024 19:34 GMT FIA
Re: The real lesson...
How easy was that codebase to maintain? Especially long term? (I'm working on a code base in a 'dead language' at the moment, the code is very well written, but that's neither here nor there as we can't easily hire developers to maintain it. This is a high level language too, not assembly).
How easy was it for someone else to update it after you've left?
What architecture? Does it run reliably on newer versions of the same? How easy is it to port it?
None of these may have been design considerations, but they may later turn out to be issues.
-
Tuesday 9th January 2024 11:45 GMT gnasher729
Re: The real lesson...
I remember about tripling the speed of some video encoder by replacing an assembler function that handled one pixel as fast as possible with C code that used compiler intrinsics for vector operations, and then unrolling a loop eight times. So if you want to optimise speed plus programming effort, it’s the high-level language.
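Not the encoder in question, of course, but the general shape of that kind of rewrite looks something like this - a sketch using x86 SSE2 intrinsics (the original may well have targeted a different ISA and intrinsic set), doing a saturating add on 8-bit pixels, manually unrolled to four vectors per loop iteration and assuming the buffer length is a multiple of 64:

```c
#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stddef.h>
#include <stdint.h>

/* One vector's worth of work: a saturating add of 16 eight-bit pixels. */
static void add16(uint8_t *dst, const uint8_t *a, const uint8_t *b)
{
    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    _mm_storeu_si128((__m128i *)dst, _mm_adds_epu8(va, vb));
}

/* Blend two 8-bit pixel buffers, manually unrolled to four vectors
   (64 pixels) per iteration. Assumes n is a multiple of 64; a real
   routine would also handle the tail. */
void add_pixels(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i += 64) {
        add16(dst + i,      a + i,      b + i);
        add16(dst + i + 16, a + i + 16, b + i + 16);
        add16(dst + i + 32, a + i + 32, b + i + 32);
        add16(dst + i + 48, a + i + 48, b + i + 48);
    }
}

int main(void)
{
    uint8_t a[64], b[64], out[64];
    for (int i = 0; i < 64; i++) { a[i] = (uint8_t)i; b[i] = 200; }
    add_pixels(out, a, b, sizeof out);
    return 0;
}
```

The point is that the intrinsics give you the vector instructions while the compiler still handles register allocation, scheduling and the loop plumbing.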
-
Friday 12th January 2024 05:25 GMT swm
Re: The real lesson...
Multics was written in PL/1 as the higher level language gave better control of the system design. It was thought that they would lose a factor of 2 in performance but make it up in cleaner code. What happened was that various constructs in PL/1 were slow so they changed the compiler to optimize the code generator for these cases. This benefited everyone as their code also ran faster.
There was a switch for the compiler to optimize the code, but soon it was discovered that the compiler ran faster with the optimize switch because less code was generated, so they took the switch out and always optimized.
Higher level languages have come a long way (but the original FORTRAN compiler for the IBM 704 generated code that was quite fast).
-
-
Monday 8th January 2024 10:25 GMT jmch
Re: The real lesson...
I remember a university assignment [redacted] years ago to implement the same sorting algorithm in C and in assembler, to compare the relative speeds. The C code ran in a few seconds. The assembler seemed to me to be already done as soon as I had pressed 'Enter' on the command line invoking it.
(of course it's not to be taken lightly, and there are indeed many things that can go awfully wrong coding in assembly.)
-
-
Monday 8th January 2024 14:52 GMT Gene Cash
Re: The real lesson...
> modern optimisation technique
> compilers have gotten so good
> with modern compilers
May I remind people this was 1985ish which was nearly FORTY years ago?
C compilers back then were mostly described by the technical terms "crap" and "shite"
GCC wasn't even released until 1987, and to quote Wikipedia: "by 1990 GCC supported thirteen computer architectures, was outperforming several vendor compilers, and was used commercially by several companies"
So back then, rewriting C in assembly was a valid path, and all the moaning about how stupid that was is way off target.
-
Monday 8th January 2024 18:15 GMT mattaw2001
Re: The real lesson...
Not only were the compilers crap, the CPU architectures were typically not well matched to C code, the OS debugging infrastructure was pitiful, the list goes on....
Although I would argue modern CPU architectures also don't run C well, as at the language level C does not understand multiple actors like DMA, multiple cores, etc. All hail our new Rust / Go / Java overlords!
-
-
Tuesday 9th January 2024 22:14 GMT Michael Wojcik
Re: The real lesson...
> Compilers back then did sfa* optimisation
The Amiga 2000 was released in 1987, so the events in this story happened no earlier than that. There were certainly optimizing compilers in 1987. Optimizing compilers go back at least as far as IBM FORTRAN II for the 1401. With C compilers, there were optimizing compilers for RISC platforms such as SPARC (Sun's compiler) and the IBM RT (e.g. Hi-C) in 1987, and even some optimizations in compilers for the PC such as Turbo C.
GCC was first released in 1987, and by the end of the year was up to version 1.16 (and supported C++, contra the claims in a widely-reproduced potted history you can find online). I don't have the source for GCC 1.x handy, but I'd be surprised if there wasn't at least some basic optimization — things like constant folding and strength reduction — in it.
That said, it's also known that GCC was optimized only for certain architectures, particularly 68000, prior to the early '90s when interest in using it outside the GNU project really grew (partly in response to Sun's decision to charge for their compiler).
Compilers varied widely in the late '80s and early '90s. Some did a lot of optimization; some did very little; some were good on some ISAs and bad on others (or only supported a single ISA).
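For readers who haven't met the jargon, the basic optimisations mentioned above look like this in practice - an illustrative scrap of C, not taken from any particular compiler's documentation:

```c
#include <stdio.h>

#define SECONDS_PER_DAY (60 * 60 * 24)

unsigned scale(unsigned x)
{
    /* Constant folding: 60 * 60 * 24 is evaluated at compile time, so the
       generated code just carries the literal 86400. */
    unsigned day = SECONDS_PER_DAY;

    /* Strength reduction: x * 8 can become a shift (x << 3), swapping a
       slow multiply for a cheap shift on CPUs like the 68000. */
    return day + x * 8;
}

int main(void)
{
    printf("%u\n", scale(3));   /* 86424 */
    return 0;
}
```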
-
Wednesday 10th January 2024 12:23 GMT Terje
Re: The real lesson...
What you fail to remember is that during that era, optimization by the compiler was better than nothing, and varied a lot, but was nowhere close to competing with assembler even when written by someone not very good at it. And it took a long time for the compilers to catch up to someone good at writing optimized assembler.
-
-
Tuesday 12th March 2024 10:58 GMT ricardian
Re: The real lesson...
Back at work in the 1980s we got one of the first IBM PCs and a C compiler. The C compiler was "Aztec" and it was selected because (allegedly) the CEGB were using it in power station design! Memories of writing batch files to compile, link and create .exe files. And TSR (Terminate & Stay Resident) programs were not unknown.
-
-
Monday 8th January 2024 09:56 GMT Anonymous Custard
Out in the fields
Not only the lowest common denominator of hardware, but also in the range of common usage scenarios.
We had something recently here which when rolled out royally screwed up everyone working in the field.
Everything had been "tested" of course before roll-out, but always from the comfort and safety of the company LAN when sat in a cosy office.
But when run on-site with the customer breathing down your neck and access only via VPN into the mothership network, the less than optimal speed and other under-the-hood differences between the connection methods were enough to completely screw up the remote users.
And given all this was actually aimed primarily at field usage, there were some interesting questions asked at the post mortem as to why it hadn't actually been field-tried before release...
-
Monday 8th January 2024 16:49 GMT Yet Another Anonymous coward
Re: Out in the fields
We had the opposite, worked in the field but not in the office.
A Bluetooth connection between an oil field data logger and a PDA, with a pairing that was "simplified".
Great in the middle of nowhere with not a cell phone to the horizon. Took it to a trade show with 10,000 visitors, all with multiple Bluetoothy gadgets screaming for attention and our device went to have a little cry in the corner
-
Monday 8th January 2024 21:42 GMT David 132
Re: Out in the fields
Ah, yes. Back in my day working at $LargeTechCorporation, we were frequently called on to do demos of our latest & greatest at CES, Microsoft TechEd and similar all-comers trade shows.
The marketing people could never understand why I always put my foot down with a firm hand :) and insisted on cabled Ethernet for demos wherever possible.
"But the cables look so ugly! With WiFi $latestversion we can get just as much performance and it'll support our work-anywhere, truly-mobile marketing message!"
Yes, Chuckles. And Wifi worked perfectly in the lab.
But in the Moscone Center, or Olympia/ExCel, with hundreds of other companies' demo networks, and (tens of) thousands of attendees' devices... it would always crash and burn. How's your 4K video streaming demo NOW, Mr Marketing Know-it-All?
-
Monday 8th January 2024 22:36 GMT John Brown (no body)
Re: Out in the fields
Oh, yeah, seen that on a smaller scale too. A shared building, and reception used desktops with WiFi, despite there being Ethernet wall points. Installers set it all up on a Sunday and left. By Monday I was called in to find out why only one of the 3 reception computers was connecting. Turns out it's not two out of three specific devices failing, it's actually whichever one is switched on first that works. A quick WiFi channel scan found about 35[*] wireless access points across at least 15[*] distinct networks, all fighting over the channel space. Putting reception on to yet more WiFi was the final straw. The fix was to patch in the Ethernet ports to the correct switch :-)
Sometimes, WiFi is in use "because cool", not because it's actually required.
* Numbers made up, it was a few years ago now, but whatever the numbers were, they were too big for the available airspace.
-
Monday 8th January 2024 10:54 GMT Caver_Dave
Strange story about field testing
I used to work with a multinational team that tested telephone networks in the time of GSM rollout.
They had to drive around the country in a van with multiple handsets mounted on a rack and monitoring the signal reception parameters.
They had to test in Greece, but the Greek speaker was off on some kind of illness leave.
They sent the Chinese speaker as he could at least order what food he liked from the Chinese take-aways for the 2 weeks of his stay.
-
Monday 8th January 2024 11:42 GMT Howard Sway
his name should go first in the About box
This reminds me of the time I worked at a terribly corporate firm with in-house development teams who all did completely their own thing, some of which were led by really bad programmers. On a project to create a suite of windows applications to replace the old terminal based stuff, one guy decided it would be neat to have an autoscrolling "rolling credits" style About box which listed himself as "Lead Programmer and Designer", under our manager's name which came first as "Software Development Manager". Said manager was very pleased when he saw this during development. Unfortunately, the application was an atrocity of highly original and unusable interface design, which crashed many times a day, resulting in massive complaints from the poor users, who had been supplied with a handy list of names for who was responsible and let everybody else in the company know it.
-
Monday 8th January 2024 12:30 GMT imanidiot
Fair enough
Usually with these tales of Who, Me? I consider the mistakes and screwups actual mistakes and screwups that anybody could have made and not deserving of termination. This tale however? Jack deserved his firing. The "whose name first" issue was petty in the first place, and implementing program-breaking (even if accidental) code with a "time trap" for swapping the names was petty in the extreme.
-
-
Monday 8th January 2024 13:19 GMT l8gravely
Re: Amiga pedantry. Sorry.
I can't be arsed to remember, but I think the A2000 could use a 68010 by default? I remember swapping one into my A1000 at the time, before I then upgraded to an A2500 with a 68020... but the memories are really dimm and I might be just confused. And yup, I'm confused. I must have just had an A2000 which I then upgraded with a 68010 myself. I don't think I ever had an onboard accelerator card with 68020 or 68030 processors, 68881 math coprocessor and MMU chip. But I do remember dropping $800 for an 80gb Quantum 3.5" SCSI drive since I was done with swapping floppies all the time. This was during the era of the "stiction" problem with Seagate ST-mumble-mumble drives, which people would have in their Amigas and PCs, and which would not spin up if you powered them down too long.
Memories!
-
Monday 8th January 2024 14:09 GMT WolfFan
Re: Amiga pedantry. Sorry.
I think that you mean 80 MB, not 80 GB. Gigabyte drives didn't exist yet. Not for desktop machines, anyway. Your friendly neighborhood Big Iron might have a gig or two or three worth of storage, but not a desktop, and most definitely not 80 GB. And it would cost several orders of magnitude more than $800.
Around that time my Mac Plus at home had a 60 MB external SCSI drive. It cost $600. I thought that I would never run out of storage. Earlier this month I got an email attachment bigger than that. Ah, the Daze of Youth!
-
Monday 8th January 2024 16:54 GMT Yet Another Anonymous coward
Re: Amiga pedantry. Sorry.
>a 60 MB external SCSI drive. It cost $600. I thought that I would never run out of storage
When external full-height 1GB SCSI drives dropped to 1000 quid we bought one for every workstation. Unlimited storage, never have to copy data on and off tape again!
I just bought a 256GB SD card for the dashcam, so small I will lose it, for the price of the SCSI terminator.
-
Monday 8th January 2024 21:48 GMT David 132
Re: Amiga pedantry. Sorry.
> Gigabyte drives didn't exist yet.
Indeed. I remember when in the 2nd year of University, a friend got a 1GB 3.5" IDE drive for his PC - this would have been around '94.
We were all incredibly impressed at how humungous it was and, of course as is the Sacred Tradition, doubtful about how he'd ever fill it. To quote Blackadder, "More capacious than an elephant's scrotum".
I had a 60MB-ish drive in my Amiga A1200 at the time and thought THAT was pretty huge.
-
-
Monday 8th January 2024 18:56 GMT TFL
Re: Amiga pedantry. Sorry.
The 68010 was a simple drop-in replacement for the original 68000 CPU, though it didn't buy you much. As you note, no MMU, no math co-processor. I'd done the same with my A500.
One company made some neat add-ons, such as an IDE drive controller that would fit in the A500! Little daughter board under the CPU, plugged into the CPU socket, with the IDE ribbon connector beside.
-
Monday 8th January 2024 14:14 GMT ColinPa
Test on the slowest box
My boss told me about when he was a new grad, when new terminals were being designed with colour and the ability to draw graphics. All the developers and proper testers got the latest kit for testing the software. He got a machine, as he said, powered by a rubber band. This machine had a problem: it would display, then a second later redisplay the same stuff. Development didn't believe him (he was only a new grad), until one of the developers, sitting at the terminal for a different problem, saw it for himself.
The root cause was that the display software was displaying everything twice - but because the developers' and testers' terminals were so fast, they didn't see it happen.
-
-
Monday 8th January 2024 16:11 GMT Jou (Mxyzptlk)
Re: Test on the slowest box
Citrix should do that. Maybe they would finally go ahead and fix the causality errors on loaded network connections.
Minor variant: You click, move mouse, doubleclick, move mouse, click. Executed on the remote end: Move mouse, click, move mouse, doubleclick, click.
Major variant: You type, and some keys get lost and others get switched in pairs.
-
Monday 8th January 2024 22:44 GMT Jou (Mxyzptlk)
Re: Test on the slowest box
Aw, I'm from across the Channel, and a bit east... Can someone tell me the joke behind the reference? Something from Eric Morecambe, or something referring to the lively and always happy Morecambe beach? I mean, I am proud to get most Monty Python references, but I don't know any Morecambe reference...
-
Monday 8th January 2024 23:15 GMT vogon00
Re: Test on the slowest box
As I've said before, I used to be a professional tester, which was enormous fun :-)
Along comes carrier-grade VOIP, and I managed to add an item of test equipment - the Shunra Storm - to the list of project test equipment. At the time, it was the newest and smartest bit of WAN simulation kit available, and allowed all-too-real simulation of WAN link speed, insertion of jitter, packet duplication and packet loss (in one or both directions) and other packet-related skullduggery. Sounds hard, but it was comparatively easy to describe the topology to emulate and the packet effect simulations to apply thanks to the then *very* innovative use of Visio as a front end to describe the topology and desired packet effects.
I cannot over-estimate the importance and impact that device had....and all in a 2U little box. Beers & props to the developers.
Myself and my colleague were either loved or feared by the devs - loved for finding all sorts of 'retry' bugs with the packet loss tricks, and feared due to the latency tests which exposed one or two gaping holes! After a while, the call agent stopped falling over and started to become more resilient, handling some godawful network conditions without falling down in an irrecoverable heap!
If you worked on the XCD5000 and/or the NN 1460 SBC, then we probably know each other:-) Good times.
-
Tuesday 9th January 2024 02:35 GMT watersb
Re: Test on the slowest box
> allowed all-too-real simulation of WAN Link speed
That sounds lovely. I could really have used one of those at $redacted_company_name to demonstrate the 1990s home computing experience that we were building on our Silicon Valley network, one hop away from Mae West in San Jose.
-
Tuesday 9th January 2024 09:16 GMT jake
Re: Test on the slowest box
If you had put out the word, I'd have loaned you a BERT or two of one description or another ... we were (mostly) done with them by 1988.
Had three of them in MaeWest, another four at the Bryant Street CO in Palo Alto, and a couple more at Stanford.
Not my money, I hasten to add ... they were originally BARRNet issue (I think).
-
Monday 8th January 2024 16:33 GMT Stuart Castle
I've talked about it before, but where I used to work, we used a custom written Equipment management system to track who had what equipment. This system tracked ad-hoc loans, as well as allowing equipment to be booked. Some of the equipment required a risk assessment to be filled out and approved before booking, and because students on different courses required different equipment, the system only offered equipment available on the courses the student was doing.
I wrote the booking system, one colleague wrote the backend API it relied on, and the ad-hoc loaning application. A 2nd colleague designed a nice user interface. Web design is not a strong point of mine, and the booking system was to be web-based, so my colleague designed a non-functional site, and I provided functionality.
Then we got reports that the equipment list page was taking several minutes to load for some students. I knew the reason for this. When the site displayed the page listing the equipment, obviously it needed the list, which was one API call, but it also needed a couple of other bits of info, which the web service providing the data required separate API calls for. As such, each item of equipment needed three extra API calls. Given that some courses required access to over 100 items of equipment, some students' bookings could generate >300 API calls. Most of the students were booking equipment on a gigabit network, but given that a lot of students leave their coursework to the last minute, come coursework hand-in time, the server could easily be dealing with a couple of hundred students booking the equipment at the same time. So, it could easily be dealing with tens of thousands of API calls, which really slowed it down.
I pointed out the problem, and suggested a solution (add a new API that returned an array consisting of all the equipment registered to the course containing all the information needed), but my colleague swore blind it must be my "iffy" code. I pointed out that the code on that page did nothing but call the APIs, then output the results, but he still didn't believe me. Then I added code to the page on the test server that logged what was being done, when, how long it was taking and the results. It also logged the start and end of each page download. I showed him the log for one of our courses with the most equipment. It was >10 sides of A4 and was mainly the results of calling the same 3 APIs over 100 times on different bits of equipment. He said "Fucking hell.. Let me look into this" and went away with the logs. Within 2 days, I had the API I'd requested, and the load times for the page for the biggest courses went from >10 minutes to < 1.
-
Monday 8th January 2024 22:08 GMT J.G.Harston
That reminds me of the f***wits who don't understand "display sized to X" is not the same as "resize to X".
Last week I got an email with about three lines saying (paraphrased) "Thanks for the update. Bob." I wondered why it took ten seconds to open. I glanced at the inbox listing and wondered why it said (252M) at the end of the line. How TF is a three-line email 252M? Then I thought....uh oh.... There's a little image under the signature line. Yep. Menu -> Display image. It was three times the width of my monitor, set to display in the email as 160 pixels wide.
-
Tuesday 9th January 2024 00:09 GMT Trixr
What a royal pr*ck of a colleague, though. No-one you work with should make you jump through those hoops, unless you're known to be completely incompetent. Especially something they could have replicated themselves with a test account in conjunction with your initial observations of the problem. Or maybe he was just lazy.
Thankfully I've only encountered one or two colleagues like that in my career. One demanded full logging of a particular problem to "prove" it was not our config (a simple SMTP delivery issue where the destination was rejecting our messages due to policy) - never heard anything else after I provided all 10GB of text message log files for that day.
-
Tuesday 9th January 2024 07:51 GMT watersb
> "No-one you work with should make you jump through those hoops, unless you're known to be completely incompetent. Especially something they could have replicated themselves."
This really hit home for me, I have been thinking about this behavior, and I think it's quite a bit more common than we might believe -- especially among the tech savvy.
It comes down to the essential experience of computer vs human: it's quite literally unbelievable how quickly a modest, modern computer can perform the wrong thing, or perhaps the only-incidentally-correct thing. The senior programmer with decades of experience with these beasts is particularly susceptible to this form of blindness.
We should teach the kids at university how to test and measure the execution of existing systems. Yet we still spend all of the time teaching them to create new problems.
There's a proverb in here somewhere.
-
Tuesday 9th January 2024 12:19 GMT Doctor Syntax
Re: About box Easter Eggs
I was, I think, one of the guinea pigs to get a system reviewed for security under the BT project (the name escapes me) following on from Prince Philip's Prestel mailbox getting hacked. I was asked to confirm our system had no undocumented features. I suggested he go to BT procurement and get them to get Microsoft to sign such a declaration. I think he did - the question was dropped from the checklist. I later found out his background wasn't IT at all, it was perimeter security.
-
Monday 8th January 2024 22:01 GMT David 132
Re: About box Easter Eggs
ISTR that in Windows 3.x, if you clicked repeatedly on the Windows Flag logo in the About box, it would eventually switch to a bitmap picture of Bill Gates instead and a scrolling list of the developers.
And let's not forget the Wolfenstein-alike hidden in Excel 95, or the entire flight-sim that came with Excel '97. That latter was pretty much the straw that broke the camel's back, and caused - given that the business world at the time was waking up to code security issues in general - the total clampdown on Easter Eggs at Microsoft and beyond.
-
-
Monday 8th January 2024 18:19 GMT martinusher
Rewriting in assembler 'to go faster'?
That's what doomed the project/product. Interpersonal rivalry was just the garnish.
I could understand taking an original application and rewriting it in C -- parts of it, anyway -- because it would make maintenance and extension easier. Never the other way around.
-
-
Tuesday 9th January 2024 03:45 GMT nintendoeats
Re: I've always thought
Hmmm, except that software often goes through periods where performance is not acceptable during development. If you give the devs minimum spec machines, then you are needlessly hampering them during those times. Also, sometimes there are features which simply require more grunt than the minimum specs.
Example, I was working on a 3D Display system. Most users were on integrated graphics, but I had an Nvidia card. When working with high detail objects (especially those with transparency), the Intel GPU was nowhere. We developed a system to degrade those objects automatically, but developing that system obviously required lots of testing and fine-tuning. If I had been stuck with the Intel the whole time, that process would have been painful on the deepest levels.
-
Monday 8th January 2024 20:09 GMT Boris the Cockroach
Praise be for logs
happened to me.
Night shift had a big crunch, destroyed the production cell.... had to rebuild everything the next day... of course the boss is doing boss things like blaming me... got the logs out of the machine showing the edit on the program was at 8pm.... 3 hrs AFTER I went home.........
The night shift guy was duly spitted and roasted when he turned up....
-
Tuesday 9th January 2024 00:23 GMT M.V. Lipvig
Had similar
I was the platform manager for a now-bankrupt Canadian telecom system. I'd used the system as a tech so knew what the users needed to use it effectively. Worked with the vendor, had backups going flawlessly, and all equipment could be accessed perfectly in a linear approximation of the circuit layouts the techs used.
In comes a new manager, and he has a brother-in-law to hire. Unfortunately (or fortunately when looking at the long view) the manager decided I was to be the sacrificial goat. He hired the BIL, assigned him as my second for training, and once he thought he had a good handle on it, I was moved out. I was able to jump groups, which the boss gladly approved.
Anyway, here's the pettiness. The BIL decided that if he was going to make his mark he was going to have to change what I did. He first got out his Linux for Dummies book and "cleaned up" the vendor-developed backup scheme, then he proceeded to redesign the spans to be different from how I had them. About a year later, the platform was unusable, the last usable version that I did was long gone, and the techs went back to just logging directly into the equipment to test.
Later, when senior management came to me to find out why I screwed it up (they tried throwing me under the bus) I pointed out that I had been replaced as platform manager on this date, and here's the email showing that my admin privileges had been revoked. Without admin privileges, I had no access to make any changes after X date, and it was working properly up until then. I never heard anything about it again. Curiously, I never saw that manager or his BIL again either, although I worked there another 10 years.
-
Tuesday 9th January 2024 06:04 GMT ldo
To Require Alignment Or Not To Require Alignment?
I thought for a very long time that CPUs should permit unaligned accesses to multibyte objects without crashing. Yes, it hurts performance. But on the other hand, it simplifies programming somewhat by removing a source of crashes, and I definitely feel that correctness should come before efficiency. Let the programmer decide whether they want to pack things or not!
-
Tuesday 9th January 2024 12:49 GMT Doctor Syntax
Re: To Require Alignment Or Not To Require Alignment?
You reminded me, for the second time recently in these pages, of a FOSS project that wasn't an emulator. Worked fine until it didn't and it didn't because it crashed on the splash screen of a product I ran under it. A lot of other users reported issues with other products.
A bit of git-slicing took me to the responsible commit. It added a case option that said if the driver reports it needs 24 bits per pixel, use 32 instead. The devs' argument seems to be that using 24 is slower for gaming - hint, guys, if it crashes you'll never know whether it's slower or not.
Obviously, knowing where to look I just edited the function on every release, rebuilt and continued using it. I used to run their test suite on every release and it failed multiple tests on the unedited version and ran more or less perfectly with that one edit but eventually they started refusing to accept results from the fixed version. I suppose that was easier than having the results of their obduracy staring them in the face every month or so. Eventually there was a big rewrite and I couldn't be arsed to go looking for where they'd moved that erroneous assumption so I started using a VM instead.
Eventually I moved to different H/W and the problem went away but it's one project I've never really been able to take seriously ever since.
-
-
Tuesday 9th January 2024 07:51 GMT hoofie2002
Happy Birthday
Many Years ago I was working on some C code in Oracle on Unix that did a lot of data analysis and reporting on pharmacy dispensing scripts which was then sent back to the drug companies.
The previous developer, who was f&&&ing useless but his mother was a Director...., had put in code to print "Happy Birthday" everywhere when it was his Birthday.
In a Production System.
Across all the printed reports.
Which were sent out to Customers.
Twat.
-
This post has been deleted by its author
-