
737 MAX
Do I need to say more .....
We're back again with Who, Me?, The Register's Monday morning crowdsourced tale of reader misdeeds and close calls. Today's confession from "Pete" will tighten the sphincters of those who flew on Boeing's finest back when 1990 rolled around. Pete was something of a multimedia whizz at a time when Windows 3.0 was a thing, …
So many little sentences in that description that really speak of a bad day in the office:
"[...]the cockpit warning system sounded again with the "all engines out" sound, a long "bong" that no one in the cockpit could recall having heard before."
"They immediately searched their emergency checklist for the section on flying the aircraft with both engines out, only to find that no such section existed"
And most strikingly, the entire thing only took 17 minutes.
"The aircraft's fuel gauges were inoperative because of an electronic fault indicated on the instrument panel and airplane logs; this fault was known before takeoff to the pilots, who took steps to work around it"
Another highlight for me... I get slightly stressed when driving a car with a defective fuel gauge, and the worst case doesn't involve the car falling out of the sky!
"
Another highlight for me... I get slightly stressed when driving a car with a defective fuel guage, and the worst case doesn't involve the car falling out of the sky!
"
Most light aircraft have extremely inaccurate fuel gauges - mainly due to the difficulty of having accurate sensors in long, thin wing tanks. Not even dipping the tanks is necessarily accurate, because the aircraft might not be completely horizontal, and just a few degrees off-level will cause the amount of fuel under the filler cap to differ a lot. Before take-off, the captain should cross-check that there is sufficient fuel (plus safety margin) by checking the aircraft log to see the number of hours flown since it was last refuelled to full tanks.
My family owned a second-hand Jeep Cherokee in coastal New England. There was an obscene number of miles on the clock when we bought it and we'd added a few too. It was just used during the summer months when somebody was there. The petrol tank had rusted at the top during the harsh winters and warm humid summers: a small hole had developed at the top of the tank and went all the way through the metal. It didn't leak much unless you were going uphill and the tank was more than 3/4 full. The gauge or the sounder was faulty anyway, so judging how much fuel to put in was difficult. Gaffer tape helped a bit, but it's hardly an approved repair method. Eventually we decided to scrap it, as it had numerous other faults and wasn't going to pass the US MOT equivalent without spending some serious $. A friend asked if he could sell it instead of scrapping it, to be more environmentally friendly. After we'd stopped laughing, we said yes.
They got a staggering $2500 for it despite listing all the faults accurately and we gave them half. Yet to experience a Lancia.
You often do not fuel to full because it can make the plane too heavy with the desired load.
That is why you return a hired plane empty, and let the next pilot fill as much as they want.
So yes, pilots do depend quite a bit on dipping the tanks.
Personally I just always filled up and ignored the fact that it is a bit heavy, with the extra weight in the wings.
I get slightly stressed when driving a car with a defective fuel gauge
Not a problem with an early Mini Moke - undo the fuel cap and take a look. The aperture is directly on the tank.
https://www.telegraph.co.uk/cars/classic/mini-moke-prisoner-restored-50-years-starring-tv-series/
Another highlight for me... I get slightly stressed when driving a car with a defective fuel gauge, and the worst case doesn't involve the car falling out of the sky!
Most of my driving is vehicles without a gauge. I don't recall any of the tractors I operated having a functional gauge, most cars it's only an indicator, and the only bike I've ever had with one - by the time I got the bike it only measured the top 1/3 of the tank (so E was anywhere between 12L and 0mL).
It was a matter of checking before setting out. If you need to have 5L in the tank then put 5L in. As Cynic_999 mentioned, the shape of the wing tanks can make gauges a lot less effective, and loading for your needs plus safety margin should be the norm.
In this case, the crew expected a certain amount of fuel but someone cocked up and gave them much much less. The right number, but the wrong measure. Hopefully since then much effort has gone into making sure such mistakes could never happen again (going all metric or all Brit-Imp would be one simple way)
You're both right of course, I can almost hear it.
'...it was at this moment that a very loud, and very deep, bong resounded through the flight deck.
"What was that?" cried Arthur.
"I expect it's the gong for lunch." replied Ford.
"How can you think of lunch at a time like this?"
Zaphod cut across him, "Pipe down monkey man, I've got this all under control."
"I'm not sure you do," said Trillian, "look, the engines are stopping."
The left engine had indeed stopped, and the right one was spluttering like a man brushing his teeth who has just spotted the tube of haemorrhoid cream on the sink.
Ford turned to Marvin. "Do you know what that was?"
Marvin's lights blinked sullenly. "Of course I do."
"Well what was it?"
"It was the out of fuel warning."'
'The Hitchhiker's Guide has this to say about flying a plane with no fuel: "Don't".
If you can't avoid it then the Guide recommends that you...'
> Although not competent enough to check the refueling manifest.
It is not clear he could have detected the error based on what information was available to him. A pilot has to rely on the ground crew; he cannot personally inspect every detail. The Wikipedia entry says he also had the fuel level checked with a "floatstick", but still got the wrong answer. Maybe there was also confusion over what units that device used?
He was given the accurate amount of fuel by volume, in gallons.
They did the conversion from gallons to imperial pounds and punched that number into the flight computer which, like potato chip bags, goes by weight, not volume.
The computer was metric.
HOWEVER...the real screw up is bigger than the flight crew -- they took off knowing they had a defective fuel gauge and would rely on the computer to calculate their fuel remaining.
It's one thing if you're in the air and lose a sensor to go, "Well shoot, that's OK we have a backup...says we have enough fuel, no need to declare an emergency, let's just complete the flight." It's another to take off knowing you're down to the backup system.
They don't get to determine whether the gauge is an issue on their own -- except, perhaps, in a career-limiting, "sorry, I won't fly this thing."
Instead, it's determined in advance by the manufacturer's MMEL and the operator and local agency who figures out the MEL. If something's on that list, then you're good to go as long as you follow whatever rules the list specifies.
Just about all of these incidents and accidents reveal interesting things about the inter-related parts of the system, rather than just a simple "this guy blew it." It's often described in the US as getting all the holes in the Swiss cheese to line up just right so that an accident can occur. (I know that folks elsewhere have different cheese names. :->)
The number of things that had to go wrong, in sequence, for that incident to happen is enlightening. It's almost a reverse jackpot -- just as many things have to happen exactly "right", but the outcome is negative instead of positive. Also interesting is how many things went exactly right after the fuel exhaustion to allow a safe landing.
@whitepines. In aviation safety it is sometimes referred to as the Swiss Cheese effect. So many holes have to line up for the event to happen; any breaking of the lining-up stops the failure. Multiple redundancy can be expensive or heavy, but it is safe. It's also the reason aviation has so many cryptic checklists and acronyms. However, I feel more relaxed flying having done them, as it means I am as safe as possible, and I always feel edgy if I realise a check item was missed. And yes, there's the paperwork and inquisition if there is an event. BB is appropriate, as there are now types who actively look for others' faults to dob in to aviation management for any trivial event.
Thank you. Have a beer on me ---> (invisible beer)
Maybe I'm a geek or something but the main reason I clicked on the link was to check it was what I thought it would be. And it was.
On the other hand, there are plenty readers who won't be quite so familiar with the circumstances and the story. Share and enjoy.
I lived nearby-ish, in Winnipeg at the time. The news was certainly memorable. A couple things really stood out then, and are mentioned in the article.
- The captain was also an experienced glider pilot, using glider techniques to pull it off.
- Subsequent attempts by others to carry this off in a simulator all failed. Maybe some since, who knows?
I recall seeing a documentary on this and another case - one where a hydraulic leak meant they were out of hydraulic fluid before they had a chance to land. In that one they realised they could use bottled water to keep the hydraulics functioning long enough to land (choice of either land with needing a major repair, or complete loss of life + plane).
I thought it was the latter for which it was said this was added to simulators and no crew could come up with a solution, but perhaps it's both, or the former? In the one I recall seeing, they landed on a disused runway that was being used for a drag meet or some other car event.
He was fond of telling a story of keeping the hydraulics topped up not with bottled water, but with what they had bottled up in their bodies.
I have no doubt it was more common than most of us want to know. There must be a reason why so many models of aircraft have an emergency filler hole in such a convenient place!
Also of interest
Air Transat Flight 236 was an Airbus A330 on a transatlantic flight that ran out of fuel due to a fuel leak caused by improper maintenance. Captain Robert Piché, an experienced glider pilot, glided the remaining 65 nautical miles to a safe landing.
And we all know that Captain Sullenberger was also an experienced glider pilot.
Perhaps there's something in this gliding thing.
I am no pilot, but I think there is something about the people who know how to fly a plane, as opposed to the ones who know how to talk to the computer that flies the plane: it all boils down to knowing and feeling how the machine reacts, and taking the right action because you have done it previously in other circumstances.
@Oliver. You will be pleased to know that many pro pilots fly light aircraft and gliders so they maintain "real" aircraft piloting skills. Some airlines encourage this after the reviews done following the Air France dump into the Atlantic. Some of the world's best acro pilots fly heavies for a living, so you have serious skills and training up front.
Real aircraft manuals for light aircraft often use a nasty mixture of units depending upon who wrote which page. A real mess.
My favorite is "gallons", which usually means US gallons - unless it is an old English plane, in which case it means UK gallons - with nothing to indicate which.
And they tell you to take the manual very seriously!
Despite all its flaws, and Adobe's flaws, PDF is the most reliable digital document format I know to date.
I'm not saying it is flawless, but it is the next best thing compared to a lump of paper.
Pedantic grammar icon because PDF preserves every paragraph and spacing from the printer-sent version, instead of just deciding to "ignore" it like some other text editors...
I wouldn't call PDF a "text editor" format. And I'd hate to rely on it in a time-critical scenario, given how often large documents will freeze, crash, or just go completely blank if kept open for too long.
PDF is a very useful format, but if a flight crew is relying on it to keep the plane in the air, I'd worry about that.
There is, or was, a B52 based at Barksdale Louisiana which is/was being flown by the third generation of pilots. When it was new, one gentleman was assigned; some years later, his son got the same aircraft; some years after that, his grandson flew it. There were jokes about inheriting the family bomber. Allegedly great-grandson is now in the Air Force Academy and may yet have his turn on that aircraft. One hopes that the USAF is better at maintaining things than civilian airlines.
Not only does the MAX have a problem with the single sensor; its main problem, according to pilot friends, is that they fitted the extra-powerful CFM LEAP engines (the same as actually work on the newer Airbus A320), but the 737 MAX geometry - the location of the engines - is compromised by its lower loading height (almost no lifter required to delicately toss in the hold-bags).
Throw in the dubious, poorly tested corrective anti-stall fly-by-wire software, and the fact that the actual BEST position for the CFM International LEAP-1B engines in the Boeing 737 MAX airframe would be about a foot below the runway, and let's hope that Boeing can successfully rewrite impossible moment-of-inertia physics problems by code tinkering. All allegedly. As who really knows?
"Not only has the MAX a problem with the single sensor, its main problem , according to pilot friends, is because they fitted the extra powerful CFM LEAP engines (same as actually work on newer Airbus320) but the 737 MAX geometry, location of engines , is compromised by its lower loading height (almost no lifter required to delicately toss in the hold-bags)"
You mean the entire, well-documented reason that MCAS was installed/required in the first place.
Software can change MCAS's ability to repeatedly fight the pilot's override; MCAS was relentless and definitely required some re-training, which Boeing had previously denied. Also, there was a vital gauge that Boeing decided to sell as a $10K optional extra.
It hadn't occurred to me before, but if say the Hudson incident had happened to a 737-Max, my guess is the outcome would've been considerably less "miraculous"?
(I'm assuming here that the sensors are correctly functioning in this case, but the same problem of a double-engine bird strike has happened).
That would depend on the captain and the co-pilot. The miracle on the Hudson was only possible because the captain (Chesley Burnett "Sully" Sullenberger III) was a very experienced glider pilot who also had the guts to rewrite the book: starting the auxiliary power unit (APU) in the tail was the last item on the checklist, but it was the first thing he did, and Airbus revised that checklist based upon this incident (moving the APU start to first position in case of a double engine failure). With a similarly experienced glider pilot captaining a B737 MAX in a similar situation, the outcome would probably be similar, especially since there is now an example to follow.
No. They outsourced the software to HCL; my corp outsources software to HCL, and having seen how they work I won't be setting foot on a 737 MAX, a 737-8200, or whatever else they want to call it.
I recently flew with Norwegian. I always look at the safety instruction card on flights to verify what kind of plane I'm flying. Unfortunately we had already left the gate when I looked. The card said B737-800/B737-8MAX. When I asked the flight attendant about the MAX part he said "it's not that one!" Apparently they were too cheap to even print separate cards for the separate planes. Presumably the emergency information was the same. But it freaked me out for a minute.
The whole reason for the problem was that Boeing wanted everything about the MAX to be the same as for older 737 models. So I'm sure the passenger-level safety information was the same. Thus saving airlines the expense of maintaining an extra set of - well, everything really, but certainly including safety instruction cards.
One of the problems with certification is that you can only be certified for one type at a given time. So different certification would mean that a 737NG pilot could not fly a 737 MAX and vice versa. Instead of having one big pool of pilots that can fly any plane in your fleet, you end up with two pools, and managing the rotation of pilots becomes less simple.
"They they outsourced software to HCL, my corp outsources software to HCL, and having seen how they work I won't be stepping foot on a 737 Max, a 737-8200, or whatever else they want to call it."
+1. The issue with many Indian companies is, and I'm quoting one of my really good Indian colleagues, "false diplomas and false certifications".
It is really disturbing to see a dude with a dozen Cisco certifications unable to bring up an IPv4 interface on a router...
+1. The issue with many Indian companies is, and I'm quoting one of my really good Indian colleagues, "false diplomas and false certifications". It is really disturbing to see a dude with a dozen Cisco certifications unable to bring up an IPv4 interface on a router...
You must know an old friend of mine. He was hired on his Cisco certifications. Then on a job I asked him to make up a patch cable. Turns out he didn't even know what an RJ45 plug was for, had never touched network hardware.
In NZ a few years back turns out a certain training establishment didn't do "open book exams", they did the exams for the students. They were also paid a considerable amount of $$,$$$ to bring the "students" into the country, get them qualifications, get them jobs, and get them through the immigration process.
I'm pretty certain NZ and India aren't alone in these sorts of things.
This is well worth a full-blown article with some solid journalism.
I have noticed some quirks that had me questioning how the individuals involved could have obtained the qualifications they held. The sort of approach that any of the interns I've hired would have known better than to take.
I hadn't heard before today that fake qualifications were common.
I have noticed some quirks that had me questioning how the individuals involved could have obtained the qualifications they held. The sort of approach that any of the interns I've hired would have known better than to take.
A rapidly changing industry, ageing brain and various other things can cause that. Not that I'd know from experience or anything [wanders off trying to look innocent but failing miserably]
I hadn't heard before today that fake qualifications were common.
I've been trying to find the original articles but not much luck, I'm afraid. Here's one from about the time (stuff.co.nz) but not quite the same thing. It is midnight and it's been a long prick of a day, so my skills aren't up to it tonight I'm afraid. There's more out there from more recent events as well - at least 4 institutions closed or losing accreditation just this year (and maybe even more recently - I only skimmed those items as I was looking for a specific older story).
This is well worth a full-blown article with some solid journalism.
How about it Team Reg? Surely if anyone can do a great job in this, you guys should be able to!
My Medical friend tells me that any medical selection panel hiring from India should include at least one person trained in India. To identify the applicants with shoddy qualifications or qualifications from shoddy institutions. India is a huge country, and there are some top-notch medical, engineering and CS graduates from top-notch schools. Also, the opposite.
The Lion B38M crash report on the Aviation Herald, in case it's of interest:
I was doing the same, fortunately xBase did actually store the century.
Was also playing a lot with SET EPOCH, as we were dealing with dates both in the past (DoB) and in the future (appointment date), so we wanted to help users interpret 2-digit years correctly.
Took the opportunity to move from Clipper 87 to Clipper 5 and add Extendbase and other goodies. All completed by 1996 when I moved to do it all again for a new employer.
From 1988 until 1995 I was in charge of developing an image processing system for a microscopy system, in a mixture of MS-Pascal (the previous programmer shunned C), and MS-C. The choice of MS compilers was foisted upon us by the fact that the Matrox MVP-AT/NP and PIP-1024 boards we were using only came with an MS-C compatible library. I recall a situation where I was baffled by crashes caused by a piece of code I had just written, and I couldn't for the life of me see any fault in the (very simple, just 20 LOC) procedure at the heart of the problem. As input it took a linked list which had an even number of nodes, so if you started with the current pointer pointing at the head of the list, you should be able to write
IF current^.next <> NIL THEN current:= current^.next^.next;
to move down the list in strides of two. Result: CRASH. I checked whether the input was wrong, by counting the number of nodes, but nothing was wrong. I then changed the code to:
IF current^.next <> NIL THEN
BEGIN
  current := current^.next;
  current := current^.next;
END;
and that worked flawlessly. I am not sure if this was an optimizer bug, or a bug in the parser, but I did feel an urge to jettison the compiler manual out of the second-floor window.
As well as a couple of bugs with templates to do with const parameters and failing to instantiate classes with template inheritance, I found a long-standing off-by-one bug in the library's istream width() modifier, where it would give you 1 character less than the specified width. The error lasted at least from 4.2 and wasn't fixed until version 8.
A fun bug in an old Solaris compiler I discovered accidentally after a typo was setting the = 0 hack (sorry, but it is) in declaring C++ pure virtuals to non-zero. eg:
virtual void myfunc() = 1;
The compiler would reach that line, apparently contemplate the meaning of its existence for a few seconds, decide it was meaningless then vomit up a page of stack trace and die with a core dump.
I was doing some bare metal embedded work a few years ago and had to convert strings (sent over a serial port) to values to set some threshold voltages and gain in sensors using digital potentiometers; the function I used was strtoul().
There was an off-by-one error and it would always point at the str[1] element of the string instead of str[0], even though all other standard indexing worked just fine - I had to prepend an ASCII zero to the string to make it work.
This isn't the only issue I know of in the AIX 3 XL C implementation. I was working for IBM myself at the time, and spent a while tracking down an obscure bug: sometimes the GL openwin() call would fail for no good reason. First I narrowed this down to openwin failing after using the libnsf library to read NSF-format data files. Then I narrowed it down further to openwin failing after a certain free() in libnsf.
I confirmed that the free was valid (a pointer returned by malloc or realloc, and not already freed), so I suspected heap corruption. So I interposed all the heap functions with versions that dumped their parameters to a text file, then wrote an awk script which matched them up. They were all OK. I added canary pads before and after all areas to check for under- and overflow: nothing.
So I hacked my script to simply perform all the heap operations from the output file, touching a few bytes of each area, and then try openwin. And openwin failed.
Some binary searching later, and I had a precise reproduction: Allocate at least 11 areas of at least 64 bytes, and touch at least the first 8 bytes of each area, then free all 11, and then openwin fails. I fired that off to the XL C team.
Turned out to be a bug in the implementation of free().
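The canary-pad trick used in that hunt can be sketched as follows - my own minimal version, not the original tooling: stash known bytes before and after each allocation, then verify them on free to catch buffer under- and overflows.

```c
#include <stdlib.h>
#include <string.h>

/* Pad size and the byte pattern we expect to find intact on free. */
#define PAD 8
#define CANARY 0xA5

/* Layout: [size_t size][front pad][user area of n bytes][rear pad] */
void *dbg_malloc(size_t n)
{
    unsigned char *base = malloc(sizeof(size_t) + 2 * PAD + n);
    if (!base) return NULL;
    memcpy(base, &n, sizeof n);                         /* remember the size */
    memset(base + sizeof(size_t), CANARY, PAD);         /* front canary */
    memset(base + sizeof(size_t) + PAD + n, CANARY, PAD); /* rear canary */
    return base + sizeof(size_t) + PAD;                 /* hand out user area */
}

/* Free the area; return 1 if both canaries survived, 0 if corrupted. */
int dbg_free(void *ptr)
{
    unsigned char *user = ptr;
    unsigned char *base = user - PAD - sizeof(size_t);
    size_t n, i;
    int ok = 1;
    memcpy(&n, base, sizeof n);
    for (i = 0; i < PAD; i++) {
        if (base[sizeof(size_t) + i] != CANARY) ok = 0; /* underflow hit front pad */
        if (user[n + i] != CANARY) ok = 0;              /* overflow hit rear pad */
    }
    free(base);
    return ok;
}
```

As the story shows, intact canaries don't fully exonerate the application - here they only proved the corruption wasn't coming from out-of-bounds writes, which is what pointed the finger back at free() itself.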
When I procured our first RS6000 (530 H I believe) server we had 2 huge issues.
1/ We couldn't import it until the DOD in the US had issued an export licence for it as it was classed as a supercomputer
2/ Where to put it. All the other kit in the DC was in 6ft-high cabinets; this thing arrived and was basically a trip hazard (this was before rackable servers).
In the end we settled for putting a table in the DC over the server and sitting the console on top. This broke the "no office furniture at all allowed in the DC" rule but prevented a very, very expensive accident.
Ironically, about that time we also ended up buying a couple of ICL S39 L30 CPUs, which were in low-level cabinets and for which we had to purchase special racks, as they took up so much very expensive floor space that we had to rack them one above the other. This had the not terribly happy side effect of meaning that only someone over 5'10" could safely change the disk interface in use, as the switches were on the top bezel of the cabinet. I can't find a link to a racked image, but the 'marketing genius' who designed the cabinets to sit in an office environment forgot that they would be connected to standard tape drives and line printers, which had to reside in a DC. I only ever came across one installation where they were outside the DC, and that was a disaster. The link to the ICL Technical Journal with a picture of the processors contains an overview of the system, including the incredibly fast 50MBps fibre link between fast devices and the 10MBps Ethernet used to connect printers / terminals etc. This was blisteringly fast back then. https://www.fujitsu.com/uk/Images/ICL-Technical-Journal-v04i03.pdf
Should have bought a RS/6000 model 930H. Same machine in a rack (with quite a lot of empty space, which you could populate with I/O drawers containing disk or 5.25 format tape devices). You could also put a UPS in it. Much more expensive, though.
Still did not make it a mainframe, although on most integer and all floating-point calculations it beat the pants off contemporary mainframes. The leap in speed that the RS/6000 introduced really gave the industry a bit of a kick up the pants, although HP responded quite quickly, with IBM and HP introducing new systems to leap above the other seemingly every quarter for a few years.
And then came the Alpha...
And then came the Alpha .... but not in a sensible rack-mount for a couple of generations. Not until the AlphaServer 4x00 series, if I remember correctly. While HP's rackmounts for the D- and K-series were a bit of a fudge, Digital Equipment's approach was only a bastardised shelf-mount for the smaller boxes (headless workstations, 3000 AXP and AS 800) and possibly the 2000. No way the 2100 or 4000 (Cobra family) could be mounted in any standard rack. Of course I'm discounting those models that were built into their own cabinet (7000 AXP and later).
It's worth pointing out that an iPad will run rings around an early 1990s RS6000, performance-wise. Storage-wise too, I expect. Hell, I got my kids Kindle Fire tablets because they're cheap-cheap, and they'd run rings around the old 1990s RS6000s.
Still, I miss the 90s. That'll be age creeping up on me. :-/
A section of a technical manual I use, ends with the phrase "It is essential that the".
The main UK dealer and importer of the machine assures me there are no missing pages in the manual.
The German version of that section continues by listing the commissioning sequence and several settings that need to be checked or adjusted during the process.
The product in question has a reputation in the UK for being troublesome to install and failing soon after being repaired or serviced.
Let me guess: the German version is the original and the English version is the translation. For some reason translation into English from other languages is nearly always shoddy (an infamous example being the works of French author Jules Verne). One of the few known exceptions is the work of Leslie Charteris, who was bilingual and did his own translations (in both directions, as he wrote half of them in English originally).
And yes, whenever I have a chance, I prefer German originals above English translations. No problems with the translation and usually cheaper as well.
I could mention one (Olympic!) sport where the rulebook is written in German (the Governing Body being HQ'd in Germany) and "officially" provided in German and English.
In the event of a conflict, the (translated) English version takes precedence.
Not likely to ever be quite as catastrophic as say, aircraft maintenance manuals, but curious nonetheless.
"For some reason translation to English from other languages is nearly always shoddy"
Unfortunately translations are now mostly bought by the procurement department, without input from the engineers or technical writers. So the contract goes to the lowest bidder or the folk who are best at tendering procedures rather than translating. The downward pressure on price also means there's no time for the translators to familiarise themselves with the product. Essentially, a translator should be at least as familiar with the product as the user. That should include doing the courses and getting hands-on experience (which is why I have a chainsaw certificate, etc.).
Machine translation adds other problems. It can be an effective tool, but only when used by experienced translators who are familiar with the product. I've seen examples where TM left out little words like "not".
Fortunately there are some good suppliers out there (I'm just off to meet one based in Wessex) as well as customers who appreciate that their high-quality product deserves high-quality documentation (which also reduces product support costs).
Apols for this long-winded rant, but it's something I feel v strongly about, having been in this business for over three decades and being not at all happy with the current developments in the market.
Unfortunately translations are now mostly bought by the procurement department, without input from the engineers or technical writers.
While I agree with your rant, I strongly suggest you re-read my post, as the problem of shoddy translations to English is already well over a century old. Translations from English used to be quite acceptable, mostly because they are/were done by people pretty fluent in English translating into their mother tongue.
The main problem with translation to English, in my experience, is that it is a vague and imprecise language without proper verb conjugations and noun declensions, which makes it very hard work (as well as a field day for lawyers). French, German and Russian can all express with precision what in English can take lots of circumlocutory explanation.
That may be true for literary texts, but for technical manuals English is by far more precise and specific than French. I once had the opportunity, early in my IT career, to do some Assembler coding on a microchip from the French chip maker Thomson. My mentor asked me if I wanted the English manual or the French one, with a hint of a smile. Being bi-lingual, I answered that I would prefer the English version, if you please. He was obviously pleased, and that intrigued me.
When I asked why, he just handed me the French version to compare.
Oh. My. God. What a mess. They tried to keep the pages identical, but most things that were said clearly in one sentence in English were a complete paragraph in French, and still not clear after all that.
That's when I learned that, at the time at least, Thomson's official stance was to send the manual in English when you bought the compiler. The French version was an optional extra.
"for technical manuals English is by far more precise and specific than French."
Really?
"most things that were said clearly in one sentence in English were a complete paragraph in French, and still not clear after all that."
Well I've heard it said that certain glossy brochures catalogues etc from a now-departed multinational were longer in French or German than in English (DECdirect UK catalogue thinner than DECdirect Germany, for example). But that's hardly what I'd call a technical document, though others might disagree.
"Assembler coding on a microchip"
That's more like what I'd call a technical subject/document, and I'd expect that a decent technical description of the instruction set and operations, perhaps based on something like BNF notation, could be the same in English or French, and hence equally precise, once the terminology had been defined.
A compiler or assembler user guide? Different subject.
Well, I too have worked with Thomson - VME systems coded in assembler, as it happens, and I preferred the French manuals. I suspect that you had a manual originally in English translated into French by a nontechnical translator, while I had a manual originally in French and an English version produced also by a nontechnical translator. In my time I have had real problems finding decent translators for localising software, and even dictionaries frequently contain howlers. Google Translate, for instance, just gives up on those important French words bailable and ponable.
My all-time favourite, however, was a German magazine article about the Intel 425. The author was clearly clueless, was working from the English data sheet, and had just given up and was prepared to print any old gibberish. According to him or her, the 425 contained a small boat floating around a headland. I'm sure you can all easily work that one out.
For some of the most amusing data sheet howlers, take a look at older Japanese parts where the datasheets have been translated (poorly) into English.
A classic for this was the passive parts (since significantly improved) but you can still find datasheets for current parts where the syntax is just wrong.
New Japan Radio still has many such datasheets such as this one (which is not too bad but the syntax can make it difficult to read at first pass).
Not as good as the spoof Write Only Memory from Signetics although that was deliberate whereas the Japanese datasheets are unintentionally amusing.
I have a 1960s Service Manual for a Japanese tape recorder that asks the engineer to "ceremonially disembowel the unit with a screwdriver number 2". There are several other interesting translations in there and their explanation of recording bias is utterly bizarre!
French is more static as a language than English which probably explains why their technical documentation is both elegant and verbose.
An early technical manual in French is the 18th-century treatise by Couperin on harpsichord playing technique. It's perfectly readable. Contemporary English texts are much more difficult to read. (This is coming from someone who's a native English speaker with very poor French language skills.)
>is that it is a vague and imprecise language without proper verbs and noun declensions
Now you understand what the job of a technical writer is -- it's not to write the text but to turn engineers' screed into something clear and unambiguous. The job is often neglected today as just another unnecessary overhead, but a good technical writer is worth their weight in gold.
Google Translate and other such apps have also driven down the price and quality of translations - many companies will run text through these programs and then ask the "translator" to clean up the resulting product, while expecting it to involve less work and paying accordingly.
I noticed some really poor translations when watching the Netflix show "Zone Blanche" where the subtitles used American slang phrases that poorly communicated the original French dialogue or idiom. Thankfully it's only a TV show and not an airplane manual.
As I recall, the English subtitles in Das Boot (if you watch in the original German) are different from the dubbed English spoken by the actors. It being a co-production between Columbia Pictures, German TV and, I think, the BBC, mostly they used the original actors to dub themselves - so it's still top-quality acting. I seem to remember the subtitles are less polite than the spoken English version. Although it's difficult to get the naughtiness levels of swearing right between languages.
They've since digitally remastered it, and went back to the few original actors who'd been badly dubbed by other people, and got them to do it themselves - making the re-masters much better. I know I should watch foreign films with subtitles (while smoking unfiltered French cigarettes), but I find them hard to read without sitting too close to the telly. In cinemas I have to use a monocular/mini-binocular and that's just bloody painful.
I watched "Das Boot" with the original speech and English subtitles, sitting beside someone who was really fluent in German. Said person would, frequently, exclaim "That's not what they said!" when a particularly egregious bit of subtitling popped up. Early versions of "Das Boot" are apparently infamous for both the subtitles and the dubbing, in which a lot of the original German vanished. Later versions were apparently closer to the German.
Subtitles are a very specialised art form as a literal translation is nearly never possible, so sometimes the meaning of the conversation must be caught. I used to have the same kind of problem with English and German spoken movies (and TV shows) with Dutch subtitles. Nowadays I hardly ever bother reading those subtitles anymore.
Weirdest experience I ever had in a cinema was watching Saving Private Ryan in Brussels, in English with French and (presumably Flemish?) subtitles.
The sound wasn't terribly good and after a bit I realised I was reading the subtitles and "hearing" the characters speak French.
(The conclusion I drew from the film was that if my father - actual first wave on D-day - saw it, he would have a major stroke, so I decided to keep him ignorant of its existence.)
There is also the problem that there is often not space for all the words of a proper translation - they have to cut the sentences without destroying the meaning.
And then you come to words that just do not exist with all their connotations in other languages - look up "Vergangenheitsbewältigung".
Many years ago my Mother (a student of French among her many accomplishments) was amused when watching a late night movie (in French) set aboard ship, that the subtitles translated the captain's "Nom d'une pipe!" as "Shiver me timbers!"
Some of my schoolmates at St John Backsides Comprehensive were hysterically amused that "Name of a pipe!" was a curse.
I only speak broken Franglais, but I've always assumed it was obscene idiom. It has been my experience that all surreal French statements boil down to obscene idiom.
It has been my experience that all surreal French statements boil down to obscene idiom.
I was once in the company of a French friend who read a letter containing some bad news and exclaimed "la vache!". I understood enough French to translate that as "the cow!" and burst out laughing at such an odd thing to say, before realising from his reaction that he was actually quite upset. I still don't know what the translation is, but evidently nothing to do with livestock.
"Google Translate and other such apps have also driven down the price and quality of translations - many companies will run text through these programs and then ask the "translator" to clean up the resulting product, while expecting it to involve less work and paying accordingly."
Unless Machine Translation is highly tuned to the project, post-editing its output is very time-consuming and likely to be more expensive than less automated approaches. I just re-translated a whole corporate magazine because the original translation (which the client sourced from some agency) wasn't good enough. Discovered later that the original translator had probably used DeepL and post-edited that - but didn't edit it enough :(
However, Computer Assisted Translation (translation memory systems) is very useful for increasing productivity and consistency while relieving the translator of the boring, repetitive bits.
You all think it's fun and games when reading the English translation of a French/German technical manual.
At least those two languages are close to English, so any semi-trained person can look at the German and think "ahhh, that's what they meant" when trying to understand the badly written English.
We had Japanese machines..... the standing joke is that the English technical manuals were produced from the original Japanese manuals by someone who spoke neither language.
Fanuc being particularly guilty of this.......
Icon... for all the tech guys trying to sort problems using the technical manuals/guides instead of cold, hard, 3lb lump-hammer-shaped technology
the standing joke is that the English technical manuals were produced from the original Japanese manuals by someone who spoke neither language.
Until some time after 1853 (when US Commodore Matthew C. Perry came to Japan), the only way to translate from Japanese to English (and back) was by way of Dutch.
It's worth reading the notes for the Loongson OpenBSD port :
'Unfortunately, most of the Loongson 2F-based hardware available at that time suffers from serious problems in the processor's branch prediction logic, causing the system to freeze, for which errata information only exists in the Chinese documentation (chapter 15, missing from the English translation), the only English language information being an email on a toolchain mailing-list'
Been there, done that.
It's Somebody Else's Problem when compiler bugs or optimisations introduce bugs into your own code.
At least this software problem wasn't actually flying the 747. The IBM RS6000 was a new beast in 1990 - how many people at the time knew that the C compiler would do that with those flags set? Obviously this is a situation where a 100% comparison test against the previous paper book seems justified.
Boeing gambled the company's entire future on the 747 being a success. It was built by a company that wanted to build the best machine possible, and everyone involved knew that the effort put into the designs and actual metal-bashing could be a deciding factor in the life or death of several hundred people. More to the point, beancounters were not involved in the design decisions.
Half a century later, Boeing gambles the company's future on a software kludge to short-circuit the type certification process and produce a cheaper aircraft to buy.
The 737 Max will forever have a bad smell associated with it because the decision was made to save money - far worse than the Comet-1 design problems, which stemmed from the then-unknown effects of metal fatigue, and far worse than the DC-10, where shoddy maintenance brought them down. The only good things to come out of the Max will be that software will now be rated level with design and maintenance as a no-shortcut item, and no airline will want to be seen buying self-certified aircraft.
"he IBM RS6000 was a new beast in 1990,"
It was a beast in general - an extremely powerful, well built machine with a good version of unix (AIX).
" how many people at the time knew that the C compiler would do that with those flags set? "
I've always had an unwritten rule of never setting optimisation flags on ANY compiler unless speed was absolutely critical. There have been too many cases of optimisation screw-ups with a number of C/C++ compilers for explicit optimisation to be a good idea in every build. The problem is that the compiler has to be smart and make a lot of assumptions - assumptions which very occasionally are wrong - and compiler optimisation code is so complex it often has its own set of bugs, as in this case.
The GNU C++ compiler, when optimizing, assumes you are writing correct code and will totally screw you if you are not. Examples are not following aliasing rules, certain signed/unsigned conversions, pointer comparisons between pointers to different allocation units, etc.
The problem is bad code but you have to be a language lawyer to spot some of these problems.
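To make the "language lawyer" point concrete, here's a minimal C sketch (my own illustration, not from any compiler manual discussed here) of the classic strict-aliasing trap, alongside the well-defined memcpy alternative:

```c
#include <string.h>

/* Undefined behaviour: reading a float's bits through an unsigned int
 * pointer violates C's strict-aliasing rules. At -O2 and above the
 * optimiser may assume objects of different types never overlap and
 * reorder or drop accesses, so code like this can break silently. */
unsigned int bits_via_cast(const float *fp) {
    return *(const unsigned int *)fp;
}

/* The well-defined alternative: copy the representation with memcpy.
 * Modern compilers turn this into the same single load, so doing it
 * correctly costs nothing at runtime. */
unsigned int bits_via_memcpy(const float *fp) {
    unsigned int u;
    memcpy(&u, fp, sizeof u);
    return u;
}
```

Assuming IEEE-754 single precision, `bits_via_memcpy` on 1.0f yields 0x3F800000 at any optimisation level; the cast version may or may not, depending on what the optimiser decides to assume.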
> I've always had an unwritten rule of never setting optimisation flags on ANY compiler unless speed was absolutely critical.
On the Unix-like platforms it is the convention that if you don't set any optimization flags, you get very dumb code that maps closely to the source (and therefore works well with debuggers), but is needlessly slow, because even obvious optimizations are not done. GCC follows this convention.
By contrast, on Windows the compilers always do the basic optimizations, unless you explicitly disable it. One could say that Microsoft compiler without optimization flags is roughly equivalent to "gcc -O". This difference in conventions sometimes trips people up and makes them claim GCC generates much slower code than Microsoft.
The basic "-O" level of GCC is pretty safe.
Some people don't learn. When designing optimizing compilers, you need to try it out. Compile the compiler with optimization on and off to see if you get workable results.
IBM found this out back in the '60s when working on their Fortran H compiler. It was written in (of course!) Fortran, and compiled with "OPT=2". Then it was tested again to see if it produced the same compiler. They did a bunch of work to get it right. Sometimes it would "optimize out" complete sections of code it thought didn't do anything.
One must ALWAYS realize that any optimizations are fickle things.
> IBM found this out back in the 60's when working on their Fortran H compiler. It was written in (of course!) Fortran, and compiled with "OPT=2". Then tested again to see if it made the same compiler.
This actually happens when you do a full GCC build from source with the provided build script. It uses the -O2 level by default, and builds the compiler three times: first with the compiler you already have on your system, then with the just-built compiler, and then again with the second generated compiler, comparing whether stages 2 and 3 produced the same result. See https://gcc.gnu.org/install/build.html
The only problem here is that building a compiler is not in itself a complete test (the GCC sources also include a separate test suite). There are many language features it does not exercise. For example, compilers perform very few floating-point calculations. One C compiler whose internals I know does not use any floating point at all. All compile-time floating-point arithmetic is performed using a portable internal representation implemented in terms of integers, to ensure the result is totally independent of the host CPU it is running on.
Way back in '86 I moved into IT working with what had recently been renamed from Marathon to Informix (much better name than Snickers!).
The system I started on was the older, non-SQL version. The application environment had two classes of products. One class had serial numbers and individual rows in a table (or records in a file, as the older Informix terminology had it). Assuming the warehouse hadn't misplaced any, stock levels could be assessed on demand by counting* the rows with the status code showing the individual items as being in the warehouse. The other class didn't, and stock levels were assessed by dead reckoning between stock-takes. The code for determining stock level was a bit of C that looked at the product type and then did the appropriate bit of calculation. There were some complications around bill-of-materials issues in that some products consisted of assemblies of other products.
Eventually the older Informix version was to be replaced by SQL. A colleague wrote some YACC & Lex stuff** to do the grunt work of replacing much of the old-style C with the appropriate SQL versions. It left a good deal of manual tidying-up to do, and the more complex bits - of which there were few, but they included the stock level calculation - had to be done by hand. I did the stock level by a somewhat convoluted bit of SQL involving NULL and the consequent 3-way logic. This ran successfully for years and survived migration to Informix on VAX (the reason for the SQL conversion), a hasty retreat back to Unix (now on HP), Informix 2 to 4, and conversion of most if not all of the C to Informix 4GL.
A little while after I'd moved on into freelance I had a gig with Informix which involved a bit of hand-holding with the old firm when they migrated to a bigger box and Informix 6 or 7. The actual cut-over date was after I'd finished the gig, but I dropped in to make sure it was all working properly. I was told that in the end they'd discovered just one problem in the entire suite. I looked at it and recognised it as the SQL stock level calculation I'd written 10 years earlier. I looked again and realised there was an error in the 3-way logic, easily corrected. But for a decade this had run doing what I meant, not what I'd said. The really odd thing was that the bloke who'd taken my old job when I got eased out was convinced that the SQL was correct and there was a bug in the SQL engine. If there was, it must have been that they'd dropped the telepathy function that worked out what I wanted it to do.
* At this remove I can't remember whether the C library used to access the database had the equivalent of the SQL COUNT or whether it had to be hand coded.
** I'd kept a copy of that. As the old ALL-2 library Informix wasn't going to be useful for Y2K, I hoped there'd be a market for last-minute conversions. There wasn't.
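For anyone who hasn't been bitten by SQL's NULLs, the 3-way (three-valued) logic mentioned above is easy to get subtly wrong because UNKNOWN propagates through AND and NOT. A toy C sketch of the truth rules (my own illustration; the names are invented and this has nothing to do with the original Informix code):

```c
/* SQL-style three-valued logic, sketched in C. In SQL, any comparison
 * with NULL yields UNKNOWN, and a WHERE clause keeps only TRUE rows -
 * so both "x = NULL" and "NOT (x = NULL)" drop the row, the classic
 * trap when hand-rolling this logic. */
typedef enum { TV_FALSE, TV_TRUE, TV_UNKNOWN } tv;

/* AND: FALSE dominates, then UNKNOWN, then TRUE. */
tv tv_and(tv a, tv b) {
    if (a == TV_FALSE || b == TV_FALSE) return TV_FALSE;
    if (a == TV_UNKNOWN || b == TV_UNKNOWN) return TV_UNKNOWN;
    return TV_TRUE;
}

/* NOT: UNKNOWN stays UNKNOWN. */
tv tv_not(tv a) {
    if (a == TV_UNKNOWN) return TV_UNKNOWN;
    return a == TV_TRUE ? TV_FALSE : TV_TRUE;
}
```

Forgetting that `tv_not(TV_UNKNOWN)` is still UNKNOWN (not TRUE) is exactly the kind of slip that can hide in a convoluted stock-level query for a decade.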
You'd think in a tightly controlled industry such as aviation someone might have noticed critical documents not showing up in a maintenance system during test procedures. But then this is IBM and Boeing we're talking about, so I guess testing was an afterthought shoehorned into some pointy-head's MS Project plan.
Yup. There's a reason why these days we try to have a "Production Acceptance" testing phase: it worked in test, it worked in Dev: now we've deployed it, we need to know it definitely, 100%, does what we think it will.
Sometimes there just isn't a way to do a full-scale test before deployment.
My first ever job in the real world was to find why code for processing sonar data, running on a PDP-11, kept crashing with a stack overflow. Always a stack overflow, but not always after the same amount of time or processing. The code was written in CORAL-66, the UK MoD standard block-structured language (like Algol, Pascal or Ada, if that helps).
It turned out to be a compiler optimization bug. There was a statement which started "if A and B then...", where A and B were logical expressions of some complexity. Obviously, if A turns out to be false, "A and B" is false, and you don't have to evaluate B.
This was inside a double for-loop, which was executed for each sonar "ping" in the input data. The number of iterations of each loop depended on the input data. Exit from the loop took place when "A and B" was false.
We eventually found that the compiler had created code to evaluate A, and leave the result on the stack. However, thinking that it was smart, it had also planted code to exit directly from the loop if A was false - without popping A off the stack.
Hence the stack would grow, not always at the same rate, because it depended on the input data and on the interactions between simultaneous real-time processes. But eventually you would get a stack overflow.
A couple of years later I actually wrote an optimization pass for a Pascal compiler. Made damn sure not to make mistakes like that!
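The contract the CORAL-66 optimiser fumbled is the same short-circuit rule C's && makes: a false left operand means the right side must not be evaluated at all, and no intermediate result should be left lying around on the stack. A trivial, self-contained C illustration (nothing to do with the actual sonar code):

```c
/* Counts invocations of the right-hand condition so we can observe
 * whether short-circuit evaluation actually skipped it. */
static int calls = 0;

/* Stand-in for an "expensive" right-hand expression. */
static int b_side(void) {
    calls++;
    return 1;
}

/* Evaluates "a && b_side()". With a == 0, the && operator guarantees
 * b_side() is never called - the behaviour the buggy compiler got
 * right in control flow while leaking the left operand on the stack. */
static int check(int a) {
    return a && b_side();
}
```

The bug in the anecdote wasn't in the *semantics* of the short-circuit exit but in the generated code's bookkeeping, which is why it only surfaced as a slow, data-dependent stack leak.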
A good few years ago now I was working for a decent-sized airline and got pulled into their deployment of new aircraft maintenance software after the Technical Project Manager got hit by a car - yes, it does happen!
Originally I was just involved in writing some of the interfaces between the maintenance software and other airline systems, but now suddenly I was replacing the guy, who had been a direct report to the Chief Engineer of the airline and was one of those engineers who had ended up fiddling with the IT on the side.
One of the last things he had being doing was loading the maintenance schedules for the new aircraft we had arriving at the rate of 2 a month. Now aircraft being complex beasts have maintenance schedules for literally hundreds of thousands of their component parts, based on half a dozen variables like the number of take offs or landings and the time in the air. Like most tens of millions of dollars of kit, even the most "off the shelf" order is full customisations for the Airline or their Leasing Company, so these service intervals actually become custom on a per-plane basis.
The "database" for these service intervals came as either crazy deeply nested XML files (before XSD's and programs to read them were the norm), or humungous CSV files - these had to be loaded into the aircraft maintenance system every time a new aircraft was due to be delivered. Unfortunately because of some of the customisations this file needed to be tweaked, with some of the service intervals needing changing or adding. What concentrated the mind wonderfully was that one line might be about the service interval of the Tea Boiler, the next about the Undercarriage, all listed with obscure part or sku codes with no plain language descriptions, and all I had to go on was the scrawled notes of my vehicle-embracing predessesor.
Normally I would have whacked the files over to Unix and messed with sed and awk until I had a "known good" substitution program, but our BOFH, being rather BOFH-like, refused to let anything come across the firewall from the Engineering LAN as he thought my predecessor was a bit of a cowboy. (He wasn't, but being an aircraft engineer he had a much finer appreciation of relative levels of risk than the BOFH.)
So the only resource I had was a Win 95 machine with 4Gb of RAM that fell over every time anything more onerous than an email client ran on it.
So, to cut a long story short, I ended up hacking together updated maintenance schedules on said PC with a freeware copy of UltraEdit (or a relation) that gobbled up the 4Gb of memory at the drop of a hat, and diff'ing the files to death before uploading them to the aircraft maintenance system.
Then, to make matters worse, I was catching those same planes 2 or 3 times a month to gallivant off and see the GF. It's been so long ago now that I think most of those aircraft have moved on, but I still get nervous every time I see one of my former employer's aircraft at a gate....
IIRC, if you give Win 95 that much RAM, it could only actually use about 3.5GB of it due to the limited address space of 32-bit OSes.
Ultra-Edit's saved my bacon in similar situations more than once. Sometimes you just have to deal with unreasonable vast text files, life's just like that.
Windows 95 cannot cope with 4 GB of RAM. Just under 512 MB would be believable. Even that feels pretty far-fetched for the PC platforms (chipsets, motherboards, RAM technology) of the Windows 95 era. Apologies - this is not meant to ruin the otherwise excellent story. I just know for a fact that stock Win98 SE has a problem with 512 MB or more of RAM, and the machines where this became a problem only arrived several years after Windows 98 was dead.
I think the Win9x functional memory limit was lower. I bought a PIII 64MB system with Win98SE which came with a free upgrade to WinMe. I also bought two 128MB memory sticks to max out its memory capacity at 256MB. WinMe was crashing due to a memory leak; I downgraded back to 98SE and it was too. I know they ran at 128MB without the memory leak; I don't know about 64+128. I installed Win2k on it and used it that way for 10 years.
gave me wise words...
For safety critical applications always code directly in assembler, as compiler optimisations don't always produce the same objects from the same source. This would often introduce issues by itself, notwithstanding compiler bugs directly.
This may not be necessary for configuring fancy displays etc., but certainly for the underlying devices providing the controls it seemed like good advice in terms of evading compiler introduced issues (which are often very hard to diagnose, and then even harder to have resolved...)
"For safety critical applications always code directly in assembler"
Given the typical number of errors per line of code, and the improved ease of making errors in assembler, this is more likely to change your risk unpredictably.
The better answer is probably a better testing process.
That said, there have been moments where I examined compiler generated code as assembler for debugging problems. Sometimes a mixed strategy is best.
ICL documentation was, by and large, comprehensive, thorough and accurate. If there was a blank page for reading flow reasons, it would be printed with the words. "This Page Is Intentionally Blank". This assured the reader that nothing had been left out. It did, however, raise the philosophical point of what actually constituted a blank page.
Agreed: ICL documentation was generally pretty good, even if for some programs it was a case of build-your-own-manual. The 1900 PLAN4 manual was a good example. It came in a large ring binder and the original text was never reissued. To make it usable meant updating a new copy from (IIRC) 12 hefty amendment packs, in total nearly half as thick as the original manual. These were applied sequentially and must have replaced over 25% of the original pages because some amendment packs fixed previous amendment packs. Or so it seemed as you did it, since a fully amended manual was little thicker than the original.
However, what I came here to say was that I did once find an IDMSX error: in an ordered set containing more than one record type the manual said the set order would be strict key sequence regardless of record type, which is what we wanted. But what IDMSX actually did was to order the set by key within record type. Naturally, we raised a bug and then waited... and waited. Eventually the updated software and manuals were released - the fix turned out to be updating the manual to specify what IDMSX had always done.
I once flew from NY to Heathrow in a Continental Airlines 747 that was falling to pieces during the flight.
A row of seats just behind mine sheared away from the floor mountings, forcing the crew to have to relocate the passengers, and when the stewardess pressed the button to start the in-flight movie there were problems. She popped open the overhead inspection panel and was deluged in coils of celluloid, which she rammed back into place before departing the area to mixed laughter and groans.
I heard later that the airline had snapped up large numbers of planes from the recently bankrupt People's Express airline.
You had to travel People's Express once just to know what life was like as a boat person.
Stevie,
If you never flew Trajik Tajik Air in the nineties, you never lived. Their passenger planes were converted Soviet bombers, on the whole; there were still cloth and wire biplanes standing around on the runway at Dushanbe and Khojand, and the airline as a whole was so bad it wasn't allowed to fly out of the country.
Which meant that getting to Dushanbe required a sane airline to Tashkent, then a couple of hundred kilometers overland by taxi, and then, assuming that the pilot remembered his money to buy some jet-A, you got to fly.
At last someone has outdone my Aeromexico and Indian Airlines stories.
But, you know, many people have unknowingly flown in Soviet military aircraft. The helicopters from Malta to Gozo were for years ex-Soviet MI-8. Disappointingly Wikipedia doesn't seem to know this. It's the only civil aircraft I have flown in where you could open the windows the better to take photos.
So.....some RS6000 testing way-back-when missed some stuff.
In the last month, I've seen an agile "expert" and "evangelist" write this in an online forum dedicated to "agile":
- "We don't test to improve our product. We test to improve our process."
Yup....accurate quote!!! Huh???? We don't care about the product.......but it's fantastic if the "agile" process improves over time.
Maybe this explains some recent software failures in the aerospace business????
Back in the 90s, USAREUR (U.S. Army in Europe) thought it would be cool to issue hundreds of RedHat-powered laptops to various commands in order to command and control a battlespace.
Problem is, America's G.I. Joes back in the 90s were just not that Linux-savvy when it came to using the damn thing.
Hell, the Army back then didn't even have a computer Military Occupational Specialty (MOS), and a unit was lucky to have one or two PCs to take with them into the field.
Not only that, they declined to use DNS or DHCP servers on the [REDACTED] network they would be putting these things on, requiring someone to go out and hand-program the IP addresses as well as copy over a massive HOSTS file to hundreds of these laptops.
Luckily, a certain Armored Division found one of its tankers who knew how to "operates the RedHat" and quickly deployed his unhappy hide across the entire exercise battlespace to do all of that for them.
"Here's the OPORD, keys to the Ford pickup truck, and 200 gallons in AAFES fuel coupons. Don't come back until the exercise is done. GO!"
I was that unlucky soldier. I hate Warlord Notebook and am glad they killed it off.
Recently writing an interpolation algorithm. Needs to be quite fast since it gets iterated a lot (reimplementing someone else's work to look into aspects of its behaviour). Finally get the first shot working, but far too slow. Bright idea about restructuring it, implemented. Now 40 times faster, but produces pretty glowing balls instead of an approximation to the input. Today's task...
"If it's a Boeing, I ain't going..."
Or rather the original "If it ain't Boeing, I ain't going" sticker is present on the guppy cargo plane (I think) presented in Airbus museum. While not blatant, the sticker was not hidden either. I thought it was cool to still accept that in a museum dedicated primarily to Airbus.
with my family, including my brother who is in the Air Force.
Halfway through the trip, he turned to me and said "I'm afraid of flying!". I asked him how the heck he could be afraid of flying if he joined the Air Force.
He pointed to the roof of the plane and said "I've seen the guys that work on these planes to keep them flying."
The accusations soon began flying, with some suggesting that Pete had somehow "added the complex diagrams and text" via his software.
--> For all of you, my brethren, who have been accused of so much that you would not and could not have done. Like the time you moved that mouse and somehow more porn than could be downloaded in a day 'just appeared' on a customer's machine, or the machines that somehow never worked right for the users after you touched them, even though you can never be shown how things don't work. Or how you somehow changed the content and layout of the menus in Word 97. Or how the user could perfectly verbally dictate documents to the machine, even though their computer had no microphone, nowhere near the necessary CPU or sound card, and no such software ever installed. And you only gave them the dial-up number of their ISP.
I think those discussions will be the ones that cause me PTSD flashbacks for the rest of my life. The things I could not possibly have done in any reasonable sense yet have been loudly and sometimes almost violently accused of. More so even than some of the pictures I've seen on people's machines.
I'll be over at the bar turning the top 5 shelves into stacks of empties. Don't normally drink but there's always a good time to try something new...
I had a friend on my computer course at uni who ended up doing process control in UK nuclear reactors...
I remember a C4 documentary on the telly in the '90s rumouring poor software, so I called her up.
She not only confirmed it, but said there was "far worse" going on!
If a 747 crashes, that's 400-500 people's lives, but if a reactor goes up.....
About 25 years ago there was a TV documentary series on the development of the Boeing 777. In Episode 3 (link below) Pratt and Whitney had designed an engine for it and Boeing’s Director of Engineering, Ronald Ostrowski and others, didn’t want to test the engine on a Flying Engine Bed, i.e., a powered aeroplane with the test engine attached. They reasoned that the data from the ground engine testbed and computer models would be sufficient and they could skip the usual FEB testing, thereby saving Boeing ten million dollars and time. Boeing’s John Cashman, Chief Pilot, disagreed and managed to get Boeing to agree to a FEB and the test engine was attached to a 747. On the third flight the test engine surged on take-off, a problem that was not detected by the ground engine testbed. A surge is not a very serious problem on one engine, but if two engines had been tested on a 777 body and both engines had surged, as was quite likely, it would have led to some loss of power at completely the wrong time.
The whole series is an interesting insight into aeroplane development. Episode 4 covers the computer systems and should have been titled ‘Nobody knew how difficult it would be’. Of course, everybody should have known how difficult it would be.
https://www.youtube.com/watch?v=esmbJjK0M7Y
... at Boeing following the Midlands crash. The FAA came in and did an audit of our manufacturing and QA processes. They were not happy (although the crash was due to unrelated issues). I was part of a team that cleaned up the engineering/shop floor QA systems. The FAA came back, looked over our new systems and left pleased. Then, over the next few years, upper management proceeded to put things back the way they were.
It's a lot like that cat, pushing the glass off the counter. They stop and look all apologetic when someone is looking. And then they go right back to it.
MCAS seems to have been awfully implemented, to say the least.
But it makes you wonder why they didn't consider using airspeed as a reasonable sanity check on whether a stall was actually happening?
Increasing airspeed is utterly inconsistent with a high AoA and a stall. Also, we do still have gravity. So unless the plane is turning (which gyros can let us know) or accelerating, AoA can be estimated (as a sanity check) from the gravitational pull of the Earth.
There's a few things in this article that warrant a bit more explanation.
'They were using RS6000s to do the maintenance from the screen, no permanent printed copies were used,' - this seems unlikely but perhaps the key word is permanent. The obvious question here is the cost and lack of mobility of the RS/6000 and its hefty CRT screen. It's more likely to be in a clean office environment, away from the less-clean, huge aircraft. I wonder if the software had annotation features too. "from the screen" seems unlikely for a lot of work.
'Pete told us, "it turned out [to be] an optimisation bug in the IBM C compiler used on the RS6000. It was overwriting registers that were being used to store local C variables when the call stack got too deep." ' - this feels like it needs an extra line or two of explanation; you can't normally just trample on registers. If this is about preserving values for the calling convention, then why did it fail only at a certain stack depth? That doesn't make sense. The POWER architecture doesn't use register windows, AFAIK?
As others (@Wellyboot and the first Anonymous Coward replying to @Giovani Tapini) have pointed out, principally this appears to be a failure of testing to verify that a representative manual (or two!) was rendered with the same content as the originals. The most basic manual test beyond looking at page 1 is to go to the end and see how many pages there are in total. That would have caught this problem (' "After about 30 pages I reached a page where my Windows app showed more data than the RS6000 app" ') so again it feels like more explanation is needed. Perhaps repagination is an issue here, but probably not, as would they really want to change the page numbers?
@boltar, @svm, @MacroRodent and @Herby have some very pertinent points and sound advice that many of us will have forgotten. It's worth running tests with both an unoptimised and an optimised build. This could uncover ultra-rare compiler bugs, but is more likely to highlight bugs in code that relies on undefined behaviour in the language, whose effect the optimiser happens to change. This is another omission from the article: whether the code was relying on language features with undefined behaviour.
I'd also highlight the value of compiling on different platforms and comparing test results even if you don't intend to use each target in production. This can be a useful and fairly cheap way to find bugs like this and others.
Reporting ' "I saw no press reports of bad maintenance." ' is a bit disingenuous. Does Pete scan all the press? Does he read the aviation trade press? Did the article's author go back and look for this? Was this newsworthy enough for a journalist to cover? Were there any incidents or accidents linked to this problem? Presumably not.