The immediate action taken was to remind operators of what to do in the event of such a triple failure
Hit the brakes hard?
Airbus is to implement a software update for its A330 aircraft following an incident in 2020 in which all three primary flight computers failed during landing. The result was the loss of the thrust reversers and autobrake systems, leaving the pilots to use manual braking to bring the aircraft, a China Airlines A330-302, to a halt …
> Now we really need to know how it is that a triple system went haywire like that
What? You mean something like perform an investigation and root cause analysis, produce a report from that investigation, undertake rectification work to fix the root cause, and issue a work-around while that rectification work is undertaken?
Something like what this article is reporting on? It links to the report of said investigation, and even provides a summary of the root cause:
"The root cause was determined to be an undue triggering of the rudder order COM/MON monitoring concomitantly in the 3 FCPC. At the time of the aircraft lateral control flight law switching to lateral ground law at touch down, the combination of a high COM/MON channels asynchronism and the pilot pedal inputs resulted in the rudder order difference between the two channels to exceed the monitoring threshold."
It then provides the timeline for the rectification to be complete (i.e. releasing a patch) and the work-around: "The immediate action taken was to remind operators of what to do in the event of such a triple failure."
Seems to me they already know "how it is that a triple system went haywire like that" and have an incoming fix - which is the entire point of this article.
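For the software-minded, the quoted mechanism can be caricatured in a few lines. This is a hypothetical sketch based only on the report summary above; the names, structure, and threshold value are all invented, and the real FCPC logic is of course far more involved:

```python
# Hypothetical sketch of COM/MON cross-channel monitoring, based only on the
# report summary quoted above. Threshold value and names are invented.

RUDDER_DIFF_THRESHOLD = 5.0  # degrees; placeholder, not the real figure

def monitor_rudder(com_order: float, mon_order: float) -> bool:
    """Return True if the COM and MON channel rudder orders agree.

    If the two channels' orders diverge beyond the threshold, the
    computer declares itself faulty and drops offline.
    """
    return abs(com_order - mon_order) <= RUDDER_DIFF_THRESHOLD

# At touchdown, channel asynchronism plus a pedal input can make the two
# channels sample different pedal positions: both orders are "correct" but
# momentarily far apart, tripping the monitor in all three FCPCs at once.
assert monitor_rudder(2.0, 3.0)       # small difference: healthy
assert not monitor_rudder(2.0, 9.0)   # transient divergence: computer trips
```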
This is what Boeing was doing when the second 737 MAX crashed, with pilots fully informed about the entirety of the MCAS system and carrying a detailed memorised list of "Do Exactly These Things In This Exact Order".
Every pilot was reminded that if there is too much trim, use the wheel-mounted, thumb-activated trim switch. If the trim runs when you don't want it to, use the trim-motor disable switches after getting the plane back in trim with the trim switches.
Depending on pilots doesn't work anymore as ET302 and PIA 8303 showed very clearly.
Boeing's advice following the Lion Air crash was to turn it all off and use the manual trim wheels. What they didn't tell anyone (because they never tried it themselves) was that the trim wheels would be immovable if the aircraft got too far out of trim.
Thus Boeing presented an unworkable work-around, and tragically it was found to be unworkable by the Ethiopian pilots. That's why they turned things back on, to get power assistance back, only for MCAS to send them to their doom. There was much unjustified criticism of these pilots for not throttling back, something that would itself have been fatal at low altitude; it would have caused an additional nose-down pitching moment with no height in which to lose speed before impact.
That's how badly Boeing handled this one.
However, Airbus's work around is valid, and leaves the option of a go around as required.
"when the second 737 MAX, with pilots fully informed about the entirety of the MCAS system and detailed memorization list of "Do Exactly These Things In This Exact Order" crashed"
That never happened. MCAS incidents greatly outnumbered the crashes. The crashes only occurred when junior pilots without sufficient experience and training failed to override MCAS.
There was no lengthy checklist to be memorized, nothing complicated to do, just a pair of cockpit crews who weren't up to the standard Europeans and Americans would expect from airline pilots.
They didn't need to retrain. They haven't had to retrain. The MAX is back in service without any retraining.
There was a minor problem with one system, which caused two crashes due to inexperienced cockpit crews. There have been no major changes to the plane made as a result.
"They didn't need to retrain. They haven't had to retrain. The MAX is back in service without any retraining. There was a minor problem with one system, which caused two crashes due to inexperienced cockpit crews. There have been no major changes to the plane made as a result."
So basically you are saying that Boeing took the plane out of service for a year and did nothing? The changes included elimination of the system's ability to repeatedly activate, and allowing pilots to override the system if necessary. Boeing also overhauled the computer architecture of the flight controls to provide greater redundancy.
And yes, the MAX now requires retraining, which was the big issue in the first place: it was sold as an upgrade to the 737, but no mention was made of the new systems.
Sounds like the usual explanation for a triple failure: all three systems were identical so they had the same failure mode, one after another, when a very rare but physically possible event occurred.
There is a known fix for that: three independent implementations, by three engineering teams who do not talk to each other. All three certified, independently.
Three times the development cost, but probably a million times less likely to experience triple failure.
I am disillusioned. I thought aerospace designers knew all this.
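The intuition can be put in numbers with a toy Monte Carlo. The failure probability below is made up, and "independent" here means statistically independent, which real N-version programming only approximates:

```python
import random

def p_triple_failure(trials: int, p_fail: float, identical: bool) -> float:
    """Estimate the probability that all three channels fail on one input.

    identical=True  -> the three channels share one design, so one bad
                       input fails all of them together.
    identical=False -> each channel fails independently.
    """
    triple = 0
    for _ in range(trials):
        if identical:
            if random.random() < p_fail:
                triple += 1
        else:
            if all(random.random() < p_fail for _ in range(3)):
                triple += 1
    return triple / trials

random.seed(42)
p = 0.01  # per-input failure probability; purely illustrative
shared = p_triple_failure(100_000, p, identical=True)
independent = p_triple_failure(100_000, p, identical=False)
assert shared > independent  # roughly p versus p**3
```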
AFAIK they are not identical in construction or programming - the several development teams are given identical specifications and act accordingly, which was the failure mode here.
Simon says "Do this" and they did it.
The underlying specification is flawed, and that likely means the same for much/most of the Airbus fleet, since commonality of performance via computer interference is the main selling point.
You are correct. It is the HAL conundrum, not to put too fine a point on it. They should have paid more attention to their triggers instead of going, "good news, we solved it," and applying the same code three times in a row; if that's what actually happened. Nothing is effectively redundant unless it is purposely made to solve the requirement in a unique manner. That being said, I think perhaps a little external monitoring of crosswind conditions on the aircraft might go a long way toward giving the algorithms sufficient data to render the system effective.
"The trouble was a faulty supposition."
Yup, this happens a lot everywhere. The system performed to spec. The problem is that the spec was wrong.
It's the primary reason for cost overruns in various contracts and for "goalpost shifting" - but in that case the contractors usually know in advance it won't work as designed and say nothing, because they can charge extra for making it work.
In some cases I've seen contractors refuse a job because it won't work, so CTOs shop it around until they find a company willing to take it on "as designed".
There is a known fix for that: three independent implementations, by three engineering teams who do not talk to each other. All three certified, independently.
I believe that's standard practice in the aviation biz...in fact the idea covers more than just not talking to each other and goes as far as not having talked to each other before.
I worked with a project manager who had worked in that sector in a previous life. He talked about the problems he had building teams as you needed to make sure that the people in team 'A' had different backgrounds from team 'B' - if you had members from different teams who had worked together in a previous employment then they could have shared bad habits which they brought into the teams they're now going to be working in.
A bit of a headache, when you need to assemble not just teams 'A' and 'B', but team 'C' as well.
"Three times the development cost, but probably a million times less likely to experience triple failure."
A million times less likely to experience the EXACT same failure, in EXACTLY the same way, at EXACTLY the same time - but an order of magnitude more likely to experience different errors, fail to talk to each other, or disagree, and ultimately cause no end of unrelated, untraceable and unresolvable problems.
There is a known fix for that
What you go on to state is, I believe, how Airbus fly-by-wire systems were designed and developed for years. A few years ago I came across an article saying that moves were afoot to stop that and to shift/increase offshore work, and that it was going to the regulators for review. I've not seen anything else since (not that I've gone looking either), but the bottom line rules, so I'd not be surprised if there has been a fundamental design change.
Except that 3 independent systems is not a fix for this sort of issue. You can have as many independent implementations as you like and it will not help with a problem in the specification. Even with independent implementations, common problems and common problem areas and scenarios are observed at relatively high rates - far higher than would be expected if problems were equally likely to appear in different scenarios/areas.
I don't know if these implementations were independent or not, but it's pretty clear that the issue was with the specification of the threshold, not the implementation, so your solution - which is widely understood to be a mitigation rather than a solution - would not help at all.
...and another excellent argument for keeping those "stick monkeys" (going to remember that one) in the cockpit and not pushing fully automatic aircraft. Because the software looks to have quite a long way to go before I'd set foot in one of those.
Tesla take note.
That wasn't the problem though.
The computer got an input it didn't like. It threw a fault. That passed the input to the next computer, which also didn't like it, and threw a fault. And passed it to the third, and bingo! You've got the hat trick of fail.
The inputs were all correct. That's the big issue to me - legitimate, correct actions causing complete failure, because it's a bit of an edge case.
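The cascade described above can be sketched as follows (a hypothetical sketch, not the real architecture): three computers running the same acceptance check means one legitimate but out-of-envelope input takes all three offline in turn:

```python
# Hypothetical: three computers sharing the same acceptance check, so a
# legitimate but out-of-envelope input trips each of them in turn.

class FlightComputer:
    def __init__(self, name: str, limit: float):
        self.name = name
        self.limit = limit      # same envelope in every unit: common mode
        self.healthy = True

    def process(self, value: float) -> bool:
        """Accept the input, or throw a fault and drop offline."""
        if abs(value) > self.limit:
            self.healthy = False
            return False
        return True

computers = [FlightComputer(n, limit=4.0) for n in ("PRIM1", "PRIM2", "PRIM3")]

legitimate_input = 4.5  # a valid pilot action that the shared check dislikes
handled = False
for fc in computers:    # each fault hands the input down the chain...
    if fc.process(legitimate_input):
        handled = True
        break

# ...and bingo: the hat trick of fail.
assert not handled
assert not any(fc.healthy for fc in computers)
```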
Even with 'AI', or humans sometimes, the unexpected and unanticipated require an almost instinctive reaction that is not in the book or program. Humans can often correctly make these reactions while 'AI' often fails. For 'AI' the issue is what scenarios it has not been 'trained' or programmed to handle. The proper questions to ask are:
What scenarios is the 'AI' not 'trained' or programmed to handle? (I had an engineering professor note that often the best question to ask is the inversion of the sales spiel.)
What is the fail safe mode for the device?
Can a human take full control of the device?
In aviation, there have been many cases where the skill of the pilots in handling an unexpected and dangerous situation almost instinctively has saved many lives. Sometimes these events are well known ('Miracle on the Hudson'); others, like this one, are only found by digging through incident/accident reports. I am not sure an 'AI' system would have been able to successfully resolve these situations in most cases.
Totally true, humans can beat AI, although the counter is that humans can put themselves into deadly situations that the AI might successfully avoid. It's the anti-lock brakes conundrum: a skilled driver might (but probably can't really) brake harder with anti-lock turned off, but 99% of drivers aren't skilled to that level.
"a skilled driver might (but probably can't really) brake harder with anti-lock turned off"
I hate the ABS systems in cars. I can stop a car much quicker without the interference of the ABS system. If one wheel slips, the ABS system reduces the braking on the other wheel(s) to maintain steering control. But if the object is to stop, you can go into a 4-wheel skid and occasionally release the brakes to nudge the car in the proper direction.
My strategy with ABS is to brake just shy of the system engaging which allows faster braking on snow and ice.
Bottom line: don't override the human that knows what they are doing.
The safety gain of the ABS system is not to cut a few meters from the stopping distance but to maintain traction while braking and to avoid loss of control. It's a safety device to reduce accidents, not to improve the handling of the car.
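That design goal shows up directly in how a (much simplified) ABS cycle works: the controller's target is a slip ratio that preserves steering authority, not a minimal stopping distance. A toy sketch, with an illustrative slip threshold:

```python
OPTIMAL_SLIP = 0.15  # rough peak-friction slip ratio; illustrative value

def abs_modulate(wheel_speed: float, vehicle_speed: float, brake_cmd: float) -> float:
    """Toy ABS cycle: if the wheel slips too much, release brake pressure
    so the tyre keeps rotating and the driver keeps steering authority."""
    if vehicle_speed <= 0:
        return 0.0
    slip = (vehicle_speed - wheel_speed) / vehicle_speed
    if slip > OPTIMAL_SLIP:       # wheel about to lock
        return brake_cmd * 0.5    # back off pressure, regain traction
    return brake_cmd              # grip is fine, keep the driver's demand

# wheel much slower than the car: heavy slip, pressure is released
assert abs_modulate(wheel_speed=10.0, vehicle_speed=30.0, brake_cmd=1.0) == 0.5
# wheel tracking the car: no intervention
assert abs_modulate(wheel_speed=28.0, vehicle_speed=30.0, brake_cmd=1.0) == 1.0
```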
Your mind would do a better job for 99.999..% of the time but it's that other, unexpected moment that'll make you crash your car. The ABS is designed to handle that rare case.
But, being a dumb system, it has zero clue whether *this* is one of those times it is needed, or not. Or *now*, etc.
Anyone who can't slide a 4 wheel vehicle through a gap with the brakes locked solid or failed or on ice, shouldn't be on the road. The ABS can hammer all it likes, but we aren't stopping unless it lets some snow build up under the static wheels - which ABS won't allow.
Actual AI could stop the car faster, assuming it can switch the ABS, traction control, etc in real time. And that it hasn't crashed...
That sounds like a suicidal part of the driving test, if not for the individual student then for the driving inspector. Ah, 95% pass rate this week, shame the 5% smeared old John into the wall.
Why not accept that different drivers have different levels of ability and that automated driving aids can make the average driver safer. Maybe YOU are above average and don't need that help, but I can't help thinking of the survey that found, when they were asked to rank their driving skill, 93% of the Americans said they were better than average.
> 93% of the Americans said they were better than average
...and 7% were too modest to admit it.
Everybody (especially men) thinks he's a better driver than most, even some quite appalling drivers I know. Mostly pride and self-esteem, but then again they often see other people doing stupid things they wouldn't, and that reinforces their belief they are "above average": There is always somebody worse than you.
Yes, I think I'm an above average driver. Why do you ask?...
"I can't help thinking of the survey that found, when they were asked to rank their driving skill, 93% of the Americans said they were better than average."
There is no such survey. It's an old joke.
> There is no such survey. It's an old joke.
That might be, but it's still true. It's funny specifically because it's true*...
* "asked to rank their teeth brushing skills, 93% of [people] said they were better than average" is not funny.
I challenge you to try the Slippery Road courses here in Norway...
https://www.youtube.com/watch?v=QnzvsRZE8CQ
When your wheels lock they create too much friction, heat up, and melt the top layer of ice so that your wheels suddenly float on top of water cushions. No 'snow buildup' will stop that.
"Anyone who can't slide a 4 wheel vehicle through a gap with the brakes locked solid or failed or on ice, shouldn't be on the road."
This is a nonsense statement. I probably couldn't do this reliably, and I'm a decent driver. Evidence being a perfect driving record over more than 35 years of driving, averaging 50k km/year in all weather conditions. Part of the reason I've never had an incident of any description is probably because I don't claim (or think) I can do things like this.
I managed this once. On an icy day a car pulled across a dual carriageway from a side road, turning right, and stopped short in the outside (my) lane. Not enough room to stop, and a car parallel to me on the inside prevented a lane change. I managed to slide behind the inside car and through the gap while braking, missing both cars by about six inches.
Great - Hurrah I pass your criteria for driving on the roads.
The thing is that I know if I had to do it ten times there would be a fairly serious accident at least once.
The level of skill needed to do that sort of thing reliably and consistently is way above my skill level, and I suspect above that of 99.99% of the drivers on the road. And that wasn't the issue anyway: the issue was the driver who did something very stupid and created the problem in the first place.
"The thing is that I know if I had to do it ten times there would be a fairly serious accident at least once."
There is a portion of the population who feel no one should be allowed anywhere near a road before passing (at their expense) something like the Washington state road test on no notice and little sleep. And that if you can't cut it and your job depends on driving, then you starve (widows and orphans be damned).
Bottom line: don't override the human that knows what they are doing.
The problem with that is that the majority of humans think they know what they are doing...
The reality is that the vast majority of humans don't!
Remember, the universe is finite while human stupidity is infinite.
I call BALLS on that. I drive all year round in Norway, on any and all conditions.
Wintertime is the time of year that I really wish I had ABS in my car. Icy roads aren't uniformly icy or wet, so grip on one wheel can be extremely different from what a wheel on the other side of the car experiences. Without ABS you end up braking, correcting, braking, correcting, braking, swearing, correcting, swearing... with a bit more swearing mixed in if you need to stop quickly.
And no, you don't brake better if the wheels stop rotating. That just means they're now sliding on the surface and you've lost control of your vehicle.
On an icy surface this means your wheels are melting the top layer as it passes over, creating a very thin layer of water to ride on. That's nature taking its friction toy back and pointing its nose at you.
Yes, I know Petter Solberg does powerslides in his rallycar... and that the Finns and everyone else also does it. It's still loss of control. And quite a few of them go end over ass into the geography. It's a calculated risk for them. We try not to calculate that in on public roads. Those of us that don't intend to wreck our cars, that is...
I've been driving for 30 winters on icy roads now, and somehow I'm still alive...
Still want ABS, though.
Luckily, "AI" is not (yet) in control of these systems.
However, it is also worth mentioning that a significant number of "recoveries" are from upsets that would not have happened if automation had been used. This is another example of where it's "ok" for humans to make lots of mistakes, but where there is a lot less acceptance if computers also make them (at a much lower rate).
What is more worrying is where pilots have flown aircraft into the ground (or nearly done so) due to instrumentation / automation systems being ignored when they believe "there is a fault with the computer", or when they trust what the computer is saying when other (redundant) systems indicate otherwise. This is really down to a failure in training and not primarily an issue with "humans".
> it's "ok" for a humans to make lots of mistakes, but where there is a lot less acceptance if computers also make them
Indeed. To err is human, that's established, on the other hand computers are supposed not to make mistakes, that's their whole point. A machine which randomly does not accomplish the task it was built for is considered broken, be it a fridge or a flight computer.
Now a flight computer is indeed supposed to catch bad pilot decisions and generally take most of the load off them - But that task requires it to be irreproachable and faultless. Human pilots supposed to rely on it should never find themselves in a (usually emergency) situation where they have to wonder about the computer's decisions.
"In aviation, there have been many cases where the skill of the pilots in handling an unexpected and dangerous situation almost instinctively has saved many lives."
Unfortunately, there have been many more cases where the pilots were unable to handle even comparatively benign situations and cost many lives in the process.
The reality is that the meat bag at the helm is still responsible for some 80% of all air accidents. Eliminating the Human Factor from the cockpit would result in a huge increase in the safety of flying.
"Unfortunately, there have been many more cases where the pilots were unable to handle even comparatively benign situations and cost many lives in the process.
"The reality is that the meat bag at the helm is still responsible for some 80% of all air accidents. Eliminating the Human Factor from the cockpit would result in a huge increase in the safety of flying."
MANY more?!? Please provide the evidence for your opinion and a source for the 80% number.
Also, realize the conclusions of many accident investigations attributed the cause(s) to pilot error because management concluded the general public had to be made to believe the machine is infallible or they would be too fearful to ever get on one.
Paris Airshow some years ago. (80's or 90's I don't remember)
French test pilot showing off an early Airbus model.
A low, slow fly past turned into a 'perfect landing' in the forest as the computer decided that flaps down, gear down and approaching the ground meant 'land'.
Farnborough investigation concluded 'Froggy showing off' was the cause.
There are those who argue France wouldn't let Airbus be blamed.
Pilot error is the cause because the pilot is responsible. If the automated system goes wrong, the pilot is making an error in leaving it active, if the instruments fail, the pilot is making an error in trusting them, if air traffic control route them onto the wrong runway, the pilot is making an error in listening to them.
It wasn't in Paris, it was in Strasbourg.
And it was more than the plane thinking it was on the ground... the type was brand new, the pilots were fresh out of the type school and forgot to take into account how long it takes for the turbofans to go from idle to full throttle... while being too low...
The last plane that crashed at LBG (Paris Le Bourget) was the Soviet-era Concordsky.
As already noted - it was a while ago and recollections may vary [so AC in case I am totally misremembering this - and life is too short to go look it all up again].
I thought the pilot was demonstrating the alpha protection on a go around (e.g. performing a landing, but then deciding to go-around, so push throttles forward and pull back on the stick hard, leaving alpha protection to avoid stalling). Unfortunately it was left too late and the aircraft continued to sink and went into the woods.
I recall that there was some criticism that the system was slow to spool up the engines (i.e. the pilot pushed the throttle forward, but it was some seconds before the engines responded). I believe this was judged to be not a computer fault but a natural consequence of the physics of engines (they take time to spool up). The automation was getting the best out of the aircraft within the constraints imposed by the laws of physics (plus some safety margins) - sadly it was not enough for the situation. The (...) point is an area where Airbus and Boeing used to differ [not sure how they compare these days] - Airbus following a principle of the system providing firm protections, Boeing following a principle of "give the pilot what he is asking for" [CAVEAT: greatly over-simplified!].
For a normal Go-Around the aircraft would normally be reasonably lined up on the runway and there would not be high obstacles in the way [though some airports may vary].
"I thought the pilot was demonstrating the alpha protection on a go around (e.g. performing a landing, "
Nope. The primary cause was that the crowd was on the wrong runway (not the one briefed for), but the pilots decided to go ahead with the flyby visually anyway, having arrived and found everything assembled on the other strip.
The trees at the end of the runway in question form a nasty optical illusion and seem safe until it's too late to pull up. If they'd briefed for a flyby on that runway they'd probably have been ok, but the last-second change of plan gave them no chance to pick up on the issue - which was already in NOTAMs.
This one was definitely pilot error - they shouldn't have been over that strip in the first place
Known in aviation circles as the 'Habsheim incident'. The automation flipped into alpha-protection mode to prevent what it thought was an impending stall and wouldn't let the pilot power away. The pilot was found guilty of something the aircraft wouldn't let him do. The investigation was carried out by the BEA (the French equivalent of the British AAIB - Air Accidents Investigation Branch). Unfortunately the BEA are very protective of Airbus and have been caught out in the past blaming pilots when the aircraft was at fault. There is sufficient doubt in the report to entertain the possibility that the Habsheim incident was one of those times.
'The conclusions of many accident investigations attributed the cause(s) to pilot error because management concluded the general public had to be made to believe the machine is infallible or they would be too fearful to ever get on one.'
That would make sense if management had anything to do with crash investigation.
As the graphs at this site https://aviation-safety.net/statistics/ demonstrate, as aircraft have become more automated over time, accident rates have decreased.
Management doesn't get to decide what the cause of aviation accidents is and I have no idea why you think they might. What you say is completely untrue.
The FAA (which does get to decide what the cause was) states (here https://www.faa.gov/data_research/research/med_humanfacs/oamtechreports/2000s/media/200618.pdf) that 60-80% of commercial aviation accidents are the result of human error so there is some evidence. There is plenty more if you wish to look for it yourself.
" 60-80% of commercial aviation accidents are the result of human error "
Yup - and a lot of cases of "hero pilot saves plane" are the result of the same pilot making a boneheaded judgement earlier, pressing on regardless and putting the aircraft into a situation which required "rescue"
You don't fly on when running low on fuel, you don't ignore thunderstorm warnings, and you don't ignore microburst issues as three examples of pilots "pressing on regardless"
The problem with your logic is the assumption that a machine would on average make fewer mistakes than a human, which is simply not the case. Humans regularly take over where machines are known to fail. If errors occur at that point, that does not automatically speak against humans and for machines.
I don't think there is an inconsistency between machines making fewer mistakes, humans taking over when machines can't cope, and humans being _responsible_ for a subsequent mishap.
I agree that in a catastrophic case where a human takes over in extremis it would be unjust to put the final consequences on them. However, in flight operations the pilot (and processes that require pilot action) has an important place in maintaining flight safety, and pilots (through their training) are expected to deal with a number of situations. This is not without risk - as is recognized by the development of CRM as a means to improve safety.
The involvement of humans in the process is most obvious where some degree of "authority" is needed (e.g. changing flight level or course). Automation could be involved - but is typically restricted to an advisory role. The 2002 Uberlingen mid-air collision is an interesting example of conflicting rules regarding the application of TCAS advisories.
Tragedies such as Uberlingen are investigated carefully and a great deal is learned from them. Sadly these lessons can fade as the old engineers who lived through the event move on and young engineers are not familiar with them. It seems that the study of these incidents is mostly limited to specialist courses concerned with safety, rather than being a fundamental part of engineering [though there are some very good engineering courses that do cover the topic; Computer Science/Software courses seem to be lagging].
Automation has removed the issue of pilots making mistakes for the majority of an airliner's flight. They're not manually managing engines, hand-flying the aircraft, transferring fuel, etc., hence the decline in the accident rate over time as automation has increased. So yes, on average humans make more mistakes than airliner automation.
The issue now is that when the automation reaches a situation it can't handle it hands it over to an under-aroused pilot who has to rapidly get up to speed with what's going on. Effectively a situation has been created where humans are monitoring computers in case they go wrong, which is not something humans are good at. It is however much safer than letting humans fly themselves as illustrated by the ever improving accident rate here: https://aviation-safety.net/statistics/
The "meat-bags" who "cause" avoidable accidents are usually blamed because blaming all the other organisational factors (bosses who bully crews into working to the max for the least pay/rest) or systemic ones (broken equipment, faulty 737 MAXs...) is much, much harder to do, especially as the airlines and regulators are utterly married to their bottom lines over everything else.
Pilots are the LAST LINE OF DEFENCE in an extremely complex and hazardous transportation system that only pays for advancements in safety when a huge pile of bodies forces them to.
The "meat-bags" who "cause" avoidable accidents are usually blamed because blaming all the other organisational factors (bosses who bully crews into working to the max for the least pay/rest) or systemic ones (broken equipment, faulty 737 MAXs...) is much, much harder to do
This simply isn't true. And hasn't been (if it was ever anything more than a crude caricature) for decades.
The chain of errors that cause any accident are studied - and management or processes are often blamed where appropriate.
When BA had that windscreen come out in flight, partially sucking out the pilot, back in the 90s, it was maintenance management that got the blame. Even though it was a mechanic who decided to check the bolts were the right size by eye - rather than look up the manual. This being because some of the bolts already installed on the aircraft were a size too small - from someone else doing the same thing before. And the report blamed unrealistic schedules and disorganised spares - the mechanic had to drive between 3 or 4 different dimly lit parts stores at 3am to get the 20-odd bolts he needed for the windscreen change.
A steward ran into the cockpit and held the pilot's legs. He'd mashed his head on the side of the plane and so was unconscious - and the co-pilot managed to get the plane down, on oxygen and with no windscreen, quickly enough that he didn't die of hypothermia either.
in an extremely complex and hazardous transportation system that only pays for advancements in safety when a huge pile of bodies forces them to.
An air transport system that's so shoddily managed and ignores safety so much that in 2018 we had a whole year of commercial aviation without a single fatal crash involving a large passenger aircraft. An industry that is, I believe, safer per passenger mile than walking, cycling or train travel. You might want to have a look at your preconceptions a bit and compare them with actual reality.
One very important thing to note here is that AI must, at all times, make demonstrably rational decisions. If not, humans will have to make a decision as to whether what the AI system is doing is correct, or whether it is out of control.
ISTR there was a case many years ago when a robot on an assembly line made an unpredicted move which killed an operator. IIRC the newsworthiness of the report was bolstered by pointing out that Asimov's Laws had been violated by the incident.
This adds to the consensus that AI must tell us which branch of its flow chart it is traversing. Flow chart? AI? I think that regulatory systems need to be imposed to insist that AI systems are not allowed to "wing it". They have, upon demand (and at any subsequent enquiry), to tell us exactly what they are doing, so that a human can take control if necessary.
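A minimal version of "tell us which branch it is traversing" is just structured decision logging. The sketch below is hypothetical; the branch conditions, thresholds, and action names are invented for illustration:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("decision-trace")

def choose_action(airspeed_kt: float, altitude_ft: float) -> str:
    """Pick an action and record exactly which branch was taken, so a human
    (or a later enquiry) can see what the system was doing and why."""
    if airspeed_kt < 120:
        log.info("branch=low_airspeed airspeed=%.0f action=add_power", airspeed_kt)
        return "add_power"
    if altitude_ft < 500:
        log.info("branch=low_altitude altitude=%.0f action=go_around", altitude_ft)
        return "go_around"
    log.info("branch=nominal action=hold")
    return "hold"

assert choose_action(110, 2000) == "add_power"
assert choose_action(150, 300) == "go_around"
assert choose_action(150, 2000) == "hold"
```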
"Even with 'AI' or humans sometimes, the unexpected, unanticipated require an almost instinctive reaction"
Or one could put some people who actually know how to fly on the development teams.
SW Dev: "Are there any instances where control inputs might change coincident with a switch from air to ground modes?"
Pilot: "Um ... yeah."
This would be in the Systems Developer's domain - not the Software Developer's domain (though a knowledgeable Software Developer might well spot issues like this in the Software Requirements flowed down from the System Design).
Unsurprisingly a great deal of effort is spent looking for inconsistencies and "worrying" about transients at switch-over points. The problem with complex systems is unexpected emergent behaviours that are difficult to predict (but become all too obvious when they are triggered).
Before embarking on a cull of automation the improvements in the aircraft/air transport safety record over the decades should be studied. Today's aircraft (including the automation) have a far better record.
Automation is not perfect; neither are humans. Getting the right balance is a complex matter - especially when you are trying to eliminate extreme corner-cases in the Failure Modes Analysis (and adding to the system/processes may cause more problems than it solves).
Glad that everyone walked away from this one.
In aviation, there have been many cases where the skill of the pilots to handle an unexpected and dangerous situation almost instinctively has saved many lives.
Sadly, there are also many occurrences where the opposite is true.
We should keep in mind there's no absolute safe system.
do they need a tech to plug into the aircraft and clear the fault or is it a turn off/on affair?
seriously though, there should be some system that can have an FCPC run a check and see if it's safe to go back into the pool once, perhaps, a transitory issue has passed. If critical functions such as autobrake and spoilers are dependent on FCPCs being available then there should be something the pilots could do to have the systems try again.
maybe a big flashing green button could be hit to at least try?
Might be more trustworthy on an Airbus than a Boeing?
Once a flight computer goes offline, it stays offline until it is manually reset at aircraft shutdown. You cannot have a possibly rogue computer restart itself in a critical stage of flight.
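The latched-fault behaviour described above can be sketched as a small state machine. This is a toy illustration only: the class and method names are assumptions, not actual Airbus FCPC code.

```python
from enum import Enum

class FcpcState(Enum):
    ACTIVE = "active"
    FAULTED = "faulted"   # latched: stays offline for the rest of the flight

class FlightComputer:
    """Toy model of a latched-fault flight computer (illustrative only)."""

    def __init__(self):
        self.state = FcpcState.ACTIVE

    def monitor_trip(self):
        # Any monitoring trip latches the computer offline.
        self.state = FcpcState.FAULTED

    def try_rejoin(self):
        # A possibly rogue computer must never restart itself in flight.
        return self.state is not FcpcState.FAULTED

    def maintenance_reset(self, aircraft_shut_down):
        # Only a manual reset at aircraft shutdown clears the latch.
        if aircraft_shut_down and self.state is FcpcState.FAULTED:
            self.state = FcpcState.ACTIVE
```

The point of the latch is exactly the one made above: a computer that has tripped its own monitoring cannot be trusted to decide that it is healthy again mid-flight.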
When I was involved with the A320-200 some twenty years past, for interest I counted the number of computers on board the aircraft (where 'computer' was any system that took an input, did something with it and provided an output for another computing system) and came up with the astonishing figure of 143. Most of those were triple-redundancy systems. While reading through and absorbing the Airbus IATA manuals I identified a possible serious edge-case failure point in the software where in a given flight regime (otherwise known as an 'unexpected upset') the aircraft could fall back to Direct Law leaving the pilots with no way to control it. Unknown to me at that point was that Gulf Air Flight 072 that had gone down off Bahrain about two months previous had found that edge-case. I didn't feel good at all for a long time afterwards when I found out.
The real balls-of-steel award has to go to the Lufthansa A320 crew who, on running into a series of inexplicable systems failures that they could not clear, cold-booted the aircraft in flight. Yes, cold-booted it as in shut down every system on board, turned the battery off and started again.
WhereAmI? is no longer directly involved in aviation. High-speed motorcycle crashes and aviation medicals don't mix.
Prof Peter Ladkin (Bielefeld/Abnormal Distribution) has long followed (and published) on aircraft incidents involving computer systems and often links in to relevant material.
I wouldn't say I agree with everything he says - but he does dig into the matter quite deeply and one needs to don one's thinking cap.
Current blog appears to be here: <https://abnormaldistribution.org/index.php/category/aircraft-accidents/>
Prof Peter Ladkin (Bielefeld/Abnormal Distribution)
From the link supplied by @Stoneshop...
"... the ELAC rejected ADR 3 data because its AOA 3 input data (it was not frozen) differed from that being supplied to both ADR 1 and 2"
Two wrongs don't make a right, and the majority decision overruled the right
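The failure mode being described, two faulty but mutually agreeing channels outvoting the one healthy channel, can be sketched with a toy 2-out-of-3 voter. The names and tolerance value are assumptions for illustration, not the real ELAC logic:

```python
def vote_2oo3(a, b, c, tolerance=1.0):
    """2-out-of-3 voter: reject the channel that disagrees with the
    other two. Illustrative sketch only."""
    readings = {"ADR1": a, "ADR2": b, "ADR3": c}
    for name, value in readings.items():
        others = [v for n, v in readings.items() if n != name]
        if all(abs(value - o) > tolerance for o in others):
            # This channel is the odd one out -- voted off.
            return name, sum(others) / 2
    # All channels agree within tolerance: use the median.
    return None, sorted(readings.values())[1]

# Two frozen-but-agreeing AoA inputs outvote the one healthy sensor:
rejected, used = vote_2oo3(4.2, 4.2, 12.0)
# ADR 3 (the correct sensor) is rejected; the wrong value 4.2 is used.
```

The voter is doing exactly what it was designed to do; the design simply assumes that faults are independent, and a common-mode fault on two channels breaks that assumption.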
The real balls-of-steel award has to go to the Lufthansa A320 crew who, on running into a series of inexplicable systems failures that they could not clear, cold-booted the aircraft in flight. Yes, cold-booted it as in shut down every system on board, turned the battery off and started again.
I hope they wore their brown pants that day ...
If the pilots haven't been told this, they might be quite surprised!
A huge issue with a lot of these automated systems is that they seem to work but aren't really working, and then they turn the job over to the pilot/driver/random bus "operator", who takes the blame for the crash 0.08 seconds later!
The crushed blind judoka at the Paralympics a few weeks ago is a great example - both "bus operators" said they thought the man at the crossing would stop when he saw the AI bus wasn't stopping! Except he had right of way and was, you know, blind.
Airbus's modification, which is targeted to arrive by Q3 2022 for the A330-200 and A330-800, Q3 2023 for the A330-300, and mid 2024 for the A330-900
That's a few hundred planes flying until Q3 2022 with a known bug that can shut down all flight computers during landing, a few hundred more until Q3 2023, and hundreds on order that won't be fixed until mid-2024. Cool, cool.
Have you any idea how long it takes to make a change to aircraft software? Airbus I know, Boeing I don't. It goes like this:
Company 1 writes the specification for Company 2
Company 2 writes the specification for Company 3
Company 3 writes the specification for Company 1
Company 1 writes the software to the specification provided by Company 3
Company 2 writes the software to the specification provided by Company 1
Company 3 writes the software to the specification provided by Company 2
Company 1 tests the software written by Company 2
Company 2 tests the software written by Company 3
Company 3 tests the software written by Company 1
No overlap at all between the three software companies. The fix then has to be approved by the relevant aviation authorities before being rolled out to the aircraft. Meanwhile, unlike the Boeing MCAS or rudder hard-over problems, this is a known edge-case that is quite easily avoided.
In any software development:
The first thing you should do before writing the specification is to review what is required and how any changes will affect the rest of the system.
The first thing you do before writing a single line of code is to review the specification.
The first thing you do (or should do) before testing is to review the test specification.
Given the checks and balances inherent in this system, you'd need three incompetent companies for anything major to get by. One of the final tests by the regulatory bodies is to blind-test to ensure that all three offerings do the same thing in the same situations. In this case there was an edge-case failure point which no-one had foreseen.
The only reason I know about the triple system is because I had a very interesting conversation with a gentleman who, at the time, worked for BAe and was involved in Airbus software development.
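The back-to-back blind testing described above can be sketched as follows: three independently written implementations of the same requirement are run over shared test vectors, and any disagreement is flagged for review. The clamping requirement and all names here are invented purely for illustration:

```python
# Three independently written implementations of the same (hypothetical)
# requirement: clamp a rudder order to +/- limit degrees.
def impl_company1(order, limit):
    return max(-limit, min(limit, order))

def impl_company2(order, limit):
    if order > limit:
        return limit
    if order < -limit:
        return -limit
    return order

def impl_company3(order, limit):
    return order if abs(order) <= limit else limit * (1 if order > 0 else -1)

def back_to_back_test(vectors, limit=25.0):
    """Blind comparison: every implementation must give the same answer
    on every test vector, or the discrepancy is flagged for review."""
    discrepancies = []
    for order in vectors:
        results = {f(order, limit) for f in
                   (impl_company1, impl_company2, impl_company3)}
        if len(results) != 1:
            discrepancies.append((order, results))
    return discrepancies
```

The dissimilar-implementation approach catches independent mistakes well; what it cannot catch is an edge case that no-one wrote into the specification in the first place, which is the kind of failure seen here.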
I think I need to apologise for answering so many posts...
@WhereAmI
......you forgot about the widespread use of "agile".......
Agilists will tell you (over and over) how "requirements" or "specifications" are just so 20th Century.
No, no, no......what we need is a wall full of yellow stickies with "user stories" summarised on each sticky. Then we just choose the juiciest "user story" to start with......
Much better software quality.......and absolutely no end in sight for repeated scrums......much better prospects for long programming career too.
What could possibly go wrong?????
Easily avoided by whom? The AIs, which didn't avoid or correct it, and instead crashed? No, by the pilots, who up until the systems all fail won't know that the automation is about to do nothing, leaving them to "stand on the anchors" as reverse thrust fails... with, according to this report, under 3 seconds' notice, and coming to a stop a scant 10m before running out of runway. At landing speeds that margin is a fraction of a second; a heartbeat slower on the brakes and they'd have been in a ditch!
As with all software, bugs are categorised. If a bug (a) happens only occasionally and (b) can be worked around then pretty much anywhere will give it a low priority and ship the fix as part of the normal update cycle.
They have found the bug, pilots have been made aware and can deal with it if it occurs. It’s not something that’s going to kill people if it does happen again (*cough* Boeing 737Max *cough*). As such it’s still safe to fly pending a properly tested fix.
"It was raining at the airport as the aircraft approached so the runway was wet (although still well within margins). The captain disengaged the autopilot at approximately 773 feet and continued the approach. The A330 touched down between 1,500 and 2,000 feet from the runway threshold"
I know very little about flight procedures, but shouldn't one land closer to the runway threshold when conditions on the runway are suboptimal?
It would seem they could have 1500 feet more to spare. Not trying to be an armchair pilot here, but is there someone more knowledgeable who can comment?
Every aircraft has a landing distance calculated on weight at touchdown. It touched down well within the _normal_ stopping distance for the length of that runway. There's also the possibility of wake turbulence from a previous landing which would make you want to land deeper to avoid it. I once got caught in the wake turbulence of a Boeing 737-200 while landing at Cardiff, Wales; it was not an experience I would ever want to repeat.
Ditto when on an Air Florida (that dates it) flight from Fort Myers to Miami. A Pan Am 747 had landed in front of us (I was the only passenger, as it was Super Bowl Sunday and the Dolphins were playing).
The pilot said calmly,
"better hang on. This will get a bit bouncy"
He was so right. Thankfully, the pilot landed the plane very near the threshold and hit the brakes as per the instructions from ATC.
The irony was that the aforementioned PanAm 747 was the one that I was taking on my flight to London.
Never underestimate the turbulence that large objects can cause.
Boeing's problems began when they started positioning Automobile Industry types instead of experienced Aviation Industry executives in the top spots. Plus, they need to remove all but one token bean counter from their board so they stop focusing on the wrong aspects of their problems.
Boeing's problems began when they bought McDonnell Douglas and as part of the deal agreed to appoint McD-D board members to run the combined company. Prior to that, Boeing had been engineering-focussed from the top down. McD-D were profit-focussed and the smelly stuff started sticking badly when they moved head office from Seattle to Chicago so the C-Suite were no longer in touch with the daily problems.
Actually, Boeing bought only Douglas Aircraft, the large-body aircraft manufacturing portion of the company. McDonnell continued separately.
And it wasn't just Board members. Harry Stonecipher was moved from St. Louis to replace Phil Condit as CEO. A few years later, Boeing moved its corporate headquarters from Seattle to Chicago. I have to ask, which is closer to St. Louis, Seattle or Chicago?
Disclosure: I was a systems engineer at Boeing's Wichita KS facility at the time it all went down.
Nope. Boeing's problems started the day the merger with McDonnell Douglas was announced. Within five years all the competent senior engineering management was gone. Within ten, all the competent engineering expertise had walked out the door.
McDonnell Douglas had a very long history of sleaze before the merger. Which has now infected Boeing from top to bottom. Now little more than a brand name. Like AT&T and IBM.
The actual issue here seems to be that a COM/MON issue that caused the main computer to fail was passed on to the first backup computer in the knowledge (well if the designers had thought about it, it would have been in the knowledge) that it would also cause the first backup computer to fail, and would then be passed on to the second backup and cause it to fail too.
The conflict of rudder operation between airborne and landing rules in the circumstances of a wet runway, and presumably some crosswind or a less than perfect landing, should have been foreseen and covered. Certainly shutting down a flight control computer due to incompatible input and sending that input directly to another computer which the same input would also shut down is poor design philosophy. The 'fix' should cover all situations like this, not just this specific one.
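A minimal sketch of that common-mode cascade, assuming (purely for illustration) a simple threshold comparison between the COM and MON channel orders:

```python
def com_mon_trip(com_order, mon_order, threshold=5.0):
    """COM/MON cross-check: trip if the command and monitor channels
    disagree by more than the threshold. Values are illustrative."""
    return abs(com_order - mon_order) > threshold

def failover_chain(computers, com_order, mon_order):
    """Hand the same inputs to each redundant computer in turn.

    A common-mode condition (here, channel asynchronism plus pedal
    input pushing the COM/MON difference past the threshold) trips
    every computer identically: redundancy does not help, because
    all three run the same monitoring logic on the same inputs.
    """
    for name in computers:
        if not com_mon_trip(com_order, mon_order):
            return name          # this computer keeps control
    return None                  # triple failure: no computer left

# The same out-of-tolerance inputs fail FCPC1, FCPC2 and FCPC3 alike:
survivor = failover_chain(["FCPC1", "FCPC2", "FCPC3"], 10.0, 2.0)
# survivor is None -> loss of autobrake and thrust reversers
```

Redundancy protects against independent hardware faults; it offers no protection when the triggering condition is the input itself, evaluated by identical monitoring logic in every lane.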
No, ironically it's a safety interlock that is designed to prevent thrust-reverse deploying in flight.
After this incident, there is a fair chance the reverse thrust will be available for deployment when in alternative "flight" mode with weight on the main landing gear.
There will always be people who are flying airplanes, even if there aren't any pilots on board to fly the plane... because the automation that would fly the plane is still written by people.
Personally I don't think that we will see big airplanes without pilots on board; there have been too many times where automation fails. Yes, pilots aren't perfect, they are still human. Same thing with programmers; it is the combination that works best. Is it perfect? No, no solution is.