Good to see the wet-ware doing a splendid job here.
Now what about self-flying aircraft under similar scenarios (assuming it is not the aircraft itself that was hacked)...
Airline pilots faced with hacked or spoofed safety systems tend to ignore them – but could cost their airlines big sums of money, an infosec study has found. An Oxford University research team put 30 Airbus A320-rated pilots in front of a desktop flight simulator before manipulating three safety systems: the Instrument Landing …
The synergy between the two will become part of the evolutionary process that leads to humans and AI becoming intertwined. Insert whatever dystopian scifi future you find fitting here.
Meanwhile, social media and addictive consumerism are programming the next generation of humanity, with the expectation that the data analytics used in that will eventually be guided by AI (assuming that isn't already secretly the case). And these marketing and psyops strategies are now increasingly employed by political parties. We are so doomed.
Although if human intelligence was enough we wouldn't need GPWS, TCAS and autoland in the first place
You've never tried an instrument landing, have you? Many airports are so crowded it simply isn't POSSIBLE to safely take off or land just by looking out the window. You HAVE to follow external instructions.
Yes, in a visual approach you may need to follow external instructions, such as those from the air traffic controller, in order to fly a route that will not conflict with other traffic, but you will not need any navigation instruments or aids such as ILS.
It is also possible to make a safe approach and landing without ATC by simply having all the pilots in the vicinity giving their positions and intentions to each other via radio. Which is what happens in the GA sector at uncontrolled airfields (some of which get pretty busy).
"Many airports are so crowded it simply isn't POSSIBLE to safely take-off or land just by looking out the window. You HAVE to follow external instructions.
Those airports are that busy due to those systems or they wouldn't be able to handle the load.
Not necessarily. These systems were developed to address specific conditions that are not always apparent.
A pilot flying a jumbo can't see directly below or above the aircraft. Also, two jets approaching each other have a very high closing speed, and thus not much time to react. Hence TCAS.
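For a sense of scale, a back-of-the-envelope sketch of that closing speed (the speeds and separation below are illustrative, not figures from the article):

```python
# Back-of-the-envelope: how quickly separation vanishes between two jets
# closing head-on. Speeds are illustrative cruise values, not from the study.

KT_TO_NM_PER_S = 1.0 / 3600.0  # 1 knot = 1 nautical mile per hour

def seconds_to_close(separation_nm: float, speed_a_kt: float, speed_b_kt: float) -> float:
    """Time until two head-on aircraft meet, ignoring any avoidance manoeuvre."""
    closing_speed_kt = speed_a_kt + speed_b_kt
    return separation_nm / (closing_speed_kt * KT_TO_NM_PER_S)

# Two jets at ~450 kt each, 10 nm apart: ~900 kt closure, roughly 40 seconds to impact.
print(f"{seconds_to_close(10, 450, 450):.0f} s")  # -> 40 s
```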
In poor visibility conditions, and especially if combined with a secondary instrument failure, a pilot might be a lot closer to the ground than s/he thinks. Hence GPWS.
Autoland was not created because pilots do not know how to land planes.
"Although if human intelligence was enough we wouldn't need GPWS, TCAS and autoland in the first place"
Having those systems means that aircraft can take off, fly and land in far more weather conditions. If it all went back to VFR with limited radar vectoring from ATC, tickets would cost thousands in cattle-class and you might be waiting weeks for your flight to come or go from the "sunny" UK.
Scientists absolutely can and have explained how people make choices. The trouble is, people look at the explanation and say "that can't be right, what about my free will?" And so they reject the explanation on, essentially, religious grounds.
What makes this particularly frustrating is that many, if not most, of those people would stridently deny holding any religious convictions at all.
(Other linguistic formulas may be used. Sometimes the word is "agency", or "preference", or whatever. Sometimes it involves redefining "choice" until it becomes something that cannot be made by a machine. But always, the core objection is the same: "I don't want this to be true, so I will move goalposts as necessary to maintain my denial.")
Yeah...but...
A human wastes a huge amount of time researching and fixing minor problems. Machines will just invent time travel and send a machine back to a hacker's childhood to kill the hacker before he can cause the glitch in the first place.
This method not only saves time, but is also a permanent preventative measure.
This post has been deleted by its author
In my understanding, the MAX planes were automatically changing trim settings with the faulty readings.
There were several news articles on this topic about how common it was. Turns out that the experienced pilots just shut the thing off (pulled the breakers) and then turned in a problem report when they landed. I believe the more complex a system is, the more training is required, including having an inexperienced pilot fly with one who is well experienced.
"Turns out that the experienced pilots just shut the thing off (pulled the breakers) and then turned in problem report when they landed."
The problem, I thought, was that because Boeing were trying to do things on the cheap, they pretended that the MAX was a standard 737, so pilots didn't need a full course on it, including, quite probably, where said breakers were.
"There were several news articles on this topic about how common was it. Turns out that the experienced pilots just shut the thing off (pulled the breakers) and then turned in problem report when they landed."
The issue in certainly one of the crashes, I think the second, was that the pilots did turn off MCAS, but the horizontal stabiliser was so far out that it couldn't be moved back with trim due to the aerodynamic forces flowing over it. They turned MCAS back on in the hope it could move the stabiliser back properly, but that obviously didn't work either. One of the fixes for MCAS is to limit how far it can move the stabiliser so the pilots can manually overrule it.
https://en.wikipedia.org/wiki/Maneuvering_Characteristics_Augmentation_System
Don't blame the pilots, regardless of how experienced they were.
First, this was only a simulation. Significantly less pressure than the real thing.
Second, the pilots were warned up-front that their systems would give bogus readings.
Third, the whole simulation was artificially constrained to "clear weather", so visual cues were still useful.
It's good that the pilots were able to cope safely, if imperfectly, with all these issues. But "Die Hard 2" it wasn't.
Hopefully the wet ware was unaware of the experimental objectives. If I know I'm going to face controls or nav issues, perhaps I'm more alert to the possibilities of borkage.
I was once party to a study where we had a number of operators doing adult-level work on a network-intensive system. Unbeknownst to them, they were participating in a study of human effects from a vicious simulated cyberattack. We expected to see mental stress, confusion, and human errors. Other than the obvious connectivity and functionality deficits, the operators just shrugged and carried on. In the debrief we had to TELL them there had been an attack. Response? "Really? No s__t! We just thought it was another fscking Windows problem..."
Moral: when your OSes and networks suck so badly that 'normal ops' looks like a cyberattack, your operators will be well trained.
Hopefully the pilots overhead are not trained in this manner.
The Wet-ware has the benefit of several million years of trial and error testing.
Those units that fail under potentially dangerous and/or stressful situations are often discarded from the gene pool so the more successful types can lead to more advanced models. Machines, on the other hand, have only a few decades of development, done by humans who as yet don't fully understand how their own onboard systems work.
Plainly there is some catching up to do.
Anyone who uses software is familiar with the need to get around the inevitable borkages, generally with some form of turn it off and on again (although hitting the device cannot be ruled out, though these days it is less effective but still viscerally satisfying).
I know when to ignore my car's warning beeps and I assume pilots are the same.
On a side note, Neil Armstrong had to ignore Apollo 11's warning to abort the landing. "That's no small step for man, one giant cockup for software kind."
This post has been deleted by its author
This post has been deleted by its author
Wikipedia, admittedly:
"Shortly after Apollo 11, Armstrong announced that he did not plan to fly in space again.[161] He was appointed Deputy Associate Administrator for Aeronautics for the Office of Advanced Research and Technology at ARPA, served in the position for a year, then resigned from it and NASA in 1971".
You are in any case confusing the Moon landing with Gemini 8, which wasn't Armstrong's fault in any way and, obviously, did not affect his future career.
If you read Armstrong's career to that date you realise that he had survived many serious incidents. There was never going to be another chance to be the first commander of a manned Moon landing, or the first person to walk on the Moon. Mars would not happen in his lifetime, certainly not in his piloting lifetime. He left at the absolute pinnacle of his career because there were no more mountains to climb and it would be stupid to get killed climbing a foothill.
Your post is, in fact, not even wrong. It's derogatory to Armstrong, and petty.
I've watched many a documentary about Armstrong's ability to fly the lander; he was the only one that flew one on Earth without getting killed (he was quick to eject). THAT is why he was selected instead of Buzz. After the first landing, NASA would not allow Armstrong to continue in the space program, because they didn't want America's hero getting killed later on. The US did things like that in WW2, as it was considered a national interest to save America's heroes. The airmen that came back in the Memphis Belle B-17 bomber were not allowed to fly combat missions anymore either. It was all a morale issue at the very least.
Air France 447 crashed after the onboard computers gave up because of inconsistent data provided by frozen pitot tubes. The pilots became confused by the information given; one was pulling up at the controls while the other was pushing down. The plane crashed into the sea.
It's all well and good testing this in a simulator, but I'd wager that a stressed-out pilot given conflicting information won't always cope this well.
Not quite again. The junior pilot completely failed to follow established protocol, training, cockpit procedures and common sense by continuously pulling back on his stick for the entire period from just after the first alarm until the crash without informing the other pilot.
That such levels of stupidity and incompetence can be displayed by a supposedly trained pilot is difficult to comprehend.
"That such levels of stupidity and incompetence can be displayed by a supposedly trained pilot is difficult to comprehend"
I suspect if you were on a night flight, tired, and then faced with the prospect of a terminal dive you might find it harder than you expect to fully recall your training.
That is not to absolve him of the failing, just to point out that trained wet-ware is not infallible either.
> I suspect if you were on a night flight, tired, and then faced with the prospect of a terminal dive you might find it harder than you expect to fully recall your training.
Firstly the pilots should not have been tired. The rules around rest periods are clearly defined and very strict. (I'm not suggesting they were tired, but the suggestion they might have been is a red herring.)
Secondly, there was no dive. They flat-stalled into the sea at a high rate of descent. Ironically, if there had been a dive with the nose obviously down and them falling forward out of their seats, they would probably have recovered the plane.
Thirdly, they are absolutely required to recall their training. No excuses. Those that can't, get extra training or the sack. They don't get to fly.
This doesn't mean that pilots have to memorise every possible emergency procedure - that's what emergency checklists are for. But they do have to decide who is going to fly the plane (as best as possible) and who is going to work-through the checklist. On this occasion they didn't and the result was a crash with no survivors.
IIRC the cockpit transcripts showed both pilots were at fault.
The transcript recorded the senior (2nd) pilot settling in his seat but did not record him saying "I have control" or the junior (3rd) pilot saying "You have control" in response.
This is the normal control handover dialogue which I'd expect any time control passes from the flying pilot to the non-flying pilot. Since there is no physical connection between the two sidestick controllers and no simulated control load feedback in that Airbus model, neither pilot could feel what the other was doing.
"Since there is no physical connection between the two sidestick controllers and no simulated control load feedback in that Airbus model, neither pilot could feel what the other was doing."
This is the thing that has always bothered me. It's possibly the only thing that I believe Boeing does better than Airbus. I have never understood why Airbus thought it was ok not to provide tactile feedback to the control surface items in the cockpit—meaning the rudder pedals and the sidestick.
Boeing decided to retain the traditional yoke, even when it too moved to fly-by-wire controls, and the yoke is not only big and very visible, it does allow the pilots to sense what the control surfaces are encountering and, critically, whether other control inputs are being made too.
If Air France 447 had been operated by a Boeing that sad night, everyone in the cockpit would have seen instantly that the pilot pulling back on the yoke was pulling back on the yoke—the damn thing would have been pressing into his belly. The sidestick, on the other hand, was virtually invisible and betrayed nothing of what the pilot was trying to do.
Possibly even worse, the other pilot's sidestick relayed nothing of what was being input on the other side of the cockpit: there was nothing to clue in the other pilot that his oppo was pulling back. In a Boeing, the other pilot would have certainly noticed the wayward control input—he'd have felt it.
Furthermore: if American Airlines 587 had been operated by a Boeing on 12 Nov 2001, the co-pilot, who had been incorrectly trained to use the rudder enthusiastically to compensate for turbulence (caused by preceding large jets, sometimes experienced during climb-out), would have felt the enormous resistance of the rudder as it was fully deflected in a high-speed airstream. Likely he would have thought twice about doing this a second time, instead of continuing the action until the vertical stabiliser snapped clean off the plane; it was overstressed way beyond design limits by the pilot's excessive control inputs. (The plane crashed in Queens.)
Don't get me wrong. Until the 737MAX fiasco I would have said that Boeing and Airbus both built superb planes to very high standards, and I've never lost sleep over flying in one or the other manufacturer's equipment. But I do believe that the Airbus decision to move to sidesticks and to follow that with abandonment of feedback was bordering on the cretinous.
I cannot believe that any actual pilot supported that decision, because after years of training, starting in single-engine prop planes and working up through multi-engine turbines, plain old seat-of-the-pants airmanship would have ingrained the sense of "wearing" an aircraft, something that's only possible by actually feeling what it's experiencing and doing.
The Air France crew have been criticised for lack of airmanship, but spare them a sympathetic thought: airmanship is very hard when you can't feel the plane.
Some other complicating factors include:
Airbus flight stick gets no force feedback whatsoever.
"Dual Input" indication due to opposing Pilot/Co-pilot control inputs was not noticed (which reinforces the theory that AoA disagree indication would've done very little to prevent the MAX crashes)
Aircraft fell back into "alternate law" due to bad airspeed sensing (possibly blocked pitot due to icing), and therefore lost AoA protections.
Physical feedback is not something that pilots rely on. Not in an Airbus or Boeing or Embraer or Cessna.
Even in a small aircraft, I learned right away to not confuse control input with results. If you're flying with the idea that "I'll just push and pull and my job is done" then you're dead.
The same principle works in a car, by the way. Do you maintain speed by monitoring the position of the control pedal? Do you stay in your lane by watching the steering wheel? If someone told you they crashed the car because their accelerator pedal lacked proper feedback about road conditions, would you blame the car manufacturer?
So how do pilots know when they are in control of the aircraft?
They communicate. "I have control."
They are aware of the other pilot: are they responding properly to communication? "You have control."
They stay aware of the situation: attitude, altitude, speed, direction, engine power, nearby solid objects.
They use the controls and stay aware of how their inputs affect reality. I just pushed down a little: did the pitch change? Why not? Scan instruments again and outside. Push the priority takeover button and try again. Consider possible trim problems or even partial control failure.
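A toy sketch of that "command, then check reality" habit (everything here, from the class name to the tolerance, is made up for illustration; it is not how any real flight control or monitoring system is written):

```python
# Toy illustration of "command, then verify the result" rather than trusting
# the control input alone. The "aircraft" below is just a number holding a
# pitch attitude; names and thresholds are invented for illustration.

class ToyAircraft:
    def __init__(self) -> None:
        self.pitch_deg = 0.0
        self.controls_effective = True  # flip to False to simulate a failure

    def apply_pitch_input(self, amount_deg: float) -> None:
        if self.controls_effective:
            self.pitch_deg += amount_deg

def input_had_effect(aircraft: ToyAircraft, commanded_deg: float,
                     tolerance_deg: float = 0.1) -> bool:
    """Command a pitch change, then check that the instruments actually moved."""
    before = aircraft.pitch_deg
    aircraft.apply_pitch_input(commanded_deg)
    after = aircraft.pitch_deg
    return abs((after - before) - commanded_deg) <= tolerance_deg

plane = ToyAircraft()
print(input_had_effect(plane, -1.0))   # True: the input did what was asked
plane.controls_effective = False
print(input_had_effect(plane, -1.0))   # False: scan again and ask why
```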
"Airbus flight stick gets no force feedback whatsoever."
Why would a device with no physical connection give force feedback? There is no wire in fly-by-wire, as funny as that sounds, and that goes for Boeing too, unless they have added artificial feedback, which I doubt.
My joystick or my mouse have no force feedback either.
The guys with really nice force feedback were the Wright brothers and, of course, Boeing with aircraft based on designs from the '50s, but not the Dreamliner or any fighter since WW2.
All current airliners have two sets of controls that are active at the same time. The pilots are supposed to confer and decide who is "pilot flying" (on the controls) and who is "pilot monitoring" (checking the instruments, navigation, radio, etc).
Boeing and Airbus have different layouts:
On Boeing each pilot has a huge control column in front of them, on Airbus each pilot has a "sidestick" mounted at the side of the cockpit.
Both have foot pedals under the dashboard for rudder controls.
They are both fly-by-wire, and the column/stick input is based on the force the pilot is exerting, not the distance moved (unlike a PC joystick).
They have force-feedback systems so the pilots can "feel" how the aircraft is responding, though presumably the feel of each is somewhat different.
Some have criticised the Airbus design because each pilot can't necessarily see what the other is doing with their stick, making it harder to notice a conflict and remind each other who is supposed to be flying the plane.
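For what it's worth, the dual-input case on the A320 family is usually described as the two sticks being algebraically summed (and clamped), with a "DUAL INPUT" warning and a priority pushbutton to lock the other stick out. A much-simplified sketch; the function name, sign convention and threshold are mine, not Airbus values:

```python
# Much-simplified sketch of the dual-sidestick behaviour commonly described
# for the A320 family: simultaneous deflections are algebraically summed and
# clamped, a "DUAL INPUT" warning is raised, and a priority pushbutton can
# lock out the other stick. Sign convention here: +1 = full nose-up.

def resolve_sidestick(capt, fo, capt_priority=False, fo_priority=False,
                      deflection_threshold=0.05):
    """Inputs normalised to [-1, 1]. Returns (effective command, dual_input_warning)."""
    if capt_priority and not fo_priority:
        return capt, False
    if fo_priority and not capt_priority:
        return fo, False
    combined = max(-1.0, min(1.0, capt + fo))
    dual_input = abs(capt) > deflection_threshold and abs(fo) > deflection_threshold
    return combined, dual_input

# One pilot pushing forward while the other holds full back stick:
# the aircraft sees a net nose-up command, plus a warning neither may notice.
print(resolve_sidestick(capt=-0.6, fo=1.0))  # (0.4, True)
```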
A go-around is expensive, with airlines racking up bills for extra landings, fuel and delay penalties.
I may be a bit out of date here, but any airport I ever worked at only charged the aircraft for one landing if they had a go-round. (Circuit training may have different rules.)
Pro aviation tip: Remember that the number of take-offs should always equal the number of landings.
"A go-around is expensive, with airlines racking up bills for extra landings, fuel and delay penalties."
It's a terrible idea to pressure pilots not to do a go-around if they feel it is justified, simply because it's expensive. However expensive it is, the airline should suck up the cost, and if necessary fix their systems / procedures / equipment accordingly.
Since commercial air traffic became a thing in the late 1940s and 50s, the industry has had to deal with countless teething issues. Early planes didn't have redundant control systems, or sensors to measure almost every conceivable parameter, whether for consumption by the pilot or for the FDR. Pilots also didn't have to file a flight plan, or stick to a specific route. That led to an infamous 1950s mid-air collision above the Grand Canyon, when a pilot decided to give his passengers a good look at this natural marvel.
Over the past decades, air travel has become immensely safer. Every incident and every crash led to improvements and more knowledge that improved airplanes, radar systems and so much more. Some lessons were hard-learned, such as TWA 800 and the countless crashes due to sudden downdrafts.
What hasn't changed much, however, is the human inside the cockpit. One difference is that today's generation of pilots is unlikely to have served in an air force, unlike earlier generations who got much of their experience flying above the battlefields of WWII, Korea and Vietnam. Instead it's mostly about the training these new pilots have received, which unfortunately doesn't always suffice, as was learned with AF447: the PF (pilot flying) got confused in the dark over the Atlantic Ocean when the flight computer handed control back to him after frozen pitot tubes returned conflicting readings. He yanked on the controls a few times, put the airplane into a left-banking, oscillating turn, nearly stalled it a few times, got confused by the stall warnings, and finally put the airplane into a proper stall, dropping it out of the sky into the ocean.
Such cases of pilots managing to wreck perfectly fine airplanes for no good reason are sadly becoming a large part of today's crashes. In large part this seems to be due to either the pilot becoming confused and losing his sense of orientation, not trusting the instruments, or becoming overly focused on a single, often irrelevant, detail while ignoring the issues that will kill them in a few moments. Like the captain who insisted on debugging the lights for the landing gear while circling around the airport, until his plane ran out of fuel.
Systems like TCAS, ground and stall warning and ILS are there primarily to assist the pilot, but they're there as suggestions, not as commandments, and it has been decided that the pilot ultimately remains in control. As modern day crashes and incidents show, this is both a positive and a negative thing. Unfortunately, both human and machine are still flawed in the end.
There's been a lot of research and studies by NASA and others on cockpit behaviour, which has led to improved use of checklists and much more. Everything from the social interactions between the captain and co-pilot to adherence to protocol and the handling of unexpected events can become a link in the chain that leads to an accident.
Here I find a phrase that's often uttered in professional pilot circles quite useful when thinking about the right thing to do in a situation as a pilot: 'How would this look in the NTSB report?'
@Elledan- awesome post. Needs an upvote and a pint for thoroughness.
Human factors are hard. That's why in my previous life we said there were three ways to do things: the right way, the wrong way, and the Marine way.
The Marine way involves drilling your man senseless, then let him recover enough to hyper focus on task. You break complex tasks into smaller and smaller and smaller tasks. When the task is small enough, you assign a man to do it, trained and drilled until it can be done to perfection. That's why you go on a warship and you will see men and women physically watching gauges. SCADA? DCS? We've heard of it. Industrial automation? Meh. Marine attitude is that if I want a valve opened I want a human to experience the flow. Does the pipe vibrate? Smell different? Sound different? I can automate that, but how do you cover all the eventualities?
My problem is that I haven't the faintest idea how to strike a good compromise between cost-efficient, minimal staffing (airlines) and my urge to throw a bunch of people at a problem.
This post has been deleted by its author
Came for the DH2 reference, was not disappointed
Also, in the future airplanes will only need 1 pilot and a dog. The dog is there to make sure the pilot doesn't touch the controls, and the pilot is there to feed the dog.
"a combination of over-speed alarm combined with a stall warning."
Confusing indeed! That's beyond my training, but I believe you normally level the wings and revert to flying "pitch and power." Whatever it's currently doing, an aerodynamically stable craft (like a passenger plane) will settle into level flight if you give it moderate engine power and trim it to fly slightly nose-up. Pilots are drilled to know the precise settings for their aircraft.
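As a purely illustrative sketch of the "memory items" idea (the phases and numbers below are invented for a hypothetical type, not real figures for any aircraft):

```python
# Illustrative only: pilots memorise approximate pitch/power pairs for their
# type so they can fly "pitch and power" when the airspeed indication is
# suspect. The values below are invented for a hypothetical aircraft.

PITCH_POWER_MEMORY_ITEMS = {
    # phase: (pitch attitude in degrees nose-up, thrust setting in % N1)
    "climb":   (10.0, 90.0),
    "cruise":  (2.5, 80.0),
    "descent": (0.0, 55.0),
}

def unreliable_airspeed_targets(phase):
    """Return the memorised pitch/power pair for the current phase of flight."""
    return PITCH_POWER_MEMORY_ITEMS[phase]

pitch_deg, n1_pct = unreliable_airspeed_targets("cruise")
print(f"Fly roughly {pitch_deg:.1f} deg nose-up at {n1_pct:.0f}% N1")
```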
The first paragraph mentions that the pilots' actions could cost airlines lots of money, but then doesn't fully explain how any of the actions taken would result in that. I suspect the go-arounds are what they're referring to, which implies the author of the article / study thinks it's safer to continue with an approach where an alarm is triggered rather than going around to take time to check systems, etc. It's that cost-first attitude that results in things like the MAX, and I would not want to see any pilot penalised for taking a safety-first approach.
Hi - one of the authors here! Just want to clarify something on this as I think this is quite an important point.
The point about cost is a little bit more nuanced than that. We don't suggest that the cost should come into the pilot's decision at all. They should do what is safest in their judgement, and indeed our participants consistently did this.
We used cost in the paper as a way to explain why the kinds of disruption that these attacks cause actually matters - it's easy to write off attacks which don't have some kinetic effect as unimportant, but we believe this is not the case. The line of thinking is that sure, you can't straight up crash an aircraft with the GPWS attack, for example, but you might be able to force the pilot to cause missed approaches. In turn, this has a real economic cost which needs to be accounted for. This is a cost which you may be able to either preempt or remove if you come up with a way to safely mitigate the attack.
If you're interested in more of our analysis on this, we cover it quite extensively towards the end of our paper - or feel free to get in touch with one of us by email or on Twitter.
We don't suggest that the cost should come into the pilot's decision at all. They should do what is safest in their judgement, and indeed our participants consistently did this.
In that case you didn't include pilots from airlines where it does come into the pilot's decision, as some airlines include those costs in performance reviews and even attach consequences to them. And in case you are interested, those are the same airlines that got fined for consistently skimping on fuel load (insufficient margin for a holding pattern at the first alternate airport).
On the other hand, that is already dealt with in the article, as the most notorious (one could even say infamous) of those airlines flies exclusively B737s.
Good point, though I should clarify that the first sentence is referring to the fact that, in the paper, we did not argue either way about whether pilots should be making cost-based decisions. I mainly wanted to say that we were instead highlighting how the security of systems has consequences beyond the obvious ones.
I'm sure some airlines do have pilots factor this in, but it's not something we built into our analysis, as it is hard to get reliable information on pilot performance reviews. Not to mention finding participants who would be happy to put their name to it!