Software bug made Bombardier planes turn the wrong way

(theregister.co.uk)

105 points | by sohkamyung 1425 days ago

11 comments

  • redis_mlc 1424 days ago
    I understand the bug, and it's one of the worst imaginable.

    Doing an unauthorized departure turn can impact terrain. At night, it may not even be noticed by the pilots.

    Source: commercially-rated pilot.

    • speeder 1424 days ago
      It wasn't because of a software bug, but this is how "Mamonas Assassinas" died: during a missed approach the pilot turned in the wrong direction (but with the correct radius and all) and crashed into a very tall hill near the airport.
    • zomglings 1424 days ago
      Could you explain the bug here?

      It wasn't clear to me what the exact problem was from reading the article, just that it occurred under a very specific and uncommon set of circumstances.

      • NikolaeVarius 1424 days ago
        > "This issue will occur in departures and missed approaches where the shortest turn direction is different than the required turn direction onto the next leg if the crew edits the 'Climb to' altitude field."

        > "The FMS may change the planned database turn direction to an incorrect turn direction when the altitude climb field is edited."

        • zomglings 1424 days ago
          Thanks. The summary is actually very useful.

          What I meant, though, was that I didn't understand what the bug was - not how it manifested.

          • mopsi 1424 days ago
            Airports have landing procedures, essentially a list of waypoints and altitudes. A landing aircraft has to fly that route down to the runway.

            Each list has a Missed Approach Point, at which the list branches in two: landing or abort.

            If you are not ready to land at that point (too fast, can't see the runway, previous aircraft still on the runway, etc.), then you fly the abort part. Usually it tells you to climb to a certain safe altitude and turn towards a holding area for another landing attempt.

            These procedures can be flown manually, or activated for the autopilot to fly. The bug made the autopilot turn in the opposite direction to what's in the abort section.

            Here's a landing procedure for Helena regional airport: https://flightaware.com/resources/airport/HLN/IAP/ILS+OR+LOC...

            The narrowing beam is the instrument landing system that guides you towards the runway. If you reach the 4580-foot minimum altitude during approach but can't see the runway, you must fly the abort procedure, which is drawn with dashed lines: climb immediately to 4700 feet on the current heading, keep climbing to 9000 while turning to heading 021, then proceed north at 9000 on the 336-degree radial from the Helena radio beacon, and upon reaching waypoint WOKEN, circle until further instructions.

            This bug could cause the aircraft to turn southwest (towards the mountains) instead of northeast (towards the valley).

            At Helena, the bug would not reveal itself, because the right turn is only 114 degrees. If the procedure required a turn of more than 180 degrees to the right, for example about 200 degrees right towards SWEDD, the aircraft would make a left (shortest) turn instead. Green is what should be flown; red is what would be flown: https://i.imgur.com/ojShQa2.png
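
            To make the failure mode concrete, here is a minimal sketch of the "shortest turn" arithmetic in Python (my illustration, not FMS code; the headings are chosen only to match the 114-degree figure above):

                def shortest_turn(current_hdg, target_hdg):
                    # Direction of the smaller turn between two headings, in degrees.
                    diff = (target_hdg - current_hdg) % 360
                    return "right" if diff < 180 else "left"

                print(shortest_turn(267, 21))   # 114 deg to the right -> "right", matching the chart
                print(shortest_turn(267, 107))  # 200 deg right required -> picks "left", the wrong way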

            • redis_mlc 1424 days ago
              Note that you don't need hills for a problem.

              Many airports have construction cranes up to 200' high in the airport area.

              The US uses aerostats (tethered balloons, with multiple guy wires) along the southern border, up to at least 1000'.

              Airliners are moderately tall on the ground, so that's something else you can hit if near enough to the departure end on an adjacent taxiway or runway.

              There are illusions when flying at night that confuse your sense of bank, so without outside references, only the instruments would indicate a wrong turn. Look away or get distracted, and you just hit something.

              In addition, airline pilots aren't test pilots. There is an assumption that systems are unsurprising from one second to the next, and that a checklist can be used if not. An unexpected turn at ground-level would often turn out badly.

            • nraynaud 1424 days ago
              I would add that the pilot might not fly the published go-around procedure (hence mess with the altitude parameter) for various reasons. There is an interesting video on YouTube where the pilot has to ask for a special procedure before landing because the published one would send him into a thunderstorm cell in case of a go-around.
          • mjg59 1424 days ago
            There are two components to a turn - the desired heading at the end of the turn, and the direction you should turn to get there. Setting the "Climb to" altitude appears to have cleared the turn direction information. In the absence of that, the computer will turn in whichever direction results in a shorter turn to the desired heading. This is usually what you want, but not always, so I can understand it taking a while before anyone noticed it.
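
            A toy model of that failure mode in Python (hypothetical field names and logic, not Collins' actual code): the procedure database supplies a turn direction, an altitude edit drops it, and the fallback picks the shorter turn.

                from dataclasses import dataclass
                from typing import Optional

                @dataclass
                class Leg:
                    target_heading: float          # heading to roll out on, degrees
                    turn_direction: Optional[str]  # "left"/"right" from the database, or None

                def edit_climb_to(leg: Leg, new_alt_ft: int) -> Leg:
                    # Toy model of the bug: rebuilding the leg after an altitude edit
                    # loses the database turn direction instead of carrying it over.
                    return Leg(leg.target_heading, turn_direction=None)

                def turn_to_fly(leg: Leg, current_heading: float) -> str:
                    if leg.turn_direction:  # fly what the procedure says
                        return leg.turn_direction
                    # Fallback: shortest turn to the target heading.
                    return "right" if (leg.target_heading - current_heading) % 360 < 180 else "left"

                leg = Leg(target_heading=107, turn_direction="right")  # "long way around" on purpose
                leg = edit_climb_to(leg, 9000)                         # crew edits the "Climb to" field...
                print(turn_to_fly(leg, 267))                           # ...and it now says "left"
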
          • NikolaeVarius 1424 days ago
            The bug is that the flight computer could set an incorrect turn direction for a go-around/missed-approach or takeoff procedure.

            https://portal.rockwellcollins.com/documents/796122/0/OPSB+R...

    • lutorm 1424 days ago
      It seems pretty far from "worst imaginable". I mean, it's not like it goes into an uncontrollable dive.
      • redis_mlc 1424 days ago
        Passengers fear falling out of the sky, but that's a pretty rare thing with airplanes.

        A dive at altitude would give you time to react, usually minutes.

        Hills are invisible in the dark.

        Also, charted departure instructions are something you bet your life on, as well as your passengers'. So if you don't trust the plates or the FMS, you can't fly in IMC or at night.

      • dehrmann 1424 days ago
        If you still have control surfaces, is a dive at sufficient altitude ever uncontrollable? Flat spins are way scarier.
        • redis_mlc 1424 days ago
          The main issues are:

          - dive with power causing overspeed

          - dive causing controls to lock due to transonic shock waves

          - improper recovery from a spiral dive will over-G

          - MCAS-style confusion

          - bottoming out on a phugoid oscillation

          - unrecoverable spins, usually in jets

          But in general, if you unload the wings (reduce G), then nothing breaks.

  • dghughes 1424 days ago
    Four months ago there was a post of another article from theregister.co.uk about another bug: the cockpit/flight deck displays went blank if a Boeing 737 landed at any runway oriented at 270 degrees true.

    https://news.ycombinator.com/item?id=21991087

    • t0mas88 1424 days ago
      That sounds more dangerous than this one, but it isn't: flying the wrong missed approach or departure is far more dangerous than any display/instrument failure, because the crew wouldn't be aware of the problem. If the flight display fails, you look at the backup instrument and keep flying; every crew is trained for that.

      But if the missed approach or departure procedure is wrong, the crew has a high probability of not noticing (this all happens in a very high-workload situation). If you don't notice, you can't fix it. What makes it worse is that this bug happens in situations where the procedure requires a turn "the long way around", and they wouldn't design the procedure that way unless it's really necessary. So there is a big chance there is terrain or an obstacle on the other side.

      Source: I'm a commercial pilot

    • kayfox 1423 days ago
      Oddly enough, I think the software in both cases is written by the same company: Collins Aerospace.
  • baybal2 1424 days ago
  • trhway 1424 days ago
    >Most bugs in airliners tend to be unforeseen memory overflows

    the 21st century, planet Earth.

  • rrmm 1424 days ago
    Another turn-the-wrong-way issue happened on early versions of the 737: the rudder would go hard-over in the direction opposite to the one commanded. It happened with such force that it was difficult (if not impossible) for a pilot to counteract it using the pedals (they would basically have to stand on the pedal).

    It killed at least 157 people. The culprit in this case, IIRC, was a flaw in the hydraulic servo valve design combined with large temperature swings. The story of the guy who finally figured it out is a fun one.

    https://en.wikipedia.org/wiki/Boeing_737_rudder_issues

  • thePunisher 1424 days ago
    I keep noticing that more and more aviation and space missions fail because of software problems. It seems to me that either the new generation of engineers is generally less competent, or companies see software as an afterthought which can be outsourced to lower-wage countries.

    The Boeing MAX and Starliner come to mind, but the failed Moon missions by Israel and India are also examples of this trend.

    Cost cutting in software development is costing companies dearly. Boeing may even go bankrupt because of this.

    • Jtsummers 1424 days ago
      It's a hiring problem. They let go of the good and/or experienced engineers in the 00s, then replaced them primarily with EEs with minimal programming experience (as a computer scientist, I was told at my first job that I could not write code, only test it; I did not stay long). These were very compliant people, happy to do 60 hours or more per week (work harder, not smarter). They lacked the historical context of the systems they were maintaining/developing, and the experience to properly model the systems under development [0].

      This hiring problem is compounded by an oversight problem. The program managers are similarly inexperienced, or they came strictly from the testing side with no concept of what software development entails (I've seen this a lot). They aren't necessarily bad at managing requirements (they may actually be really good at it), but they absolutely fail to understand that software is a hard problem (especially when dozens of subcontractors are involved) that extends beyond the technical problem to the communication and coordination problem. And that's assuming they're experienced; USAF program managers for software (IME) are history majors straight out of college. DoD programs are scary.

      [0] Most avionics systems, in my experience, boil down to rather straightforward state machines. Understood this way they become much simpler to write and test. The hard part is hitting your timing constraints, but that's easier to achieve with correct-but-slow-and-maintainable code than with incorrect-but-fast-and-unmaintainable code. Inexperienced developers won't see this possibility, either by failing to spend time studying the requirements or failing to understand how to implement state machines at all.
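
      For what it's worth, a minimal sketch of the explicit-state-machine style described in [0], with invented states and events rather than any real avionics system:

          from enum import Enum, auto

          class Phase(Enum):
              ON_GROUND = auto()
              TAKEOFF = auto()
              CLIMB = auto()
              CRUISE = auto()

          # Every legal (state, event) pair is enumerated, which makes the logic
          # easy to review against the requirements and to test exhaustively.
          TRANSITIONS = {
              (Phase.ON_GROUND, "throttle_up"): Phase.TAKEOFF,
              (Phase.TAKEOFF, "wheels_up"): Phase.CLIMB,
              (Phase.CLIMB, "level_off"): Phase.CRUISE,
          }

          def step(state, event):
              # Unknown pairs hold the current state rather than doing anything undefined.
              return TRANSITIONS.get((state, event), state)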

    • cryptonector 1424 days ago
      (The MAX was not so much a software issue as an architecture issue, starting with insufficient redundancy, so it's not a good example of software causing problems for airliners.)

      There are two reasons why you can expect software to be more and more the cause of airliner safety issues:

      - software is eating the world

      - software is getting more complicated

      The first is a long-term trend now. Look under the hood of any automobile from before the 80s: no computer to be found. Look under the hood of any automobile from the past 30 years: computers abound. The reason for this is that many problems are easier to address in software than in hardware. Of course, you go from N hardware problems to some possibly smaller set of possibly simpler hardware problems at the cost of gaining a set of software problems -- but this trade-off usually pays off. In some cases this trade-off enables functionality that would be infeasible to create otherwise.

      The second problem is also a long-term trend: CPUs, systems, operating systems, and applications have all tended to get more complex. In embedded systems the trend has been less strongly towards ever-increasing complexity, but even in embedded systems things have gotten more complex.

      Whether the problem is less competence among today's programmers is hard to establish here. First, we need much more software, which means we need many more programmers, which means the quality of programmers you get probably does decrease, though then again, we do have more programmers overall as more people (competent and otherwise) are attracted to the industry. But more importantly, the increase in complexity of today's systems could very well be enough to make yesteryear's competent programmers incompetent today -- you can't really compare software development 40 years ago to software development today.

      (I object to this idea that lower-wage programmers necessarily can't be competent, though that isn't quite what you wrote. It's true that a lax process for outsourcing can mean you get less competent programmers, and it's probably true that higher GDP/capita correlates with availability of competent programmers. But it doesn't follow that there are no competent lower-wage programmers in India, say.)

      • thePunisher 1424 days ago
        The problem seems to me that management doesn't appreciate the need for competent (and therefore expensive) software engineers in critical projects like defense, aviation and space. The whole idea of trying to save a few bucks on something as critical as a fly-by-wire system or end-to-end testing seems totally ludicrous to me.

        Boeing has been cutting too many corners since the MBAs took over and started reorganizing things to maximize profitability. There's a good chance the company will fail because of this.

    • aidenn0 1424 days ago
      Don't underestimate the fact that there is a higher volume of software though.

      Many things that used to be done by analog computers or manually done by the pilot are now done in software.

      In addition, the Boeing MAX was largely a system design issue; the software operated as designed, and had the logic been implemented in hardware, it would likely have failed in the same manner.

    • londons_explore 1424 days ago
      Most modern systems aim to move all 'hard' bits to software.

      There's no surprise that's where most of the failures occur.

  • 908B64B197 1424 days ago
    Turning off the feature doesn't sound so bad, considering the CRJ-200 first flew in 1991. It took 26 years to identify the bug, so I assume the feature isn't used frequently at all.
    • throwanem 1424 days ago
      It's an avionics bug, so very likely the affected equipment is aftermarket.
      • MaxBarraclough 1424 days ago
        How's that? Are most avionics issues due to aftermarket equipment?
        • throwanem 1423 days ago
          I don't know about that, but it'd be a surprise to see an airframe of that age in commercial service that hadn't had its avionics upgraded, since that's a relatively simple way to gain new flight management capabilities. It'd also be a surprise to see so severe a bug go undetected for so long, if it was part of the original equipment.
  • parkovski 1424 days ago
    This reminds me of a meetup I attended last fall where they were talking about the Spectre/Meltdown issues. I asked the presenters whether anything in chip manufacturing/verification processes had changed as a result, and they seemed surprised.

    To me, when a software bug shows up in a critical system, that means you actually have a logistics bug. Airplane control software should not be allowed to have bugs. CPUs should not be allowed to have bugs. And OSes should not be allowed to crash (looking at you, Microsoft).

    When one of these things happens, in my opinion the correct response is _not_ to just release fixes and workarounds and then say "we'll try really hard to not let it happen again." You do that, sure. But the first time you see airplane software malfunction, that means you need to change the way the software is written and released so that the whole class of issues will not ever happen again. You don't stop at a public apology, you don't fire the person that unintentionally wrote the bug. If you have to hire mathematicians to formally prove the critical paths of the software, you do that. If it costs 10x more to release bug-free software, oh well, you do that.

    All of these corporate people thinking they can save money by spending less on quality are extremely naive. You can do a financial analysis of this, but they're doing it wrong. Did you ever consider what the cost of a whole generation just not trusting air travel at all would be?

    • na85 1424 days ago
      >But the first time you see airplane software malfunction, that means you need to change the way the software is written and released so that the whole class of issues will not ever happen again.

      This is pretty good intuition but often a systemic change is not economically feasible. For avionics software at least, a rewrite of the software would likely have to be recertified from scratch before it would be allowed to fly.

      We do, however, have several different quality assurance programs in Aerospace that are supposed to address this sort of thing.

      Once you identify the root cause, the process found to be deficient is supposed to have a Process Owner who is required to create a preventive and corrective action plan to prevent a recurrence, with more severe problems requiring more robust action plans. Done right, the process owner is supposed to be empowered to make the changes that need to be made.

      These systems tend to be evolutions of ISO 9000 as pioneered by Toyota (IIRC). They are highly bureaucratic and soul-sucking, but they are also the least-shitty solution that's been tried.

    • nickff 1424 days ago
      Are you willing to pay 10x more for the product with that supposed extra reliability (100% vs 99.99966%)? Before you answer, remember that perfection cannot be proven ex ante; it can only be assured.

      You should also keep in mind that real systems have fault modes aside from software bugs and hardware glitches, such as unanticipated edge cases and user error, which may dominate your actual failure statistics.

    • Veserv 1424 days ago
      You are correct, but airplane companies already do that for the most part and much much more.

      The difference in reliability between normal software and airplane software is so vast that "best practices" from normal software cannot be applied to airplane software, since that would be gross criminal negligence. To explain: in the 10 years prior to the 737 MAX problems there were 50,000,000 flights, and software was not implicated in a single passenger air fatality. The average flight is ~5,000 km, which is ~4-5 hours. So, in ~250,000,000 flight-hours, there were two crashes due to software. A plane takes ~3 minutes to fall from cruising altitude, so we can model this as a downtime of 6 minutes per 250,000,000 hours, which gives a downtime of 1 in 2,500,000,000, or 99.99999996% uptime (yes, that is 9 nines). In contrast, I think most software people would agree that AWS is high quality. The AWS SLA specifies 99.99% uptime (1 in 10,000 downtime). So, by this metric, airplane software is 250,000x more reliable than normal high-quality software.
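
      (The arithmetic above, reproduced as checkable Python so the 250,000x figure can be verified:)

          flight_hours = 250_000_000      # ~50M flights x ~5 hours each
          downtime_min = 2 * 3            # two crashes, ~3 minutes of fall each

          downtime = (downtime_min / 60) / flight_hours
          print(1 / downtime)             # 2,500,000,000 -> "1 in 2,500,000,000"
          print(f"{1 - downtime:.10%}")   # 99.9999999600% uptime (9 nines)
          print((1 / 10_000) / downtime)  # 250,000x vs the AWS 99.99% SLA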

      The point of this is that the standard for airplanes is almost inconceivably high compared to normal software. To think that they are incompetent, or to suggest that all they need to do is adopt X or Y common-sense best practice, is a gross misunderstanding of what is being done and what needs to be done to improve. It would be like telling a civil engineer building a 50-story skyscraper that they really need to adopt high-quality wood construction techniques from makers of doghouses. To actually improve, you need to consider practices 250,000x better than "best practices" and go from there.

      To put it another way, the solutions are actually really really good, unfortunately the problems are really really really really hard.

      • jfim 1424 days ago
        Not to detract from your point that aeronautical industry software is reliable (it is), but the 737 MAXes that crashed were all new planes. There weren't even 24 months between the first delivery of a MAX and the model being grounded.

        The issues with the MAX were also clearly preventable and there were multiple failures of the systems (regulators, internal reviews, etc.) that were in place to catch these kinds of issues.

        But as you point out, the aeronautical industry has an excellent track record for software reliability, if you evaluate reliability by hull losses. By other metrics it's a bit more debatable (e.g. the integer overflow on the Dreamliner that requires it to be restarted at least every 248 days), but it still keeps people moving safely.
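
        (The 248-day figure is consistent with a signed 32-bit counter of hundredths of a second overflowing - an inference from the number, not something stated in the reports:)

            print(2**31 / (100 * 60 * 60 * 24))  # ~248.55 days until the counter wraps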

        • Veserv 1424 days ago
          Yes. I included the MAX because otherwise the software-related fatalities over the last 10 years are 0. If you count just the MAX, the low end in terms of flights is ~200,000, with an average of 3 hours per flight. Using the same time basis as above, that is 1 in 6,000,000, or 99.99998% uptime, which is 600x better than AWS by my previously used metric. The software of an "unconscionable deathtrap" is 600x better than extremely high-quality server software.
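
          (The same back-of-the-envelope model with the MAX-only numbers:)

              flight_minutes = 200_000 * 3 * 60   # ~200k flights x ~3 hours each
              downtime = 6 / flight_minutes       # two crashes, ~3 minutes of fall each
              print(1 / downtime)                 # 6,000,000 -> "1 in 6,000,000"
              print((1 / 10_000) / downtime)      # ~600x the AWS SLA figure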

          My primary point is that many people look at these failures and incorrectly conclude that the processes in place are objectively terrible and below average. This leads to them discounting the processes in these systems in favor of policies from vastly less reliable systems that they think are quality-focused or "best practices" because they, fairly, think "bad" in a safety-critical context means the same as regular "bad", so regular "amazing" is clearly better. In truth, "unconscionable deathtrap" and "gross criminal negligence" in the airplane world is more of a synonym for "amazing beyond belief" in the rest of the software industry. The correct takeaway is understanding that regular "amazing" is actually orders of magnitude worse than "unconscionable deathtrap" and is thus completely inadequate for the job. As a corollary, if you do not think you are doing "way better than amazing" you are probably not doing an adequate job in these contexts.

          To reiterate, the solutions are really really good, unfortunately the problems are really really really really hard.

      • dahart 1424 days ago
        I do totally agree with your larger points, but these numbers just don’t make any sense, and analysis like this could do unintended damage to your otherwise good points. Would it perhaps be better to cite the industry testing practices and procedures, the volume of testing, the regulations, training, feedback loop, redundancies, and all the other safety efforts behind airline software?

        Uptime is not a comparable metric in any way. Aircraft computers often reboot every flight or every day. AWS downtimes don’t typically result in fatalities. The fall time of the 737 MAX before it impacts isn’t ‘downtime’, and simply cannot be used to summarize the reliability of aviation software as a whole. Arriving at 250000x this way makes it a meaningless number, and you didn’t account for the bug in the linked article in your reliability estimate at all.

        • Veserv 1423 days ago
          No, not really. How would a normal software engineer evaluate the processes if they were stated? There is no frame of reference for what is effective or not if you do not trace it to quantitative outcomes. Like, if I said "The industry uses an autoregressive failure model with 175 billion parameters, 10x more than any previous non-sparse failure model", would that mean anything? (It does not; I just replaced "language" with "failure" in the GPT-3 abstract.) How can anybody tell what is an effective or ineffective process without tracing it to an actual outcome? 10x as many tests and as much code mean nothing if they test nothing of value. Redundancies are irrelevant if they are completely correlated. Regulations mean nothing if they encode ineffective or meaningless techniques (look at security standards which require antiviruses). One of the only ways to compare processes and not be tricked by fancy words, especially as a non-expert, is to look at and compare actual outcomes.

          I agree that the metric I chose is somewhat sloppy, but you can afford to be sloppy when you are comparing things with such disparate outcomes. Sure, maybe we are not comparing a 1-story house to a 50-story skyscraper, only a 30-story one, but that has little impact on the fact that they are fundamentally different, and declaring them even remotely comparable is a massive category error.

          I, however, disagree that "uptime" is a nonsense metric, though there are absolutely better ones. "Uptime" in this context means duration/probability of critical operational failure which is an extremely relevant metric. That AWS does not result in fatalities during critical operational failure has no bearing on whether critical operational failure occurred or not, it just means that it matters less. A valid quibble is that I am using crashes as a proxy for failure which discounts critical software failures that did not cause critical operational failure due to non-software redundancy, but again, the outcomes are so disparate it beggars belief that this would bridge the gap.

          As for aircraft computers being rebooted frequently, true. So? I am comparing full system reliability during operation, not individual components. It is not like individual AWS servers run indefinitely; they are rebooted frequently, but the system as a whole stays operational due to redundancy and migration.

          The reliability estimate does account for the bug. The bug did not cause a critical operational failure. It could cause a critical operational failure in an extremely unlikely case if it remained undetected and no measures were taken to avoid or correct for it. However, it was detected and countermeasures have been put into place, so the processes in place continue to achieve their intended goal of preventing critical operational failure. So, the outcome-based estimate continues to be accurate.

          Just to be clear, an outcome-based estimate is not perfect. By its nature, it only looks at the past, so it has no true predictive power. You cannot use an outcome-based estimate to predict the effects of process changes. However, it is a relatively unbiased way of evaluating whether prior processes were effective, which tells us which past processes actually worked and what the effects of process changes were.

    • cryptonector 1424 days ago
      This is not about saving money! You can't simply shut down manufacturing of Intel or other chips that have Spectre/Meltdown issues, because that would leave us with essentially no usable CPUs for new computers!

      The Spectre/Meltdown issues are deep and architectural, not simple to fix. It's not just a batch of CPUs that's the problem, but all of them.

      Besides, if a CPU ships with a bug that can be fixed via a microcode patch, then it would be a tremendous economic waste for all humanity to throw those CPUs out.

      Even when new CPUs come out that can be shown not to have Spectre/Meltdown issues, it will take a long time to replace the installed base of those that do because it's not a matter of a little bit of money, but a matter of a great deal of money and opportunity costs.

      So microcode patches and software mitigations are all there is. Absolutist attitudes don't help.

    • Ididntdothis 1424 days ago
      “But the first time you see airplane software malfunction, that means you need to change the way the software is written and released so that the whole class of issues will not ever happen again. “

      This sounds pretty good in theory, but in practice you will just trade the current set of issues for new ones.

      In reality, systems and their interactions are so complex that no amount of software design can avoid bugs and the need to fix them. We can certainly improve, but it would be naive to think you can design 100% reliability into something like an airplane.

    • jacquesm 1424 days ago
      You are about 100% right on the mark here. There is only one slight problem: people don't want to pay for very high quality software except in a very limited number of fields.

      In a way, every real software improvement (not fancy language flavor 'x' of the year, but entirely new ways of developing software) has always had the main goal of writing software with fewer bugs, faster.

      That's the whole reason we have abstractions, compilers, syntax checkers, static analyzers and so on. In spite of all those, software still has bugs, and budgets are still not sufficient to write bug-free software.

      On another note: this problem is getting worse over time. As tools improved, codebases got larger and the number of users multiplied at an astounding rate, resulting in many more live instances of bugs popping up. After all, software that contains bugs but is never run is harmless; only when you run buggy software many times does the price of those bugs really add up.

      Somewhere we took a wrong turn and decided that more of the same is a better way to compete than having one of each, perfected and honed until the bugs have been (mostly...) ironed out.

    • sokoloff 1424 days ago
      > If you have to hire mathematicians to formally prove the critical paths of the software, you do that. If it costs 10x more to release bug-free software, oh well, you do that.

      If you’re trying to keep planes from crashing at all costs, sure. If you’re trying to reduce deaths from travel, that’s a terrible plan. Every family that you price out of commercial air travel and convert over to private auto travel instead has been placed at significantly higher risk as a result of the excessive pursuit of safety.

      It’s the reason the FAA allows lap infants under 2 years old. Not because that’s “safe” in absolute terms, but because it’s safer than the likely alternative.

    • ashtonkem 1424 days ago
      On one hand, I understand your sentiment, on the other hand even with these bugs air travel is as safe as it’s ever been. We’ve reached a point where fewer people die in air travel per year than at any other point in the history of air travel, and that’s before you account for the number of miles travelled. It’s almost ridiculous how safe air travel is on average.
      • martinald 1424 days ago
        That was true until the 737 MAX, which statistically must have been one of the most dangerous planes (or at least jets) in history: very few miles flown, and two complete hull-loss accidents very close together. These bugs really do matter. You can get away with quite a lot of minor issues, but when you hit a serious failure like the MAX's, even one triggered only 1 in 10,000 flights, you end up with an awful lot of casualties.
        • phire 1424 days ago
          MCAS was not a bug. The software behaved exactly as specified.

          The issue was the specification itself, which assumed pilots would reliably catch the uncommanded trim down, diagnose it and disable the whole electric trim subsystem within seconds of the problem behavior arising.

          That assumption turned out to be massively flawed.

          • heavenlyblue 1424 days ago
            Then it means they had to formally verify the specification itself.

            It's not that hard, by the way. And they did that, but handwaved away the critique - the typical approach of "my gut is probably more correct than maths".

            • nicoburns 1424 days ago
              Formal verification can't tell you if your assumptions are off. It can only work from those assumptions.
          • jacquesm 1424 days ago
            Your comment implicitly - and probably unintentionally - appears to assign part of the blame to the pilots, which I think is a very bad thing to do in this particular case.
            • phire 1423 days ago
              Not my intention at all.

              Even if my comment implies that there might have been pilot error, pilot error doesn't mean pilot blame.

              In this case, I'm very much of the opinion that the blame belongs either with the official Boeing training program, which didn't train 737 pilots to correctly handle this scenario,

              or with the design specification, which relied on the assumption that pilots would be able to correctly handle this scenario without ever testing that assumption. Or potentially both.

              Even if, say, 10% of pilots could fluke into handling this scenario without the correct training, that doesn't mean the other 90% are to blame for not fluking into a correct solution.

            • shreyansj 1424 days ago
              I think "specification" here refers to the type specification of the aircraft. It's not putting the burden on the pilots, but rather on the lack of pilot training, due to Boeing and the airlines not wanting to bear the cost of training pilots for a new aircraft type.
              • bsder 1424 days ago
                > airlines not wanting to bear the cost of training pilots to a new aircraft type.

                This is a perfectly reasonable request by the airlines. Some airlines rely on the operational efficiency of a single aircraft type. It lets them interchange parts and people and not have to worry that the wrong airplane is in the wrong spot.

                What was NOT reasonable was Boeing providing an aircraft that actually had MAJOR differences yet claiming it was the same.

                And what makes it particularly stupid is no airline that relies on a single airplane type is going to switch from Boeing to Airbus because they would have to migrate their entire fleet en masse. So Boeing had plenty of time to certify the 737 MAX airframe properly.

            • redis_mlc 1423 days ago
              Incorrect. The Indonesian investigators shared blame between Boeing, mechanics and pilots. (Their equivalent of the NTSB is US-trained.)
              • redis_mlc 1423 days ago
                "Indonesian investigators have determined that design and oversight lapses played a central role in the fatal crash of a Boeing 737 MAX jet in October, according to people familiar with the matter, in what is expected to be the first formal government finding of fault.

                The draft conclusions, these people said, also identify a string of pilot errors and maintenance mistakes as causal factors in the fatal plunge of the Boeing Co. plane into the Java Sea, echoing a preliminary report from Indonesia last year."

                https://www.wsj.com/articles/indonesia-to-fault-737-max-desi...

        • CamperBob2 1424 days ago
          The MAX problems weren't so much software bugs as specification bugs. The software did exactly what it was told to do by criminally-negligent engineering and management personnel.
      • londons_explore 1424 days ago
        commercial air travel.

        Private planes and industrial planes still have an awful safety record.

        Most stats also exclude 'unrelated' deaths which happen during a flight (even though there is a good chance the changes in air pressure, stress, lack of medical care, and cramped conditions at least contributed to the death).

        Stats also often exclude terrorist or war shootdowns of commercial planes, which are starting to become significant.

        • bsder 1424 days ago
          > Private planes and industrial planes still have an awful safety record.

          I don't know about industrial, but I assume "private" is a combination of 1) private pilots suck and 2) too much catering to the client.

          Kobe Bryant would be my unfortunate shining example of 2). The pilot either wanted to cater to Kobe or would have been fired if he didn't, and so went up in weather it was stupid to go up in.

          As for 1), I've seen far too many sleep-deprived, hungover, drunk, or stoned private airplane pilots. And this is on top of the fact that they probably aren't the most experienced pilots to begin with. What is it about piloting that seems to attract frat boys who never grew up?

    • LifeLiverTransp 1424 days ago
      Naive does not even begin to describe it. You do not save money by writing software cheaply. You are borrowing from the future in the form of tech debt and hidden bugs. All debts are owed and paid, one way or another.

      Managers who claim to have overcome this are paying off one credit card with two new ones.

  • kohtatsu 1424 days ago
    I just flew out of this airport yesterday, with only 5 passengers onboard a Bombardier Q400.

    Thankfully there are no nearby hills for this bug to kill anyone there.

    Unrelated, but how many carbon offsets do I buy?