Vulnerabilities found in GE anesthesia machines


86 points | by bookofjoe 103 days ago


  • bookofjoe 103 days ago

    DHS Security Alert issued July 9, 2019

    • bookofjoe 103 days ago

      GE Healthcare Security Alert issued July 8, 2019

      • bookofjoe 103 days ago

        CyberMDX Vulnerability Research & Disclosures (Date of discovery: October 29, 2018)

        GE Healthcare Security Alert issued July 8, 2019

        DHS Security Alert issued July 9, 2019

        • NegativeLatency 102 days ago
          • softwaredoug 102 days ago

            Just had a CT scan from a GE machine. Can’t say I wasn’t wondering about the likelihood some bug would give me too much radiation...

            • ska 102 days ago

              I have no specific knowledge of GE CT scanners, though I have seen some very crufty GE code that was full of potential issues.

              However, for what it's worth, these sorts of safety systems (and similarly SAR monitors in MRI, etc.) tend to be well validated as part of the overall hazard and risk analysis. People spend the time here, and these systems tend to have interlocks and other, often redundant, safety subsystems that work.

              I guess what I'm saying is that I wouldn't expect bugs to trip you up in the primary-yet-dangerous functions, since that is exactly where the obvious problem areas are and where the validation effort gets concentrated.

              What this article describes is how the attack surfaces on medical devices aren't good. This is definitely true, especially with older designs that have been updated over the years but were originally conceived with no network, or only a private network, in mind.

              • kwiens 102 days ago

                > Between June 1985 and January 1987, six patients were seriously injured or killed by unsafe administration of radiation from the Therac-25 medical linear accelerator.

                > The Therac-25 software errors that caused the radiation overexposures can be reduced to interface errors. The first of these errors involved the entering of treatment data by the machine operator. Once an operator enters treatment information at the terminal outside the treatment room, the magnets used to filter and control radiation levels are set. There are several magnets, and the process takes about 8 seconds. If the operator makes a very quick change to the treatment information, within 1 second, the change is registered. Likewise, if the operator is slow about it and takes more than 8 seconds, the change is also registered. However, if the change occurs within the eight seconds it takes to set the magnets, the change is not detected: the magnets continue to be set up improperly, and thus the level of radiation is set improperly.

                > The last of the accidents occurred at the Yakima Valley Memorial Hospital. On January 17, 1987 an operator placed a patient on the turntable in the field-light position for small position verification doses. After attempting to administer the treatment dose, the machine shut down with a quick malfunction message and a treatment pause. The operator pushed the "P" button, and the machine paused again. The machine indicated that the patient had received his prescribed 7 rad of treatment. The patient, however, complained of a "burning sensation" and died three months later from complications related to the overdose (Leveson and Turner, 1993, p. 33).


                • ska 102 days ago

                  Yes, that happened.

                  And partially because the industry learned from Therac-25 (and other issues), collectively it got much better at avoiding this sort of failure mode.

                  I’m not saying it’s perfect, but it is not a high risk scenario for the poster I responded to.

                  • colechristensen 102 days ago

                      The only complex systems I'm really comfortable trusting my life to are aircraft. The reason why is not the absence of accidents but the NTSB's response to them and its public reports.

                    • ska 102 days ago

                      They really do have a good system

                  • seren 101 days ago

                      Actually IEC 62304, which describes the software development lifecycle for medical devices, was written precisely in answer to the Therac accident. All medical devices with software that you use nowadays have to follow IEC 62304.

                      It does not mean that medical device SW is perfect or bug-free, but it means that the manufacturer should demonstrate some level of risk management, verification, and validation to the regulatory bodies before being allowed to sell a new product. It is not perfect, but you should not get sick because someone pushed an untested bug fix on Friday night and your Monday-morning exam ran on an untested SW release; the process would not allow it.

                    • ska 101 days ago

                        That's not quite right; these things didn't come in with 62304. For a (US-based) example, before adopting it the FDA still expected you to demonstrate risk & hazard analysis, V&V, and general SDLC management applied to your SW products (just like hardware ones). However, they offered no opinion on how you should do it. The overarching standard you were held to as a manufacturer is (still) ISO 13485. And coming from a hardware-centric view of things, the FDA wasn't sure how to evaluate SW processes, and different panels did things differently.

                        What IEC 62304 adds to the mix is specific guidance on your SDLC process. If I recall correctly, you are still not required to audit to it in the US, but new projects should follow it or demonstrate why they do not.

                      • seren 99 days ago

                        Thanks for the historical perspective.

                • wyldfire 102 days ago

                  It's not terribly likely IMO (I worked on a CT scanner design team). The worst-case scenario is that the scanner would halt mid-scan and the CT tech would repeat the scan, causing accumulated dose. If this iterated several times it could be cause for concern, but I think those outcomes are extremely rare, and most techs would give up after a couple of aborted scans.

                  • dontbenebby 102 days ago

                    It's happened before and is taught in every human factors course


                    • wyldfire 102 days ago

                      Yeah, it's not that something like that could never ever happen with a CT scanner, but the Therac-25 was intended as a radiation therapy device. The vast majority of CT scanners are strictly diagnostic, so the magnitudes of dose that could even be produced are limited. [1]

                      I encourage concerned customers to ask for their dose report. IMO it's mildly interesting on its own. It's more interesting if your physician or surgeon prescribes follow up tracking to see if tumors return/grow. In that case you have repeated scans over a period of many months to track lesions/tumors and the accumulated dose at the same body location becomes worth more concern.

                      The prescribing physician should balance the harm of the dose against the diagnostic concern/risk being evaluated, but there's nothing wrong with advocating for yourself. In general, I think that in the cases where you have repeated scans, it's because something terribly serious has been diagnosed that far outweighs the impact of the dose.

                      Also if you're concerned about dose you might be able to use CT scanner dose features as a tool when evaluating different outpatient imaging options. Feel free to debate this with your physician and not necessarily go to the one that they get a kickback from.

                      [1] For a stunning counter-example to my claim here, you can look at the case where the CT tech completed MANY repeated cranial CT scans of a toddler (scanner was mfd by Picker IIRC) --

                    • sjg007 102 days ago

                      CT scanners should have a hardware interlock to prevent that.

                    • kazinator 103 days ago

                      The problem here is that life-threatening machines aren't air-gapped.

                      • tptacek 102 days ago

                        People say this in every IoT discussion (or rather, every IoT vulnerability invites litigation about the premise of IoT), but here's the thing: these systems are not air-gapped, nor will they be. The argument is pointless. The median HN reader would not f'ing believe what kinds of things you'd think would be air-gapped that are not; in fact, they'd probably be gobsmacked by what wasn't air-gapped in the 1990s, when you'd use X.25 NUAs to reach them instead of URLs.

                        • tedunangst 102 days ago

                          Actually, this seems like an area where some very literal litigation may make a difference? If your anesthesia machine is discovered to be plugged into the wrong network, your surgery license is revoked.

                          I am generally opposed to liabilities for software defects, but system integration seems a more reasonable place to apply rules.

                          • deogeo 102 days ago

                            So because convincing manufacturers to air-gap devices is difficult, we should instead try for the easier goal of making bug-free software and hardware???

                            • tptacek 102 days ago

                              Because convincing manufacturers to air-gap devices is intractable, we must instead work towards bug-free software and hardware.

                              • deogeo 102 days ago

                                Air-gapped systems far outnumber bug-free ones, so isn't that goal, by your own logic, even more pointless?

                                • tptacek 102 days ago


                                  • deogeo 102 days ago

                                    Let me put it this way - how would you achieve non-exploitable systems? Not just 'move towards them', but actually reach that goal?

                                    Because from what I can tell, it would require formal proof that the connected parts are safe (logic bugs can also be exploited, so just using a safe language is insufficient).

                                    If you can't get manufacturers to do the cheap, easy air-gapping, how will you get them to do the very expensive and time-consuming formal verification, that only a tiny, tiny fraction of developers is proficient in? Intractable doesn't even begin to describe it!

                                    • tptacek 102 days ago

                                      There is nothing we're going to do that will immediately result in non-exploitable systems, and, as with all sorts of other kinds of risks, we're going to have to mitigate rather than eliminate this risk.

                                      Nobody is going to air-gap anything and I think there is ample evidence over the last 20 years that the trend goes in the opposite direction. Given the reality of the world we actually live in, productive conversations about securing embedded and IoT systems can't involve air gaps.

                                      Slightly later:

                                      Here's another, simpler way to say the same thing: you might just as productively argue that these products shouldn't be built at all.

                            • departure 102 days ago


                              It's kind of like how I've seen you comment multiple times about how software shouldn't be written in C. You even said it during the DNS fuzzing thread earlier today.

                              You saying that is just as pointless. You would be very surprised at how much new stuff is written in C!

                              • tptacek 102 days ago

                                No, software written in C is and will continue to be replaced by software written in modern memory safe languages; the trend is strong and growing, to the point where we tend to look askance at (1) new software written in C and (2) popular software written in C that doesn't need to be written in C.

                                The opposite is the case for network connectivity: the trend is demonstrably and decisively towards increased connectivity, and "air-gapping" is not taken seriously in the industry. There is no indication of that changing.

                                The comparison you're making is invalid. I'm not trying to score points; I'm observing that the litany of "this should be air-gapped" complaints isn't productive, because things are not going to be air-gapped.

                                (My full-time job is assessing software security and my background is in C software vulnerability research, so it's less likely that I'd be "surprised" by a new C package than that I'd warn my clients to avoid it, and flunk it in vendorsec assessments.)

                                • ziddoap 102 days ago

                                  >I'm observing that the litany of "this should be air-gapped" complaints isn't productive, because things are not going to be air-gapped.

                                  So, how would one be productive about it?

                                  Your post seems to say, and please correct me if I am wrong, that because things aren't currently happening and there are some barriers to making it happen, we should all give up on pushing for it? To me, that seems like a rather fatalist attitude to have. Do we apply this line of thinking to everything? Or just air-gapping?

                                  • tptacek 102 days ago

                                    Well, I guess I'm going to say something challenging here: give up on air-gapping, since it's not going to happen. Revise your premises to assume technology that can be connected will be connected, and proceed accordingly.

                                    I am not, by the way, happy about this, but I've also spent essentially a lifetime (minus maybe 13-14 years at the beginning) having all the surprise on this particular issue knocked out of me.

                                    • hackinthebochs 102 days ago

                                      Sure, companies won't air gap willingly. But legislation can fix that. I see no reason why this world (or this country) is one where such legislation necessarily cannot happen.

                                      • ziddoap 102 days ago

                                        It's really a shame that a (relatively easy to implement) solution to a problem, one that could potentially save lives in this case, should be left by the wayside while a new solution has to be invented. And that new solution may also never be implemented.

                                        I guess I have a little bit of surprise left in me on this issue.

                                        • closeparen 102 days ago

                                          It's not a solution, it's a layer. Vulnerabilities still matter behind an airgap. A hospital is a large, semi-public facility. Patients are left alone in their rooms with network drops. There are legitimate business needs to transfer records in from and out to other institutions; who's to say those can't contain exploit payloads? There are contractors, vendors, and high-turnover low-skilled staff circulating every day. And even if there weren't, if you've been thinking of the airgap as a "solution" and not keeping up with patches, the first person to cross it will have a ridiculously easy time with whatever's inside.

                                          It's good to raise the bar from drive-by internet strangers to people and organizations willing to take mild physical risks, but it's not a panacea.

                                          • ziddoap 101 days ago

                                            I suppose I could have been more precise in my wording and clarified that I see it as a solution to a piece of the puzzle. Indeed, you do word it better in saying it is a layer. I agree. It is a solution to a facet of a problem which exists at a certain layer.

                                            I don't quite know how my comment led you to believe that I think airgapping is a panacea which solves all the existing computer woes in the world.

                                            I certainly don't think, and didn't intend to imply, that airgapping removes the risk from contractors or a reason to not keep up on patches. Again, I'm confused how you reached that conclusion based on my comment.

                                            • aptwebapps 101 days ago

                                              Unless the person you're replying to thinks you are personally currently maintaining such equipment, that's a general 'you'.

                                              "And even if there weren't, if you've been thinking of the airgap as a "solution" and not keeping up with patches, ..."

                                              Nobody here is going to say airgap and done, but out in the wild they will certainly deprioritize updates on airgapped equipment.

                                              • ziddoap 101 days ago

                                                Well I mean, I said it's a solution. They said it is not a solution, in direct response to what I specifically had said, followed by direct responses to the rest of my statement. The entire comment seems to be directed at what I said, hinging on my use of "solution".

                                                Perhaps the 'you' was intended to be generalized. I interpreted it as directed at me, since the entirety of the comment is directed at me. Maybe I'm mistaken.

                                                The joys of trying to have meaningful conversations over text.

                                              • closeparen 101 days ago

                                                If it’s a solution, legislation should just require it. If it’s one of many possible security controls that will each help a bit, we might need more nuanced and local decision making.

                                        • Spooky23 102 days ago

                                          Think beyond your scope. IT is a tool.

                                          The risks of compromise of an anesthesia machine are scary. It’s also scary that without EMR integration, a dosage might be misreported or an allergy missed.

                                          It’s possible to securely segment a network to defend against these types of risks. The bigger problem here is that the professional practice of IT is such a garbage fire that it’s assumed the LAN is compromised and airgapping is the responsible choice.

                                          • Kalium 102 days ago

                                            > So, how would one be productive about it?

                                            One might start by observing that there are reasons these things aren't air-gapped. A person could go on to note which of these reasons continue to apply and are considered compelling by those who make the relevant decisions, and thus that the lack of air-gapping is likely to persist.

                                            • ziddoap 102 days ago

                                              I wasn't being facetious with my question; it was made in good faith. As I'm not familiar with hospital infrastructure, nor hospital equipment, could you explain to me what reason (other than updates, which is being debated here) there is for an anesthesia machine not to be air-gapped?

                                              • Kalium 102 days ago

                                                Updates, remote monitoring, and remote management all come to mind. Further, integration with other instruments is something that is sometimes considered valuable, and that is difficult to do when everything is airgapped from everything else.

                                                • jacquesm 102 days ago

                                                  Updates should not happen while a patient is under treatment using the machine. Remote monitoring can be done through an airgap (one way optical bridge) that does not have the ability to influence the machine. Remote management while a patient is being treated sounds wildly irresponsible.

                                                  So maybe the system could be connected to the network except for when it is treating a patient. A big red slider marked 'On Air/Isolated' could be present, which would lock out the patient-treatment functions as soon as the machine is networked. Now, this would still leave some gaps: an update could be faulty, or a malicious actor could install something that triggers only after a while or when the machine is used to treat a patient. But it would remove a lot of the concern I have with equipment like this being online all the time.

                                          • ska 102 days ago

                                            This is true, and important (air-gapping isn't, for the most part, going to happen)

                                            Specifically speaking to medical devices, there is a bit of systemic bias. Early hospital networks were isolated, and this led to some naive thinking about security (e.g. the DICOM protocol has no auth provisions). Many early machines weren't connected to anything at all; then they were, but only to "safe" networks, so not much care was taken. Development lifecycles on complicated devices are incremental, sometimes over decades. But now hospitals etc. have lots of pressure to connect all the things, for all kinds of good and productive reasons, and air-gapping just isn't a realistic solution.

                                            It's not an easy problem to solve, especially quickly.

                                            You are starting to see some partial hardening of devices, which will probably be the practical solution.

                                            • jacquesm 102 days ago

                                              More to the point: almost every safety device installed in medical systems can be overridden during emergencies.

                                              So even if you can't normally access certain records or data, if you claim it is an emergency you suddenly can!

                                              • ska 101 days ago

                                                This isn't true of most hardware devices, at least; there are emergency shutdown procedures, etc. But, to the original question, there isn't an "emergency override" that would let me change the dose or override the interlocks on received dose.

                                                There are sometimes calibration modes or research keys that let you do odd things to, say, imaging machines. But there are also processes in place to disallow clinical use.

                                            • merlincorey 102 days ago

                                              > No, software written in C is and will continue to be replaced by software written in modern memory safe languages; the trend is strong and growing, to the point where we tend to look askance at (1) new software written in C and (2) popular software written in C that doesn't need to be written in C.

                                              When will we get an ANSI standard for one of these newfangled languages?

                                              Go is captive to Google, and Rust is more akin to C++ than C; neither has an actual standard with competing compilers.

                                              Until that changes, I think plenty of new software will be written in C, especially freestanding and embedded software.

                                              • tptacek 102 days ago

                                                Rust will displace C in embedded applications, and Mozilla will gradually POC out its displacement in browser software as well. Swift is and will continue to displace C/ObjC in client application software on Apple platforms. Java is displacing C on other mobile platforms. Go is already displacing C in serverside software. And, of course, languages like Python have overwhelmingly replaced most of what C used to be used for in conventional CRUD applications.

                                                C is on the way out. It's not gone, but it's going. There will not be a revitalization of the language. It'll be good to know, because some things (OS kernels and device drivers) will continue to be written in C for the foreseeable future. But that's a very small niche relative to the industry as a whole.

                                                Nobody cares, and nobody should care, about ANSI standards.

                                                • nsajko 102 days ago

                                                  Why would you use Rust on embedded hardware? Seems like C is a better match for embedded considering it has no borrow checker to complicate things, and the borrow checker is less useful when you do not have a heap.

                                                  • arcticbull 102 days ago

                                                    The borrow checker isn't nearly as big a deal as you make it out to be from a complexity perspective. You have to take some time to learn the paradigm, but once you do, it's easy. The challenge is that people assume it writes like C because it looks like C, and they are frustrated when it doesn't. After a couple of months you'll realize the borrow checker is your friend; you just need to feed it code in a way it understands.

                                                    The borrow checker is great for all sorts of things, like ensuring that you only have a single mutable reference or an arbitrary number of immutable references to objects (to prevent corruption) and for the pseudo-threaded model interrupts entail. Memory safety matters in embedded systems, too. You can encode state cleanly with ADT enums [2] where the language can statically detect invalid state transitions.

                                                    It's also a much more expressive language, providing you with great abstractions at no added cost. The ergonomics are pretty great compared to dealing with C once your peripherals / register accesses are wrapped, as they often already are. For instance, check out the stm32 package [1].

                                                    Being able to rely on the compiler more means you get correct code faster, and debugging on embedded systems can be absolute hell. Anything that helps me avoid it is a massive win in my books.



                                                    • 0815test 102 days ago

                                                      First of all, the borrow checker is useful any time you have data that's potentially being accessed from multiple "places" in the code. At a bare minimum, it forces you to mark such data with type constructors such as Cell<> that make it clear where error-prone shared state can occur. Secondly, Rust is an appealing alternative to C++ as a "better C" language. C++ used to be quite acceptable as such, but recent C++ standards come with a huge amount of incidental complexity, so moving to something cleaner can make sense at least for "greenfield" projects.

                                                      • tptacek 102 days ago

                                                        Because Rust is memory-safe and C isn't. I agree that C is easier to write on embedded platforms and kind of enjoy writing that kind of code myself. But shipping a product requires more than just writing the code; it also entails all the verification work you have to do after the code is working to keep the product in the market, and the verification burdens on embedded software are only going to increase as people generally get more clueful about security. Memory-safe languages will (if they aren't already) be a less expensive way to get a product to market.

                                                        • gmueckl 102 days ago

                                                          Protip: look at the tooling requirements for safety-critical software (aviation, automotive, industrial, medical...). Nobody is going to allow you to use tools that are not certified for these use cases. Certification of compilers, static checkers, etc. is a huge undertaking that only a few commercial vendors perform on their offerings. Open source is out because nobody even attempts to run it through that process. What you are left with is highly priced toolchains for Ada, C, or C++. The cheapest offerings I know of start in the high four figures per developer. Validating a new toolchain would likely run to the high six or even seven figures. Don't hold your breath.

                                                          • steveklabnik 102 days ago
                                                            • gmueckl 101 days ago

                                                              Interesting. This is indeed a massive undertaking as it implicitly covers a lot of LLVM as well. It might take a couple of years until a core language is certified.

                                                              • Argorak 96 days ago

                                                                LLVM got covered as part of ARM switching their toolchain to Clang. (FWIW, I'm part of Ferrous Systems.)

                                                            • tptacek 102 days ago

                                                              I've done professional validation work in automotive, industrial (and utilities), and medical (I'm a low-level C vulnerability researcher and have been since 1995), and believe you are simply wrong.

                                                              There are, no doubt, a number of niche systems that require specific toolchains. There are, in our fallen world, systems that require Ada or even particular variants of C. If you want to tell me that aviation flight control systems are such a niche, I will believe you --- I've never had to assess one.

                                                              But it is not the case that industrial computing or medical device software is locked into memory-unsafe languages due to industry-wide certification requirements; in fact, that's something I know not to be true from specific experience. And virtually all of the embedded systems I've had to assess over the years would have benefited, commercially, from a memory-safe implementation language.

                                                              • gmueckl 101 days ago

                                                                You surely must be aware of IEC 61508 and ISO 26262 if you work in that field. These govern automotive software and industrial automation (the latter has no domain-specific standard). It is easily verified that these standards are adhered to in practice; I worked on IEC 61508-compliant systems. And all these standards require that the tools used for compiling, verifying, and testing the software are themselves tested and certified to be correct. This certification is performed by a Notified Body. It is mandated by law for medical systems and is the standard procedure for the rest.

                                                                This is a major barrier to entry for new programming languages in these markets. Note that I am not saying that improved memory safety wouldn't be useful in embedded software. But the market is so conservative in parts that real uptake is at least a decade or two away.

                                                              • littlestymaar 99 days ago

                                                                You would be very disappointed if you knew how medical software is made. For instance, most of it can crash without issue (and in practice, it crashes a lot!): all you need to do is assess the risks that could result from a crash and mitigate them (for instance, the device should be designed so that it stops doing anything as soon as its software crashes).

                                                                I was a contractor for a medical devices company for a year, and the processes were light-years away from what you are describing. Nothing even forces you to write and run unit tests.

                                                                Yes, that's scary.

                                                                • gmueckl 98 days ago

                                                                  Sounds like one of my previous employers.

                                                                  The next company I worked at was industrial, not medical, and it did things right. So while there are black sheep out there, not all companies are.

                                                                  Side note: whether a device can fall back to becoming inert on a failure depends on the type of device and the specific risks involved with that failure mode. Simply ceasing all operation may be the wrong thing to do, e.g. in a blood pump.

                                                                • adrianN 102 days ago

                                                                  All those standards have loopholes that allow you to use tools without certification, provided you do the necessary due diligence. I know because I used to develop safety-critical software using tools whose only merit was that they were in use before the standards required certification. Basically, you run a standardized test suite and write a document. You don't even have to pass all the tests; you just have to document why the test failures don't affect the safety of your product.

                                                                  Since many people are interested in using Rust for such applications, there are efforts underway to stabilize the compiler and do the necessary paperwork so that not every company needs to do it themselves.

                                                                  • gmueckl 102 days ago

                                                                    These loopholes are slowly but surely being removed. DO-178 for aviation is the first standard to do so. I am certain that the others will follow.

                                                                    What test suite are you talking about? I am really curious, because it would completely upset the whole industry if what you said were true for the toolchain.

                                                                    • adrianN 101 days ago

                                                                      I unfortunately don't remember where we got that test suite, but it was a fairly basic set of C++ standards-compliance tests that mainly checked that the standard library was implemented correctly. We developed according to EN 50128.

                                                              • gmueckl 102 days ago

                                                                I fail to see how a borrow checker can be even remotely useful when there is only one lifetime for everything: until the power to the device is shut off. The only memory-related issues you can have then are out-of-bounds accesses and uses of uninitialized or stale data.

                                                                • dralley 102 days ago

                                                                  Session types: managing hardware state via the type itself, such that it is impossible to write code that does something with, e.g., an IO pin unless the pin has first been properly set up, and this is guaranteed at compile time with no runtime checks.

                                                                  Guarding against mutable aliasing with respect to hardware state could be enormously valuable.
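
                                                                  As a sketch of the idea (all names here are illustrative, not any real HAL's API; crates like embedded-hal model pins in a similar spirit), the pin's configuration state can be encoded in a type parameter, so driving an unconfigured pin is rejected at compile time rather than checked at runtime:

```rust
use std::marker::PhantomData;

// Marker types for the pin's configuration state.
struct Unconfigured;
struct Output;

// A hypothetical GPIO pin; `id` stands in for a hardware register address.
struct Pin<State> {
    id: u8,
    _state: PhantomData<State>,
}

impl Pin<Unconfigured> {
    fn new(id: u8) -> Self {
        Pin { id, _state: PhantomData }
    }

    // Consumes the unconfigured pin, returning one in the Output state.
    fn into_output(self) -> Pin<Output> {
        Pin { id: self.id, _state: PhantomData }
    }
}

impl Pin<Output> {
    // Only callable once the pin has been configured as an output.
    fn set_high(&mut self) -> u8 {
        self.id // stand-in for the actual register write
    }
}

fn main() {
    let pin = Pin::<Unconfigured>::new(7);
    // pin.set_high(); // would not compile: method doesn't exist in this state
    let mut pin = pin.into_output();
    println!("pin {} driven high", pin.set_high());
}
```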

                                                                  • gmueckl 101 days ago

                                                                    How is a solution using a session type different from what you get with a C++ object with a constructor?

                                                                    I don't quite understand what you are saying in your second paragraph.

                                                                  • cortesoft 102 days ago

                                                                    What are you talking about? You can certainly have memory-related errors on an embedded system, and the lifetime of memory can often be shorter than the device's full power-on cycle: you can use memory within a function that is not needed outside of it, etc.

                                                                    Borrow checking is unrelated to whether you are on an embedded device or not.

                                                                    • gmueckl 101 days ago

                                                                      If you need reliability and long uptime in an embedded device, dynamic memory management is shunned because it is detrimental to both. The probability of fatal memory fragmentation from bad allocation patterns is too high.
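
                                                                      The style this leads to, sketched below with illustrative types: every buffer is statically sized and allocated up front, so there is no heap and nothing to fragment. For example, a fixed-capacity ring buffer for sensor samples, assuming overwrite-oldest semantics:

```rust
// A fixed-capacity ring buffer that never allocates: capacity is a
// const generic, storage is a plain array, so fragmentation is impossible.
struct RingBuffer<const N: usize> {
    buf: [u32; N],
    head: usize,
    len: usize,
}

impl<const N: usize> RingBuffer<N> {
    fn new() -> Self {
        RingBuffer { buf: [0; N], head: 0, len: 0 }
    }

    // Overwrites the oldest sample when full, as a monitoring loop might.
    fn push(&mut self, v: u32) {
        let tail = (self.head + self.len) % N;
        self.buf[tail] = v;
        if self.len < N {
            self.len += 1;
        } else {
            self.head = (self.head + 1) % N;
        }
    }

    fn pop(&mut self) -> Option<u32> {
        if self.len == 0 {
            return None;
        }
        let v = self.buf[self.head];
        self.head = (self.head + 1) % N;
        self.len -= 1;
        Some(v)
    }
}

fn main() {
    let mut rb: RingBuffer<4> = RingBuffer::new();
    for v in 0..6 {
        rb.push(v); // samples 0 and 1 get overwritten
    }
    assert_eq!(rb.pop(), Some(2));
    assert_eq!(rb.pop(), Some(3));
}
```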

                                                                    • pcwalton 99 days ago

                                                                      This is a misunderstanding of how Rust's borrow checker works. It is necessary for avoiding numerous memory-safety issues, not just use-after-free. For example:

                                                                          let mut x: Result<i32,f32> = Ok(1);
                                                                          let y = x.as_ref().unwrap();
                                                                          x = Err(1.0);
                                                                          println!("{}", y); // unsafe cast of float to int

                                                                      This issue was first described, to my knowledge, by Dan Grossman in "Existential Types for Imperative Languages". The context was trying to make unions work in a safe dialect of C (Cyclone).

                                                                      • tptacek 102 days ago

                                                                        I don't know why we're arguing about the utility of the borrow checker in static-memory systems, since the borrow checker is obviously not the only difference between Rust and C in that setting; we talk about Rust in embedded systems because it doesn't have a heavyweight runtime that (in practice) requires garbage collection to function, and so it is suitable for environments where Java and Go aren't tenable.

                                                                        • gmueckl 101 days ago

                                                                          We are not arguing those points because they were not brought up, and they do not really distinguish Rust from other systems languages.

                                                                  • nsajko 102 days ago

                                                                    Go has a battle-tested specification and multiple implementations.

                                                                    • arcticbull 102 days ago

                                                                      There's llgo [1] too, which in my opinion is how Go should have been developed in the first place, instead of on the decrepit Plan 9 toolchain, but hey, at least we have it, haha.


                                                                    • AnimalMuppet 102 days ago

                                                                      > Rust is more akin to C++ than C -- neither have an actual standard with competing compilers.

                                                                      C++ doesn't have an actual standard? Are you sure about that?

                                                                      • merlincorey 102 days ago

                                                                        Rust and Go is what I was referring to in that case -- sorry for the confusion.

                                                                        Yes, both C and C++ have standards, and I consider that to be a good thing, though apparently tptacek says I shouldn't care.

                                                            • AnimalMuppet 102 days ago

                                                              There are two problems with airgapping. First, how does the machine get updates? Do you mail a DVD to the anesthesia techs? Do they know how to install such a thing, or are they going to mess up the machine in the attempt? Or do you roll a service tech in a truck, to every machine installed worldwide? (Yes, I know: if the alternative to rolling trucks is letting J Random Hacker play with your machine while it's keeping a patient alive, you'd better roll trucks. But updates are one reason why stuff winds up connected to the net rather than airgapped.)

                                                              The second reason why stuff is not airgapped is that it almost certainly connects to the hospital's patient records system. They have to keep a record of everything that happened until all potential lawsuits time out. Just in case there are complications from the surgery, they need all the records from the anesthesia machine uploaded after every use. So the anesthesia machine has to be on the same network as the patient records system - and so does every other medical device in the entire hospital. That network should be completely isolated from outside, but to do so, you have to airgap the network, not just one machine. Yes, it should be done, but that's harder to do, and harder to maintain.

                                                              For that matter, I've wondered, when you visit someone in the hospital, if you plug your laptop into the ethernet jack in their room and start looking around, what do you see?

                                                              • kazinator 102 days ago

                                                                > First, how does the machine get updates?

                                                                Like every non-networked digital camera I've ever owned: put a fwupdate.bin file on an SD card, plug it in, and run some procedure on the device.

                                                                If a DVD is used, the techs can locally burn the .iso image onto a blank disc as an alternative to getting it in the mail.

                                                                Logs can also be gathered from the machine on removable media.

                                                                Medical records can be made available via some non-airgapped laptop. Maybe something can be integrated into the machine, but control of the actually dangerous parameters for therapy or anaesthesia should be air-gapped: manual entry only.

                                                                Operators can still be duped into entering malicious values manually, but at least there is a fighting chance for some oversight.

                                                                • Spooky23 102 days ago

                                                                  So, by removing visibility, monitoring and discovery, you would improve oversight by having nurses transcribe data into machines and techs service them with manual USB keys?

                                                                  Unless the IT team consists of Santa’s elves, that’s just not going to happen.

                                                                  • gruez 102 days ago

                                                                    >[...] put some fwupdate.bin file on a SD-card, [...]

                                                                    >[...] the techs can locally burn an .iso image onto a blank [...]

                                                                    >Operators can still be duped into entering malicious values manually, but at least there is a fighting chance for some oversight.

                                                                    Or something like Stuxnet.

                                                                  • etimberg 102 days ago

                                                                    Send a field engineer around. These machines cost a lot of money; surely the cost of having someone come out and perform updates wouldn't be crazy.

                                                                    • est31 102 days ago

                                                                      The record keeping can be done via data diodes. As for the updates, they can be done by trained hospital staff; it doesn't even have to be medical staff. Remote updates give the manufacturer remote-kill abilities, and I don't think that such a concentration of power is a good idea. For governments you kind of need it, but outside of them there is no need.

                                                                      • craigsmansion 102 days ago

                                                                        > First, how does the machine get updates?

                                                                        By eliminating updates. Why isn't software that runs on such machines proven correct? Yes, it's expensive, but medical equipment commands a premium price since it's supposed to meet stringent standards regardless. There's no reason not to raise the standards for the software it runs.

                                                                        • AnimalMuppet 102 days ago

                                                                          Proven correct? No. It's extensively verified and validated, but that's not the same thing.

                                                                          First, what are you going to prove? That it does what it's supposed to? No, because you have to have some formal definition of "what it's supposed to do", and that definition can itself be wrong (see MCAS for an example: the software worked; the problem was that the spec was insane).

                                                                          Second, as somebody (Donald Knuth?) said, it's amazing how many bugs there can be in proven-correct (or formally verified) software. Does your proof prove against all possible bugs? No. Against all possible security issues? Still no.

                                                                          Third, even with those limitations, proven correct software is much harder (slower, and more expensive) to write, because the proof is really long. You want medical devices to have proven correct software? That would be nice. But we'd have maybe 2% of the medical devices that we have now, just because the proofs are so hard. That would be less nice. In particular, more people would die in that scenario from the absence of medical devices, than die in the existing scenario from bugs in them. That's less than helpful.

                                                                          • craigsmansion 102 days ago

                                                                            > That it does what it's supposed to?

                                                                            ...and that it doesn't do what it's not supposed to do.

                                                                            As for your second point, if you could provide some background reading for that, I would appreciate it. I know of Knuth jokingly stating "Beware of bugs in the above code; I have only proved it correct, not tried it", but if he or someone else wrote a more serious paper on the occurrence of software errors in proved software, I would appreciate a link. A proof should prove against all software errors--if it doesn't I'm eager to learn more.

                                                                            As for your third point: software errors are a solved problem, but as you noted, the solution is hard (expensive). The current solutions may also limit functionality, so it's unfortunately not applicable everywhere, but some classes of software should warrant it.

                                                                            In my view, medical devices that can directly interact with a patient's physiology are one of them.

                                                                            • AnimalMuppet 101 days ago

                                                                              I don't have any links to offer. The best I can do is an argument. Here's a tool or method that lets you prove that there are no security holes. Now here comes someone who identifies a completely new class of exploits. Does the proof prove that the software is immune to the new class of exploit? Maybe - it depends on the details of the new class of exploit, and on the details of the proof.

                                                                              So the proof that software is secure may be invalidated by new attacks. And this is (part of) why "proof against all software errors" is... let us say overly optimistic.

                                                                              Even when it's not security issues... do you have a list of all possible software errors, so that you can prove that none of them occur? I don't. I'm not sure that anyone does. I suspect that there's always some kind of error that isn't covered.

                                                                              • craigsmansion 101 days ago

                                                                                I suspect we're referring to different types of proof here. The sort of formal proof I refer to guarantees that the software will never display unspecified behaviour: your procedures are now backed by mathematical definitions with known properties.

                                                                                It's not about security. It's about the software functioning correctly so no functional updates are necessary, diminishing the need for remote updates and all the security problems that bring along.

                                                                                Even if remote access is needed to conveniently store various data in a different location there is no need to run everything on the same level. Smartphones are mass-produced computers that separate concerns with having a baseband and a main cpu. There is no reason a medical device could not have such a separation. One OS to keep the device working correctly under all circumstances, one OS to slap networking and interfacing on.

                                                                                I don't expect the medical technology industry to change quickly, but as things are they are adopting bad practices of software development and moving in the wrong direction.

                                                                                "If problems pop up, we'll update it later remotely" should not be an acceptable mindset for some classes of computer equipment.

                                                                        • idbehold 102 days ago

                                                                          Both those problems are not my problem. I am perfectly fine with putting those problems on the hospitals and medical device manufacturers.

                                                                          • Kalium 102 days ago

                                                                            Their solution has been connectivity.

                                                                            What now?

                                                                            • snlnspc 102 days ago

                                                                              The hospital's IT staff downloading and preparing update media, which was the same process as for every other device up until connectivity became their solution.

                                                                              • Kalium 102 days ago

                                                                                It may be worth bearing in mind that historically, this often meant updates not going out at all.

                                                                                • adrianN 102 days ago

                                                                                  Which is not that big of a deal if the updates aren't there to fix wormable holes. And in the area of safety, you can encourage people to update their machines by imposing dramatic fines for non-compliance.

                                                                              • idbehold 102 days ago

                                                                                I think I've been pretty clear that it isn't my problem.

                                                                            • seiferteric 102 days ago

                                                                              Hospitals have IT people.

                                                                              • wl 102 days ago

                                                                                > They have to keep a record of everything that happened until all potential lawsuits time out.

                                                                                More importantly, these records often have clinical significance.

                                                                                • sdinsn 102 days ago

                                                                                  > Do you mail a DVD to the anesthesia techs? Do they know how to install such a thing

                                                                                  That sounds like an easy solution to me. God forbid people need to learn something new for their job.

                                                                                • fifteenforty 102 days ago


                                                                                  Anaesthesia machine control should be air-gapped.

                                                                                  Unfortunately, in the modern world of electronic medical records, we do need access to a stream of measurement data. Historically this has been obtained using one-way serial port data streams, which is probably safer than having a network stack accessible to the web.

                                                                                  The users (anaesthesiologists and hospital administrators) don’t understand these problems. (I’m an anaesthesiologist)

                                                                                  • ryacko 102 days ago

                                                                                    Could still be achieved through low data rate UDP with manual acknowledgements of sent data, with inbound data blocked.
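
                                                                                    As a rough sketch of that scheme (addresses, port, and datagram layout are placeholders, not anything from the thread): the telemetry socket only ever sends numbered readings and never reads from the network; acknowledgement happens out of band, e.g. a human checking the receiving console. The demo binds a local receiver purely so the datagrams have somewhere to land.

```rust
use std::net::UdpSocket;

// Sends each reading as [4-byte sequence number][4-byte f32], tx-only.
// `target` is the monitoring host's address (placeholder in this sketch).
fn send_readings(target: &str, readings: &[f32]) -> std::io::Result<u32> {
    let sock = UdpSocket::bind("127.0.0.1:0")?; // ephemeral local port
    let mut seq: u32 = 0;
    for r in readings {
        let mut pkt = seq.to_be_bytes().to_vec();
        pkt.extend_from_slice(&r.to_be_bytes());
        sock.send_to(&pkt, target)?; // send only; we never call recv
        seq += 1;
    }
    Ok(seq) // number of datagrams sent
}

fn main() {
    // Stand up a throwaway local receiver for demonstration purposes.
    let rx = UdpSocket::bind("127.0.0.1:0").expect("bind failed");
    let target = rx.local_addr().expect("no addr").to_string();
    let sent = send_readings(&target, &[36.8, 36.9, 37.0]).expect("send failed");
    let mut buf = [0u8; 16];
    let (n, _) = rx.recv_from(&mut buf).expect("recv failed");
    assert_eq!(sent, 3);
    assert_eq!(n, 8); // 4-byte sequence number + 4-byte reading
    println!("sent {} datagrams", sent);
}
```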

                                                                                    • Gibbon1 102 days ago

                                                                                      I think it's a systems issue, not really a software one. You need some sort of gap between the control unit (which can kill people) and the monitoring system (which shouldn't be able to).

                                                                                      It would be perfectly reasonable to pass monitoring data from the control unit via a tx-only channel to a networked monitoring device, for instance using a digital optocoupler to galvanically isolate the two units.

                                                                                      • ryacko 101 days ago

                                                                                        I believe it would be cheaper, and thus more likely to be adopted, if tx-only were implemented in software, such as with an iptables rule.

                                                                                        Hardware tx-only would only make sense if you avoid side channels as well.
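
                                                                                        For illustration, a software tx-only policy of that sort might look like the following netfilter rules; the interface name and port are placeholders, and a real deployment would also need to consider ICMP and ARP behaviour:

```shell
# Hypothetical tx-only policy on a dedicated monitoring interface (eth1):
# allow outbound UDP telemetry, then drop everything else in both directions.
iptables -A OUTPUT -o eth1 -p udp --dport 514 -j ACCEPT
iptables -A OUTPUT -o eth1 -j DROP   # nothing else leaves
iptables -A INPUT  -i eth1 -j DROP   # nothing at all comes in
```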

                                                                                      • sjg007 102 days ago

                                                                                        manual ack udp? What?

                                                                                    • wl 102 days ago

                                                                                      These machines cannot be airgapped. The anesthesia record from these machines populates into the patient's electronic medical record.

                                                                                      A data diode would be appropriate, though.

                                                                                      • bsder 102 days ago

                                                                                        No, the problem is that these machines aren't read-only.

                                                                                        There should be zero way to do anything remotely to these machines without someone at the local console pressing a button.

                                                                                        • kazinator 102 days ago

                                                                                          If you look at the application and network stack from top to bottom, the implementation of a read-only operation (to get something from the device) is rife with mutations of memory and machine state. Everything from receiving the datagram in the driver, to passing it through the higher layers, to forming the response. Network stacks have been attacked at the data link or network layer with maliciously crafted packets.

                                                                                          • bsder 101 days ago

                                                                                            You can use things like broadcast-only CAN from the anesthesia apparatus to the network server. Now, you can compromise the network server, but the hardware link prevents your system from being able to do anything else.

                                                                                            In fact, if I had to design such a system, this is how I would do it: network server, GUI server, and anesthesia administration would be 3 very separate "instances" with hard communication links in between. The GUI->apparatus link would be bidirectional (probably CAN, for reliability), and the GUI->network server link would be a separate, unidirectional link only from the GUI to the network server (probably CAN again).

                                                                                            It would simply be far too difficult to verify the reliability of a system that had the network, the console UI and the anesthesia administration apparatus commingled.

                                                                                            • mrob 102 days ago

                                                                                              Why do you need a network stack? Send the data over an RS-422 connection with the RX lines physically disconnected. Send a sequence number and checksum with each transmission so that data loss and corruption can be detected, and store a local copy of everything you send so that it can be retrieved manually if something goes wrong with the transmission (which is unlikely, because RS-422 is simple and reliable).
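
                                                                                              A minimal sketch of such a frame format (the Fletcher-16 checksum and field layout are illustrative choices, not anything the comment specifies; a real device would likely use a CRC):

```rust
// Fletcher-16 over the frame body: cheap to compute, catches most corruption.
fn fletcher16(data: &[u8]) -> u16 {
    let (mut sum1, mut sum2) = (0u16, 0u16);
    for &b in data {
        sum1 = (sum1 + b as u16) % 255;
        sum2 = (sum2 + sum1) % 255;
    }
    (sum2 << 8) | sum1
}

// Frame layout: [seq: 4 bytes, big-endian][payload][checksum: 2 bytes].
fn frame(seq: u32, payload: &[u8]) -> Vec<u8> {
    let mut out = seq.to_be_bytes().to_vec();
    out.extend_from_slice(payload);
    let ck = fletcher16(&out);
    out.extend_from_slice(&ck.to_be_bytes());
    out
}

// Receiver side: verify the checksum, then detect sequence gaps (lost data).
fn check(buf: &[u8], expected_seq: u32) -> Result<&[u8], &'static str> {
    if buf.len() < 6 {
        return Err("short frame");
    }
    let (body, ck_bytes) = buf.split_at(buf.len() - 2);
    let ck = u16::from_be_bytes([ck_bytes[0], ck_bytes[1]]);
    if fletcher16(body) != ck {
        return Err("corrupt frame");
    }
    let seq = u32::from_be_bytes([body[0], body[1], body[2], body[3]]);
    if seq != expected_seq {
        return Err("sequence gap: data was lost");
    }
    Ok(&body[4..])
}

fn main() {
    let f = frame(42, b"etCO2=35"); // hypothetical monitoring reading
    assert_eq!(check(&f, 42), Ok(&b"etCO2=35"[..]));
    assert_eq!(check(&f, 43), Err("sequence gap: data was lost"));
}
```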