Does memory leak? (1995)

(groups.google.com)

503 points | by rot25 1524 days ago

33 comments

  • derefr 1524 days ago
    Erlang has a parameter called initial_heap_size. Each new actor-process in Erlang gets its own isolated heap, for which it does its own garbage-collection on its own execution thread. This initial_heap_size parameter determines how large each newly-spawned actor’s heap will be.

    Why would you tune it? Because, if you set it high enough, then for all your short-lived actors, memory allocation will become a no-op (= bump allocation), and the actor will never experience enough memory-pressure to trigger a garbage-collection pass, before the actor exits and the entire process heap can be deallocated as a block. The actor will just “leak” memory onto its heap, and then exit, never having had to spend time accounting for it.

    This is also done in many video games, where there is a per-frame temporaries heap that has its free pointer reset at the start of each frame. Rather than individually garbage-collecting these values, they can all just be invalidated at once at the end of the frame.

    The usual name for such “heaps you pre-allocate to a capacity you’ve tuned to ensure you will never run out of, and then deallocate as a whole later on” is a memory arena. See https://en.wikipedia.org/wiki/Region-based_memory_management for more examples of memory arenas.
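
    To make the shape of this concrete, here is a minimal bump-allocator arena in C (an illustrative sketch, not any particular engine's or runtime's code):

      #include <stddef.h>

      typedef struct {
          unsigned char *base;   /* the pre-allocated block */
          size_t cap;            /* tuned capacity; must never be exceeded */
          size_t used;           /* the bump pointer */
      } Arena;

      void *arena_alloc(Arena *a, size_t n) {
          n = (n + 15) & ~(size_t)15;             /* keep returned pointers aligned */
          if (a->used + n > a->cap) return NULL;  /* capacity was tuned too low */
          void *p = a->base + a->used;
          a->used += n;                           /* allocation is just a bump */
          return p;
      }

      /* "Deallocate as a block": reset at end of frame, or when the actor exits. */
      void arena_reset(Arena *a) { a->used = 0; }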

    • dahart 1523 days ago
      The games and GPU apps I’ve worked on use memory pools for small allocations, where there will be individual pools for all, say, 1-16 byte allocations, 16-64 byte allocations, 64-256 byte allocations, etc. (Sizes just for illustration, not necessarily realistic). The pool sizes always get tuned over time to match the approximate high water mark of the application.

      I think pools and arenas mean pretty much the same thing. https://en.wikipedia.org/wiki/Memory_pool I’ve mostly heard this discussed in terms of pools, but I wonder if there’s a subtle difference, or if there’s a historical reason arena is popular in some circles and pool in others...?

      I haven’t personally seen a per-frame heap while working in console games, even though games I’ve worked on probably had one, or something like it. Techniques I did see, and that are super common, are fixed maximum-size allocations: just pre-allocate all the memory you’ll ever need for some feature and never let it go; stack allocations, sometimes with alloca(); and helper functions/classes that put something on the stack for the lifetime of a particular scope.

      • twoodfin 1523 days ago
        I’ve always understood an arena to use a bump pointer for allocation and to support only a global deallocation, as the GP describes.

        A pool or slab allocator separates allocations into one of a range of fixed-size chunks to avoid fragmentation. Such allocators do support object-by-object deallocation.
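
        To illustrate the distinction, a minimal fixed-size pool with an intrusive free list might look like this in C (names and sizes are illustrative; a real allocator would keep one such pool per size class):

          #include <stddef.h>

          enum { CHUNK = 64, NCHUNKS = 1024 };   /* every chunk is the same size */

          static _Alignas(void *) unsigned char storage[CHUNK * NCHUNKS];
          static void *free_head;

          void pool_init(void) {                 /* thread every chunk onto the list */
              free_head = NULL;
              for (size_t i = 0; i < NCHUNKS; i++) {
                  void **slot = (void **)(storage + i * CHUNK);
                  *slot = free_head;
                  free_head = slot;
              }
          }

          void *pool_alloc(void) {               /* pop; O(1), no fragmentation */
              void *p = free_head;
              if (p) free_head = *(void **)p;
              return p;
          }

          void pool_free(void *p) {              /* push back: per-object free works */
              *(void **)p = free_head;
              free_head = p;
          }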

      • monocasa 1523 days ago
        I've seen Jason Gregory talk about per frame arenas in Game Engine Architecture as a fundamental piece of how the Naughty Dog engines tend to work.

        Totally agreed that they aren't required for shipping great console games (and they're really hard to use effectively in C++, since you're pretty much guaranteed to end up with dangling references unless you have ascetic levels of discipline). This is mainly just meant as a "here's an example of how they can be used, and are by at least one shop".

        • foota 1523 days ago
          Seems like this could be handled with a wrapper type with runtime checks during debug?

          Like make any pointer to the per frame allocation be a TempPointer or something and then assert they're all gone with a static count variable of them? Then you just have to be cautious whenever you pass a reference to one or convert to a raw pointer.

          I don't think this would be too awful for performance in debug builds.

          • monocasa 1523 days ago
            Yeah, or a generation system where the pointer holds a frame count too that's asserted on deref.

            The point though is that it's a step back still from shared_ptr/unique_ptr by becoming a runtime check instead of compile time.
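
            A sketch of that generation idea in C (a hypothetical debug aid, not any shipping engine's code):

              #include <assert.h>

              unsigned g_frame;   /* incremented once per frame by the main loop */

              /* Per-frame pointer that remembers the frame it was created in;
                 with NDEBUG the check compiles away. */
              typedef struct { void *p; unsigned frame; } TempPtr;

              static inline void *temp_deref(TempPtr t) {
                  assert(t.frame == g_frame && "stale per-frame pointer");
                  return t.p;
              }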

    • monocasa 1524 days ago
      So I kind of disagree with the idea that arenas are all about deallocation at once. There's other contexts where you have separate arenas but don't plan on deallocating in blocks, mainly around when you have memory with different underlying semantics. "This block of memory is faster, but not coherent and needs to be manually flushed for DMA", "this block of memory is fastest but just not DMA capable at all", "there's only 2k of this memory, but it's basically cache", "this memory is large, fairly slow, but can do whatever", "this block of memory is non volatile, but memory mapped", etc.

      I'd say that arenas are kind of a superset of both what you and I are talking about.

      • hinkley 1523 days ago
        I can’t remember the last time I read C code, but I do recall a particular time when I was reading a library that had been written with a great deal of attention to reliability. The first thing it did was allocate enough memory for the shutdown operations. That way, on a malloc() failure, it could still do a completely orderly shutdown. Or never start in the first place.

        From that standpoint, you could also categorize arenas on a priority basis. This one is for recovery operations, this one for normal operation, and whatever is left for low priority tasks.

        • bch 1523 days ago
          > The first thing it did was allocate enough memory for the shutdown operations.

          That is clever and beautiful. I’ll have to look for chances to do something similar and see if I can establish a new habit myself.

        • Someone 1520 days ago
          That strategy is more important on systems that don’t do demand paged virtual memory. In Think Class Library on classic Mac OS, it was called the “Rainy day fund”.

          One can also do that in stages:

          - allocate a large block at startup

          - when running out of memory, reallocate it at a smaller size and warn the user

          - when running out of memory again, free the block and attempt an orderly shutdown.
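
          A sketch of those stages in C (names, sizes, and the shutdown hook are hypothetical):

            #include <stdlib.h>

            static void *reserve;                    /* the "rainy day fund" */
            static size_t reserve_size = 1 << 20;    /* 1 MiB, tuned per system */

            static void orderly_shutdown(void) { /* flush, notify, ... */ exit(1); }

            int reserve_init(void) {     /* call at startup; refuse to run on failure */
                reserve = malloc(reserve_size);
                return reserve != NULL;
            }

            void on_allocation_failure(void) {
                if (reserve_size > 4096) {   /* stage 1: shrink the fund, warn user */
                    reserve_size /= 2;
                    void *r = realloc(reserve, reserve_size);
                    if (r) reserve = r;
                } else {                     /* stage 2: free it and shut down cleanly */
                    free(reserve);
                    reserve = NULL;
                    orderly_shutdown();
                }
            }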

      • MaxBarraclough 1523 days ago
        Those aren't arenas. I'm inclined to agree with Wikipedia's definition, which does emphasise deallocation all at once:

        > A region, also called a zone, arena, area, or memory context, is a collection of allocated objects that can be efficiently deallocated all at once.

        • monocasa 1523 days ago
          I mean, wiki uses zone and region here as synonyms, so according to wiki that definition applies just as much. And yet:

          https://www.kernel.org/doc/gorman/html/understand/understand...

          Like, as an embedded developer, these concepts are used pretty much every day. And in a good chunk of those cases, deallocation isn't allowed at all, so you can't say that the definition is about deallocating all at once.

          You can also see how glibc's malloc internally creates arenas, but that's not to deallocate at once, but instead to manage different locking semantics. https://sourceware.org/glibc/wiki/MallocInternals

        • Someone 1520 days ago
          > can be efficiently deallocated all at once.

          There are implementations of arenas that, basically, act as separate memory managers. You can allocate and free memory inside them at will, but can also deallocate the whole thing in one sweep. The latter can be a lot faster, but of course it requires you to know you can throw away all that memory in one go (handling web requests is the exemplar use case).

    • saagarjha 1524 days ago
      Note that most general-purpose allocators also keep around internal arenas from which they hand out memory.
      • catblast 1524 days ago
        Not sure how this is related. A general purpose allocator with a plain malloc interface can’t use this to do anything useful wrt lifetime because there is no correlation to lifetime provided by the interface. Internal arenas can be useful to address contention and fragmentation.
        • saagarjha 1524 days ago
          I'm pointing out that an arena is more about "a region of memory that you can split up to use later" than "a region of memory that must be allocated and deallocated all at once".
    • needusername 1524 days ago
      > memory allocation will become a no-op (= bump allocation)

      No, that's a cache miss.

      • Tuna-Fish 1524 days ago
        No: since memory is allocated linearly, the CPU prefetchers will most likely keep the heap in cache.
        • needusername 1523 days ago
          No, in practice this is not true. You will also need to write out to memory all the newly allocated objects which you don't need anymore.

          https://podcasts.apple.com/us/podcast/hidden-gc-bandwidth-co...

          • imtringued 1523 days ago
            How is that related to memory allocation costs? We are talking about the cost of obtaining a chunk of memory. The cost of actually creating an object is allowed to be much higher because constructors are allowed to execute any arbitrary code.

            Just think about how expensive it would be to allocate a 3d vector consisting of 3 floats with malloc() 20000 times and then later deallocate it. Nobody is worrying about the cost of writing 3 floats to RAM. Everyone is worrying about the cost of malloc traversing a free list and it causing memory fragmentation in the process. Meanwhile the arena allocator would be at least as efficient as using an array of 3d vectors.
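
            A sketch of the comparison in C (sizes illustrative, error handling mostly elided):

              #include <stdlib.h>

              typedef struct { float x, y, z; } Vec3;

              /* 20,000 allocator round-trips now, 20,000 frees later,
                 plus whatever fragmentation they leave behind. */
              Vec3 **alloc_individually(size_t n) {
                  Vec3 **v = malloc(n * sizeof *v);
                  if (!v) return NULL;
                  for (size_t i = 0; i < n; i++)
                      v[i] = malloc(sizeof **v);   /* each call may walk a free list */
                  return v;
              }

              /* The arena/array version: one contiguous block, one free. */
              Vec3 *alloc_as_block(size_t n) {
                  return malloc(n * sizeof(Vec3));
              }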

            • needusername 1522 days ago
              > How is that related to memory allocation costs?

              It's a cost that has to be paid when using bump pointer allocation.

              > We are talking about the cost of obtaining a chunk of memory. The cost of actually creating an object is allowed to be much higher because constructors are allowed to execute any arbitrary code.

              Accessing main memory is about two orders of magnitude slower than accessing L1. For that time you can run a lot of arbitrary code that accesses data in L1 and registers.

              > Just think about how expensive it would be to allocate a 3d vector consisting of 3 floats with malloc() 20000 times and then later deallocate it. Nobody is worrying about the cost of writing 3 floats to RAM. Everyone is worrying about the cost of malloc traversing a free list and it causing memory fragmentation in the process. Meanwhile the arena allocator would be at least as efficient as using an array of 3d vectors.

              malloc doesn't mandate free lists, other implementations exist. It's not about relative costs. OP claimed bump pointer allocation to be a "no-op" when it's clearly not.

  • jakeinspace 1524 days ago
    As somebody working on embedded software for aerospace, I'm surprised this missile system even had dynamic memory allocation. My entire organization keeps flight-critical code fully statically allocated.
    • giu 1524 days ago
      I'm always fascinated by software running on hardware-restricted systems like planes, space shuttles, and so on.

      Where can someone (i.e., in my case a software engineer who's working with Kotlin but has used C++ in his past) read more about modern approaches to writing embedded software for such systems?

      I'm asking for one because I'm curious by nature and additionally because I simply take the garbage collector for granted nowadays.

      Thanks in advance for any pointers (no pun intended)!

      • 0xffff2 1524 days ago
        The embedded world is very slow to change, so you can read about "modern approaches" (i.e. approaches used today) in any book about embedded programming written in the last 30 years.

        I currently work on spacecraft flight software and the only real advance on this project over something like the space shuttle that I can point to is that we're trying out some continuous integration on this project. We would like to use a lot of modern C++ features, but the compiler for our flight hardware platform is GCC 4.1 (upgrading to GCC 4.3 soon if we're lucky).

        • AlotOfReading 1523 days ago
          Having worked on embedded systems for a decade at this point, the fact that we allow vendors to get away with providing ancient compilers and runtimes is shameful. We know that these old toolchains have thousands of documented bugs, many critical. We know how to produce code with better verification, but just don't push for the tools to do it.
          • Baeocystin 1523 days ago
            Isn't the key part that these older systems have documented bugs?

            Or, to put it another way, if there's a wasp in the room (and there always is), I'd want to know where it is.

            • AlotOfReading 1523 days ago
              That doesn't end up being the case, for a number of reasons. Firstly, no one is actually able to account for all of these known issues a priori. I don't like calling things impossible, but writing safe C that avoids every compiler bug is probably best labeled as that.

              Secondly, vendors make modifications during their release process, which introduces new (and fun!) bugs. You're not really avoiding hidden wasps, just labeling some of them. If you simply moved to a newer compiler, you wouldn't have to avoid them, they'd mostly be gone (or at worst, labeled).

              • Baeocystin 1523 days ago
                Are the newer compilers truly that much better? I've been working in tech since the '90s, and I can't say that for the tools I've used I've noticed any great improvement in overall quality: bugs get swatted and new ones get created at what feels like a constant rate. I am assuming that many optimizations are turned off regardless, to keep the resulting assembly as predictable as possible, but I do not work in the embedded space, so this is perhaps a naive question.
            • imtringued 1523 days ago
              I think the idea is that you don't want a whole wasp nest. Just a bunch of stray wasps.
        • harryf 1524 days ago
          I wonder if the same is true of Space X?
          • monocasa 1523 days ago
            Yeah. AFAIK they use FreeRTOS for the real deeply embedded stuff which would look very familiar to this discussion.
        • cblum 1522 days ago
          If you don’t mind me asking, how could one get into this field if they’re already an experienced software engineer in the more “vanilla” stuff (web services, etc.)?
        • bargle0 1523 days ago
          How do you do CI/CD for embedded systems?
          • jakeinspace 1523 days ago
            CI during the first phases of development in my experience is now often done with modern tooling (gitlab CI, Jenkins), compiling and running tests on a regular Linux x86 build server. Later phases switch over to some sort of emulated test harness, with interrupts coming from simulated flight hardware. Obviously the further along in the development process, the more expensive and slow it is to run tests. Maybe some software groups (SpaceX?) have a process that allows for tight test loops all the way to actual hardware in the loop tests.
          • hyldmo 1523 days ago
            I can't speak for what the rest of the industry does, but some chip manufacturers provide decent emulators, so you can run some tests there. We have also done some hardware tests where we connect our hardware to a raspberry pi or similar and run our CI there. It doesn't replace real-world testing, but it does get us some of the way there.
        • rowanG077 1524 days ago
          I find it interesting that such critical code is written in C. Why not use something with a lot more (easily) statically provable properties, like Rust or Agda?
          • amw-zero 1524 days ago
            You’ll find that for very serious, industrial applications, a conservative mindset prevails. C may not be trendy at the moment, but it powers the computing world. Its shortcomings are also extremely well known and statically analyzable.

            Also, think about when flight software started being written. Was Rust an option? And once it came out, would you expect programmers who are responsible for millions of people’s lives to drop their decades of tested code and development practices to make a bet on what is still a new language?

            What I find interesting is this mindset. My conservativeness on a project is directly proportional to its importance / criticality, and I can’t think of anything more important or critical than software that runs on a commercial airplane. C is a small, very well understood language. Of course it gives you nothing in terms of automatic memory safety, but that is one tradeoff in the list of hundreds of other dimensions.

            When building “important” things it’s important to think about tradeoffs, identify your biases, and make a choice that’s best for the project and the people that the choice will affect. If you told me that the moment anyone dies as a result of my software I would have to be killed, I would make sure to use the most tried-and-true tools available to me.

            • dodobirdlord 1523 days ago
              > Also, think about when flight software started being written. Was Rust an option?

              It wasn't, but Ada probably was (some flight software may have been written before 1980?), and would likely also be a much better choice.

            • spencerwgreene 1523 days ago
              > I can’t think of anything more important or critical than software that runs on a commercial airplane.

              Nuclear reactors?

              • samatman 1523 days ago
                Arguably, the existence of nuclear reactors which don't fail safe under any contemplated crisis is a hardware bug. It's possible to design a reactor that can be ruptured by a bomb or earthquake, which will then dump core into a prepared area and cool down.

                This kind of physics-based safety is obviously not possible for airplanes.

                • imtringued 1523 days ago
                  What triggers the core dump? Humans? Software? Are there detectors integrated into the walls?
                  • samatman 1522 days ago
                    Physical rupture of containment.

                    If all electronics fry simultaneously, then the reactor core cools in-place.

              • amw-zero 1523 days ago
                I should have said “commercial airplanes are among the most important and critical things that use software.” It’s obviously difficult to determine the objective most important use case.
            • rowanG077 1523 days ago
              Ada has existed for 40 years, so preferring C over it has nothing to do with being conservative.
              • amw-zero 1523 days ago
                And how many people know Ada vs. C? Orders of magnitude more know C, right?

                I think that’s the problem here - it’s important to analyze orders of magnitude accurately. C isn’t a little more conservative than Rust or Ada. It is orders of magnitude more conservative.

            • AtlasBarfed 1523 days ago
              You're advocating throwing the baby out with the bathwater.

              Rust interops with C seamlessly, doesn't it? You don't have to throw out good code to use a better language or framework.

              C may be statically analyzable to some degree, but if Rust's multithreading is truly provable, then new code can be Rust and of course still use the tried and true C libraries.

              Disclaimer: I still haven't actually learned any Rust, so my logic is CIO-level of potential ignorance.

              • jschwartzi 1523 days ago
                The issue is that you’re trading a problem space that is very well understood for one that isn’t. Making a safe program in C is all about being explicit about resource allocation and controlling resources. So we tend to require that habit in development. It’s socialized. The only thing you’d be doing is using technology to replace the socialization. And you’d be adding new problems from Rust that don’t exist in the C world.

                It’s tempting in a lot of cases to read the data sheet and determine that the product is good enough. But there are a lot of engineering and organizational challenges that aren’t written in the marketing documents.

                Those challenges have to be searched for and social and technological tools must be developed to solve those challenges.

                As an exercise in use of technology it looks easy but there’s an entire human and organizational side to it that gets lost in discussions on HN.

              • wallacoloo 1523 days ago
                > Rust interops with C seamlessly, doesn't it?

                From someone who works in a mixed C + Rust codebase daily (something like 2-3M lines of C and 100k lines of Rust): yes and no. They're pretty much ABI compatible, so it's trivial to make calls across the FFI boundary. But each language has its own set of guarantees it provides and assumes, so it's easy to violate one of those guarantees when crossing an FFI boundary, triggering UB which can stay hidden for months.

                One of them is mutability: in C we have some objects which are internally synchronized. If you call an operation on them, either it operates atomically, or it takes a lock, does the operation, and then releases the lock. In Rust, this is termed "interior mutability" and as such these operations would take non-mutable references. But when you actually try that, and make a non-mutable variable in Rust which holds onto this C type, and start calling C methods on it, you run into UB even though it seems like you're using the "right" mutability concepts in each language. On the Rust side, you need to wrap the C struct in an UnsafeCell before calling any methods on it, which isn't really possible if that synchronized C struct is a member of another C struct. [1]

                Another one, although it depends on how exactly you've chosen to implement slices in C, since they aren't native: in our C code we pass around buffer slices as (pointer, len) pairs. That looks just like a &[T] slice to Rust. So we convert those types when we cross the FFI boundary. Only, they offer different guarantees: on the C side, the guarantee is generally that it's safe to dereference anything within bounds of the slice. On the Rust side, it's that, plus the pointer must point to a valid region of memory (non-null) even if the slice is empty. It's just similar enough that it's easy to overlook, and to trigger UB by creating an invalid Rust slice from a (NULL, 0) slice in C (which might be more common than you think, because so many things are default-initialized: a vector type which isn't populated with data might naturally have cap=0, size=0, buf=NULL).

                So yeah, in theory C + Rust get along well and in practice you're good 99+% of the time. But there are enough subtleties that if you're working on something mission critical you gotta be real careful when mixing the languages.

                [1] https://www.reddit.com/r/rust/comments/f3ekb8/some_nuances_o...
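
                One defensive pattern for the slice case, sketched on the C side (the helper name is made up; the aligned stand-in mirrors what Rust's NonNull::dangling() produces):

                  #include <stddef.h>
                  #include <stdint.h>

                  /* Before handing a (ptr, len) pair to Rust, map NULL to a
                     dangling-but-aligned non-null pointer, which is what &[T]
                     requires even for an empty slice. */
                  static inline const void *ffi_slice_data(const void *p, size_t align) {
                      return p ? p : (const void *)(uintptr_t)align;  /* e.g. _Alignof(float) */
                  }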

                • a1369209993 1523 days ago
                  > On the Rust side, it's that, plus the pointer must point to a valid region of memory (non-null) even if the slice is empty.

                  Do you have a citation for that? It seems obviously wrong[0] (since the slice points to zero bytes of memory), and I'm having trouble coming up with any situation that would justify it (except possibly using a NULL pointer to indicate the Nothing case of a Maybe<Slice> datum).

                  0: by which I mean that Rust is wrong to require that, not that you're wrong about what Rust requires.

                  • wallacoloo 1523 days ago
                    Well the docs have this to say [1]:

                    > `data` must be non-null and aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as data for zero-length slices using NonNull::dangling().

                    So yes, this requirement allows optimizations like having Option<&[T]> be the same size as &[T] (I just tested and this is the case today: both are the same size).

                    I'm not convinced that it's "wrong", though. If you want to be able to support slices of zero elements (without using an option/maybe type) you have to put something in the pointer field. C generally chooses NULL, Rust happens to choose a different value. But they're both somewhat arbitrary values. It's not immediately obvious to me that one is a better choice than the other.

                    [1] https://doc.rust-lang.org/std/slice/fn.from_raw_parts.html

                    • a1369209993 1523 days ago
                      > [1] https://doc.rust-lang.org/std/slice/fn.from_raw_parts.html

                      Thanks.

                      > having Option<&[T]> be the same size as &[T]

                      That is literally what I mentioned as a possible reason ("except possibly ..."), but what I overlooked was that you could take a mutable reference to the &[T] inside an Option<&[T]>, then store a valid &[T] into it - if NULL is allowed, you effectively mutated the discriminant of an enum while you have active references to its fields, violating some aspect of type/memory safety, though I'm not sure exactly which.

                      > C generally chooses NULL, Rust happens to choose a different value.

                      It's not about what pointer value the language chooses when it's asked to create a zero-length slice, it's about whether the language accepts a NULL pointer in a zero-length slice it finds lying around somewhere.

              • NextHendrix 1523 days ago
                Wanting to suddenly start using rust would mean putting any and all tools through a tool qualification process, which is incredibly time consuming and vastly expensive. In the field of safety critical software, fancy new languages are totally ignored for, at least partially, this reason. What's really safer, a new language that claims to be "safe" or a language with a formally verified compiler and toolchain where all of your developers have decades of experience with it and lots of library code that has been put through stringent independent verification and validation procedures, with proven track record in multiple other safety critical projects?
              • ajxs 1523 days ago
                Rust's official documentation on FFI ( https://doc.rust-lang.org/nomicon/ffi.html ) recommends using an external crate 'libc' to facilitate even the minimal FFI functionality. This crate is not part of Rust itself. It is apparently maintained by some of Rust's developers, but again, this is not an official Rust component. To me this does not seem like the kind of mature design you would rely on for interoperability with other languages.
                • cesarb 1523 days ago
                  Actually, Rust's std itself depends on that same libc crate, so it's a bit hard to say it's "not part of Rust itself".
                  • ajxs 1522 days ago
                    Uh... Am I misunderstanding something here, doesn't that just make the situation even more dire?
              • UncleMeat 1523 days ago
                Rust/C interop still has major challenges. It isn't seamless.
              • prostheticvamp 1523 days ago
                > Disclaimer: I still haven't actually learned any Rust, so my logic is CIO-level of potential ignorance

                And yet you seem to write with such confidence. /Are/ you a CIO? It’s the only thing that makes sense.

          • NobodyNada 1524 days ago
            Using a newer language carries a lot of risks and challenges for embedded programs:

            - There’s a high risk of bugs in the compiler/standard library in languages with lots of features

            - Usually, the manufacturer of an embedded platform provides a C compiler. Porting a new compiler can be a LOT of work, and the resulting port can often be very buggy

            - Even if you can get a compiler to work, many newer languages rely on a complicated runtime/standard library, which is a deal-breaker when your complete program has to fit in a few kilobytes of ROM

          • retrac 1524 days ago
            I think the answer was right there in their comment. "The compiler for our flight hardware platform is GCC 4.1 (upgrading to GCC 4.3 soon if we're lucky)".

            Often, the only high-level language available for an embedded platform is a standard C compiler. If you're lucky.

          • bdavis__ 1524 days ago
            Ada is used a fair amount in high $ projects. Toolchains are expensive, and the C compiler is provided for free from the chip / board vendor.
          • MiroF 1524 days ago
            Because safety critical fields are also slow-moving.
          • hechang1997 1523 days ago
            Isn't Ada already used in the aerospace industry?
      • enriquto 1524 days ago
        > Where can someone (i.e., in my case a software engineer who's working with Kotlin but has used C++ in his past) read more about modern approaches to writing embedded software for such systems?

        The JPL coding guidelines for C [1] are an amusing, first-hand read about this stuff. Not sure if you would qualify them as "modern approaches".

        [1] https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Dev...

        • ibrault 1523 days ago
          I can testify first-hand that the "functions in a single page" and "avoid the pre-processor" rules are not followed very closely haha
        • Cyph0n 1523 days ago
          > A minimum of two runtime assertions per function.

          I am guessing the idea is to catch runtime errors in the test phase, and assertions are disabled for the production build.
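
          A toy illustration of the rule (not JPL code): one assertion guards the precondition, one the postcondition.

            #include <assert.h>
            #include <stddef.h>

            size_t clamp_index(size_t i, size_t len) {
                assert(len > 0);                     /* precondition */
                size_t r = (i < len) ? i : len - 1;
                assert(r < len);                     /* postcondition */
                return r;
            }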

      • saagarjha 1524 days ago
        Searching for things like "MISRA C" and "real-time computing" will help you get started.
        • giu 1524 days ago
          Thanks a lot for the keywords; these are very good starting points to look for further stuff on the topic!

          Didn't know that there was a term (i.e., real-time computing) for this kind of system / these constraints.

          • monocasa 1523 days ago
            I'd also look at the Joint Strike Fighter C++ Coding Standard. Stroustrup himself hosts it as an example of how C++ is a multi-paradigm language that you can use a subset of to meet your engineering needs.

            http://www.stroustrup.com/JSF-AV-rules.pdf

        • jakeinspace 1524 days ago
          My older co-workers have some great alternative definitions of that initialism.
      • diego 1523 days ago
        If you want a "toy" example of this type of code, look at flight control software for drones such as Betaflight. You can modify this code and test it in real life. I did this, as I contributed the GPS Rescue feature. I have a blooper reel of failures during testing.

        https://github.com/betaflight/betaflight/

      • amelius 1524 days ago
        Just read docs that were written in the 70s, before the advent of garbage collection.
        • p_l 1523 days ago
          Garbage Collection is from 1959, though - and Unix & C's original model pretty much matches "bump allocate then die" with sbrk/brk and lack of support for moving.

          Fully static allocation is the norm though for most "small" embedded work.

    • dahart 1523 days ago
      Note the story isn’t detailed enough to know whether they were using what we’d normally call dynamic memory allocation. The embedded system might not have had a memory manager. Or they might have been, like you, fully statically allocating the memory. Kent could be noting that they’ll run off the end of their statically allocated memory, or run out of address space, because the code isn’t checking the bounds and may be doing something like appending history or sensor data to an array. I have no idea, obviously; I'm just imagining multiple ways Kent’s very brief description could be interpreted. It maybe shouldn’t be assumed that the engineers were doing something stupid, or even something very different from what we’d do today.
    • 2OEH8eoCRo0 1524 days ago
      This right here. My previous job was in defense, and although it was not an embedded project, all the software architects on it were embedded guys used to doing things their way. Dynamic allocation was strictly forbidden.
    • bootloop 1524 days ago
      I would imagine it might make sense if you offload some short, less frequent but memory-intensive subroutines (sensors, navigation) to run in parallel with the rest of the system. But I would still avoid system-wide dynamic memory management and just implement it specifically for that part.
      • Dylan16807 1524 days ago
        Whichever ones you allow to run in parallel need to have enough memory to run at the same time, but such a situation might happen quite rarely.

        In other words, that sounds like a system where dynamic memory management is significantly riskier and harder to test than usual!

        Why not static allocation, but sharing memory between the greedy chunks of code that can't run parallel to each other? (I assume these chunks exist, because otherwise your worst-case analysis for dynamic memory would be exactly the same as for static, and it wouldn't save you anything.)
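
        A sketch of that sharing in C (the classic embedded overlay via a union; names and sizes are illustrative):

          /* Two memory-hungry subsystems that never run at the same time can
             overlay their static working storage, so the footprint is the
             max of the two rather than the sum. */
          static union {
              struct { float samples[4096]; } sensor_scratch;
              struct { double waypoints[1024][3]; } nav_scratch;
          } overlay;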

        • bootloop 1523 days ago
          > Why not static allocation, but sharing memory between the greedy chunks of code that can't run parallel to each other?

          That's what I wanted to say with my comment actually.

      • bdavis__ 1524 days ago
        When you design the system, you make sure there is enough physical RAM to do the job. Period. The problem space is bounded.
    • onceUponADime 1523 days ago
      DO 187A ?
      • p_l 1523 days ago
        DO-178C, actually, but good pointer.
  • zdw 1523 days ago
    Another "works because it's in a missile and only has to run for as short time" story:

    Electronics for trajectory tracking and guidance on a particular missile weren't running fast enough, namely the older CPU that the software was targeting. The solution was to overclock the CPU to double its rated speed, and to redirect a tiny amount of the liquid oxygen that happened to also be used in the propellant system to cool down the electronics.

    This apparently worked fine - by the time the missile ran out of LOX and the electronics burned themselves out, it was going so fast on a ballistic trajectory that it couldn't be reasonably steered anyway.

    The telemetry for the self destruct was on a different system that wasn't overclocked, in case of problems with the missile.

  • Out_of_Characte 1524 days ago
    What an interesting concept. Good programmers always consider certain behaviours to be wrong, memory 'leaks' being one of them. But this real application of purposefully not managing memory is also an interesting thought exercise. However counterintuitive, a memory leak in this case might be the optimal solution in this problem space. I just never thought I would have to think of an object's lifetime in such a literal sense.

    Edit: of course HN reacts pedantically when I claim good programmers always consider memory leaks wrong. Do I really need to specify the obvious every time?

    • blattimwind 1524 days ago
      Cleaning up memory is an antipattern for many tools, especially of the EVA/IPO model (input-process-output). For example, cp(1) in preserve hard links mode has to keep track of things in a table; cleaning it up at the end of the operation is a waste of time. Someone "fixed" the leak to make valgrind happy and by doing so introduced a performance regression. Another example might be a compiler; it's pointless to deallocate all your structures manually before calling exit(). The kernel throwing away your address space is infinitely faster than you chasing every pointer you ever created down and then having the kernel throw away your address space. The situation is quite different of course if you are libcompiler.
      • saagarjha 1524 days ago
        > The kernel throwing away your address space is infinitely faster than you chasing every pointer you ever created down and then having the kernel throw away your address space.

        In this case you normally want to allocate an arena yourself.

      • atq2119 1523 days ago
        > Another example might be a compiler; it's pointless to deallocate all your structures manually before calling exit().

        And now the compiler can no longer be embedded into another application, e.g. an IDE.

        It's a reasonably pragmatic way of thinking, but beware the consequences. One benefit of working with custom allocators is that you can have the best of both worlds. Unfortunately, custom allocators are clumsy to work with.

        • badsectoracula 1523 days ago
          Solve the problem you have now, not the problem you may not have later. You can worry about that when the time comes, if it ever comes.

          In the case of compiler, one solution would be to replace all calls to `malloc` with something like `ccalloc` that simply returns pieces of a `realloc`'d buffer which is freed after the in-IDE compiler has finished compiling.

      • zozbot234 1524 days ago
        "Throwing away" a bunch of address space also happens when freeing up an arena allocation, and that happens in user space. This means that you might sometimes be OK with not managing individual sub-allocations within the arena, for essentially the same reason: it might be pointless work given your constraints.
      • ufo 1523 days ago
        Is there a way to tell Valgrind that a certain memory allocation is intentionally being "leaked", and should not produce a warning?
    • emsy 1524 days ago
      No, they did what good engineers do: They analyzed the problem and found a feasible and robust solution. Following rules without thinking is not what good programmers do. I’d argue that most problems of modern software development stem from this mindset, even when it’s a rule that should be applied 99% of the time.
      • harryf 1524 days ago
        All good until, years later, a new team, unaware of the leak, builds the same system into a longer-range missile...
        • emsy 1524 days ago
          Right, but that doesn’t invalidate the previous decisions made at that time. And it’s not only a problem in software. Accidents can happen because engineers trade material strength to reduce weight, only to find out that over a number of generations they slowly crept above the safety margin that was decided upon 10 years ago, resulting in catastrophic failure. I’m sorry I don’t have the actual story at hand right now, but it’s not unimaginable either way. The problem you described has more to do with proper passing of knowledge and understanding of existing systems than with strictly adhering to a fixed set of best practices.
    • bob1029 1524 days ago
      It is interesting what you can come up with if you rely on constraints in the physical realm to inform your virtual realm choices. I've been looking at various highly-available application architectures and came across a similar idea to the missile equation in the article. If you are on a single box your hands are tied. But, if you have an N+1 (or more) architecture, things can get fun.

    In theory, you could have a cluster of identical nodes, each handling client requests (i.e. behind a load balancer). Each node would monitor its own application memory utilization and automatically cycle itself after some threshold is hit (after draining its buffers). From the perspective of the programmer, you now get to operate in a magical domain where you can allocate whatever you want and never think about how it has to be cleaned up. Obviously, you wouldn't want to abuse malloc, but as long as the cycle time of each run is longer than a few minutes I feel the overhead is accounted for.

      Also, the above concept could apply to a single node with multiple independent processes performing the same feat, but there may be some increased concerns with memory fragmentation at the OS-level. Worst case with the distributed cluster of nodes, you can simply power cycle the entire node to wipe memory at the physical level and then bring it back up as a clean slate.
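
      A sketch of the self-cycling check in C (hypothetical; uses POSIX getrusage, where ru_maxrss is in kilobytes on Linux):

        #include <stdlib.h>
        #include <sys/resource.h>

        static const long MAX_RSS_KB = 512L * 1024;   /* threshold, tuned */

        /* Called periodically between requests. */
        void maybe_cycle(void) {
            struct rusage ru;
            if (getrusage(RUSAGE_SELF, &ru) == 0 && ru.ru_maxrss > MAX_RSS_KB) {
                /* drain buffers, deregister from the load balancer, then: */
                exit(0);   /* the supervisor brings up a fresh replacement */
            }
        }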

  • simias 1524 days ago
    I think it's a bad mindset to leak resources even when it doesn't effectively matter. In non-garbage-collected languages especially, because it's important to keep in mind who owns what and for how long. Not leaking also makes refactoring easier, because leaked resources effectively become a sort of implicit global state you need to keep track of. If a function that was originally called only once at startup is now called repeatedly, and it turns out that it leaks some memory every time, you now have a problem.

    In this case I assume that a massive amount of testing mitigates these issues however.

    • mannykannot 1524 days ago
      I think you are conflating two issues: while one should understand who owns what and for how long, it does not follow that one should always free resources even when it is not necessary, if doing so adds complexity and therefore more things to go wrong, or if it makes things slower than optimal.

      In this particular case, correctness was not primarily assured by a massive amount of testing (though that may have been done), but by a rigorous static analysis.

      • anarazel 1523 days ago
        Freeing memory also isn't free - the bookkeeping necessary, both at allocation and at free time, and potentially also for the allocating code, has costs.

        In Postgres, memory contexts are used extensively to manage allocations. And in quite a few places we intentionally don't do individual frees, but reset the context as a whole (freeing the memory). Obviously only where the total amount of memory is limited...

      • jldugger 1523 days ago
        I feel like I've read about some rocket launch failures that were caused in part by launch delays leading to overflow and sign flipping, but can't find it now =/

        It may be unwise to override static analysis (a leak is found) with heuristics (the program won't run long enough to matter).

        • mannykannot 1523 days ago
          It is not just a heuristic if you have hard upper bounds on the things that matter - in that case, it is static analysis. A missile has a limited, and well-defined, fuel supply.

          In the case of memory management, it is not enough to just free it after use; you need to ensure that you have sufficient contiguous memory for each allocation. If you decide to go with a memory-compaction scheme, you have to be sure it never introduces excessive latency. It seems quite possible that to guarantee a reallocation scheme always works, you have to do more analysis than you would for a no-reallocation scheme with more memory available.

          • jldugger 1522 days ago
            This depends entirely on the mode of operation, which I suspect neither of us knows in great detail; if in any circumstance the runtime of the program is not tied to the expenditure of fuel, you have a literal ticking time bomb.

            Ideally we'd be able to tie such assertions into a unified static analysis tool, rather than having humans evaluate conflicting analyses. And God forbid the hardware parameters ever change, because then you need to re-evaluate every such decision, even the ones nobody documented. Case in point: Ariane 5 (not exactly my original scenario, but exactly this one -- a 64-bit to 16-bit conversion overflow caused a variety of downstream effects ending in mission failure).

            • mannykannot 1522 days ago
              Well, yes, I already explained that it depended on circumstances, and just let me add that I would bet the engineer quoted in the article (explaining that the memory leaks were a non-issue) knew much more about the specifics than either of us.

              The Ariane 5 issue is not, of course, a memory leak or other resource-release-and-reuse issue. It is a cautionary tale about assumptions (such as the article author's assumption that memory leaks are always bad).

    • ptero 1524 days ago
      In a perfect world, yes. But in a hard real time system (and much of missile control will likely be designed as such), timing may be the #1 focus. That is, making sure that events are handled in at most X microseconds or N CPU cycles. In such cases adding GC may open a new can of worms.

      I agree that in general leaking resources is bad, but sometimes it is good enough by a large margin. Just a guess.

      • H8crilA 1524 days ago
        It would be an acceptable solution if the memory supply vastly outsized the demand, by over an order of magnitude. For example, if the program never needed more than 100MiB, you'd install 1GiB or 10GiB. 10GiB is still nothing compared to the cost of the missile, and you get the benefit of truly never worrying about memory management latency.

        My favorite trick to optimizing some systems is to see if I can mlock() all of the data in RAM. As long as it's below 1TiB it's a no brainer - 1TiB is very cheap, much cheaper than engineer salaries that would otherwise be wasted on optimizing some database indices.
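
        The whole-process version of that trick is nearly a one-liner (Linux/POSIX; it needs enough physical RAM and a sufficient RLIMIT_MEMLOCK or CAP_IPC_LOCK):

          #include <sys/mman.h>

          /* Pin current and future pages so the data set can never be
             swapped out; returns 0 on success. */
          int pin_everything(void) {
              return mlockall(MCL_CURRENT | MCL_FUTURE);
          }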

        • bathtub365 1524 days ago
          What’s your rationale for picking an order of magnitude instead of, say, double?
          • H8crilA 1523 days ago
            10x is a very safe margin. I suppose 2x is fine if you really know your code (usually you don't, really, unless you wrote all of it yourself).
            • bathtub365 1523 days ago
              Well, I’m not sure I’d call buying 1TiB of RAM and mlock’ing it all an optimization.
              • H8crilA 1523 days ago
                It is not an "optimization" in the sense that it's not engineer's work.

                It is in the sense that it gets the speedup job done.

                • bathtub365 1522 days ago
                  That’s just throwing hardware at a performance problem, not optimization.
                  • H8crilA 1522 days ago
                    Name it however you want, gets the job done (sometimes).
          • lonelappde 1523 days ago
            Double is an order of magnitude.
            • ficklepickle 1523 days ago
              I suppose it is, in binary. Although humans generally use base 10.

              I might have to start saying "a binary order of magnitude" instead of "double" when circumstances call for gobbledygook.

        • bdavis__ 1524 days ago
          There are always constraints. Other than the cost of the memory, which may appear minimal, there are many others. For a missile, that bigger memory chip may require more current; more current means a bigger power supply, or a thicker wire. That might add ounces to the weight, and in this environment that may be significant (probably not in this specific example, but apply this perspective to every part selected... they add up).
        • saagarjha 1524 days ago
          > As long as it's below 1TiB it's a no brainer - 1TiB is very cheap, much cheaper than engineer salaries that would otherwise be wasted on optimizing some database indices.

          Until you have ten thousand machines in your cluster…

          • H8crilA 1523 days ago
            I meant 1TiB total.
        • anarazel 1523 days ago
          One TB of memory is actually quite expensive. And uses a fair bit of power.
          • H8crilA 1523 days ago
            Not at all compared to salaries.

            I mean just think about how many VMs you can buy for $200k-$500k/yr (total cost to the company of a senior engineer).

            • imtringued 1523 days ago
              According to AWS you could only afford 10 instances, each with 976GB of RAM, for that salary (500k). If you were to do nothing but buy the raw RAM, it would cost you 50k. But you also need servers [0] to actually put the RAM into. So it's probably closer to 70k. RAM isn't as cheap as you think.

              [0] and a network and network admins and server admins and and and

              • H8crilA 1523 days ago
                So you mean 4-10TiB is the equivalent of a senior engineer via AWS pricing and my rough estimate of the cost of an engineer. I think we agree?
    • alerighi 1524 days ago
      Freeing memory (or running a garbage collector) has a cost associated with it, and if you are freeing memory (or closing files, sockets, etc) before exiting a program it's time wasted, since the OS will free all the resources associated with the program anyway.

      And a lot of languages, and certainly newer versions of the JVM, do exactly that: they don't free memory, and don't run the garbage collector until the available memory gets too low. And that is fine for most applications.

    • samatman 1523 days ago
      There are a number of old-school C programs that follow a familiar template: they're invoked on the command line, run from top to bottom, and exit.

      For those, it's often the case that they only allocate, and have a cleanup block for resources like file handles that genuinely must be released; any error longjmps there, and it runs at the end under normal circumstances.

      This is basically using the operating system as the garbage collector, and it works fine.
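
      A sketch of that template in C (file names illustrative; the volatile qualifiers matter because the variables are written after setjmp and read after longjmp):

        #include <setjmp.h>
        #include <stdio.h>

        static jmp_buf fail;   /* any error longjmps here */

        int main(void) {
            FILE * volatile in = NULL, * volatile out = NULL;
            if (setjmp(fail) == 0) {
                if (!(in = fopen("input.txt", "r"))) longjmp(fail, 1);
                if (!(out = fopen("output.txt", "w"))) longjmp(fail, 1);
                /* ... do the work; heap allocations are never freed ... */
            }
            if (out) fclose(out);   /* the single cleanup block */
            if (in) fclose(in);
            return 0;               /* the OS reclaims the heap */
        }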

    • ksherlock 1524 days ago
      "I gave cp more than a day to [free memory before exiting], before giving up and killing the process."

      https://news.ycombinator.com/item?id=8305283

      https://lists.gnu.org/archive/html/coreutils/2014-08/msg0001...

  • lmilcin 1523 days ago
    I once worked on an application for which even a single failure meant considerable loss for the company, possibly including closure.

    By design, there was no memory management. The memory was only ever allocated at the start and never de-allocated. All algorithms were implemented around the concept of everything being a static buffer of infinite lifetime.

    It was not possible to spring a memory leak.

    • conro1108 1523 days ago
      This sounds fascinating, could you elaborate any on why a single failure of this application would be so catastrophic?
      • lmilcin 1523 days ago
        I can't discuss this particular application.

        But there are whole classes of applications that are also mission critical -- an example might be software driving your car or operating dangerous chemical processes.

        For the automotive industry there are the MISRA standards, which we used to guide our development process, amongst other ideas from NASA and Boeing (yeah, I know... it was some time ago).

    • voldacar 1523 days ago
      How did this work, exactly? Did the program just never have to work on data greater than a certain statically known size? Or did it process anything larger than that in chunks, instead of mallocing a buffer of the necessary size?
      • lmilcin 1523 days ago
        Not necessarily. What this means is that you need a limit for every data structure in the application, and a strategy for either preventing the limit from ever being hit or dealing with the case when it is.

        Imagine a simple example of a webapp and number of user sessions.

        Instead of the app throwing random errors or slowing down drastically, you could have a hard limit on the number of active sessions.

        Whenever the app tries to allocate (find a slot) for a user session but it can't (all objects are already used), it will just throw an error.

        This ensures that the application will always work correctly once you log in -- you will not experience a slowdown because too many users logged in.

        Now, you also need to figure out what to do with users that received an error when trying to log in. They might receive an error and be told to log in later, they might be put on hold by the UI and logged in automatically later, or they might be redirected by the load balancer to another server (maybe even one started on demand).

        When you start doing this for every aspect of the application, you get to a situation where the application never really leaves its design parameters, which is one of the important aspects of ultra-stable operation.
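
        A sketch of the session example in C (sizes and names illustrative): "allocating" a session means finding a free slot in a static table, never calling malloc():

          #include <stdbool.h>

          enum { MAX_SESSIONS = 1000 };   /* the hard, designed-in limit */

          static struct { bool in_use; int user_id; } sessions[MAX_SESSIONS];

          int session_open(int user_id) {
              for (int i = 0; i < MAX_SESSIONS; i++) {
                  if (!sessions[i].in_use) {
                      sessions[i].in_use = true;
                      sessions[i].user_id = user_id;
                      return i;          /* the slot index is the handle */
                  }
              }
              return -1;   /* limit hit: tell the user to retry later */
          }

          void session_close(int handle) { sessions[handle].in_use = false; }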

  • crawshaw 1524 days ago
    This is an example of garbage collection being more CPU efficient than manual memory management.

    It has limited application, but there is a more common variant: let process exit clean up the heap. You can use an efficient bump allocator for `malloc` and make `free` a no-op.
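
    A sketch of that variant in C (my_malloc/my_free are stand-ins for an actual malloc interposition):

      #include <stddef.h>

      static unsigned char heap[1 << 20];   /* capacity chosen for the tool */
      static size_t brk_off;                /* cf. sbrk's program break */

      void *my_malloc(size_t n) {
          n = (n + 15) & ~(size_t)15;       /* keep pointers aligned */
          if (brk_off + n > sizeof heap) return NULL;
          void *p = heap + brk_off;
          brk_off += n;
          return p;
      }

      void my_free(void *p) { (void)p; }    /* deliberate no-op; exit() cleans up */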

    • acqq 1524 days ago
      There was also a variant of this with hard drives: building Windows produced a huge number of object files, so the trick was to use a whole hard disk (or a partition) for them. Before the next rebuild, deleting all the files would take far more time than a "quick" reformat of the whole disk, so the latter was used.

      (I am unable to find a link that talks about that, however).

      In general, throwing away a set of things all at once, together with the structures that maintain it, is always faster than throwing away every item one by one while keeping those structures consistent, even though you know none of it is needed at the end.

      An example of arenas in C: "Fast Allocation and Deallocation of Memory Based on Object Lifetimes", Hanson, 1988:

      ftp://ftp.cs.princeton.edu/techreports/1988/191.pdf

      • GordonS 1523 days ago
        That's quite a clever solution, I doubt I would have thought of that!

        Windows has always been my daily driver, and I really do like it. But I wish deleting lots of files would be much, much faster. You've got time to make a cup of coffee if you need to delete a node_modules folder...

        • acqq 1523 days ago
          > I wish deleting lots of files would be much, much faster. You've got time to make a cup of coffee if you need to delete a node_modules folder

          The example I gave was for the old times, when people had much less RAM and disks had to move physical heads to access different areas. Now, with SSDs, it shouldn't be nearly as bad (at least when using lower-level approaches). How do you start that action? Do you use the GUI? Are the files "deleted" to the recycle bin? The fastest way to do it is "low level", i.e. without moving the files to the recycle bin, and without a GUI that is in any way suboptimal (I have almost never used Windows Explorer, so I don't know if it has additional inefficiencies).

          https://superuser.com/questions/19762/mass-deleting-files-in...

          • GordonS 1523 days ago
            Even with an SSD, it's still bad. Much better than the several minutes it used to take with an HDD, but still annoying.

            I just tried deleting a node_modules folder with 18,500 files in it, hosted on an NVMe drive. Deleting from Windows Explorer, it took 20s.

            But then I tried `rmdir /s /q` from your SU link - 4s! I remember trying tricks like this back with an HDD, but don't remember it having such a dramatic impact.

            • acqq 1523 days ago
              >>> You've got time to make a cup of coffee if you need to delete a node_modules folder...

              > Deleting from Windows Explorer, it took 20s.

              > `rmdir /s /q` from your SU link - 4s

              OK, so you saw that your scenarios could run much better, especially if Windows Explorer is avoided. But in Explorer, is that time you measured with deleting to the Recycle Bin or with the Shift Delete (which deletes irreversibly but can be faster)?

              Additionally, I'd guess you don't have to wait at all (i.e. you can reduce it to 0 seconds) if you first rename the folder and then delete the renamed one in the background while continuing with your work -- e.g. if you want to create new content in the original location, it's immediately free after the rename, and the rename is practically instantaneous.

              • GordonS 1523 days ago
                I pretty much exclusively use SHIFT-DEL (which has once or twice resulted in bad times!).

                I didn't think about renaming then deleting - that's quite a nice workaround!

        • imtringued 1523 days ago
        What I have noticed is that CLI commands like rm -rf <dir> are orders of magnitude faster than the file explorer on Linux too. When I want to remove and then re-copy 500 .wav files for my Anki deck, it takes a minute or longer in the file explorer. With rm -rf media.collection/ && cp -rf <dir> media.collection/ it doesn't even take a second.
  • LucaSas 1523 days ago
    This pops up again from time to time. I think what people should take away from it is that garbage collection is not just what you see in Java and other high-level languages.

    There are a lot of strategies for garbage collection, and they are often used in low-level systems too: per-frame temporary arenas in games, for example, or short-lived programs that just allocate and never free.

    • asveikau 1523 days ago
      Once you set a limit like this, though, it's brittle, and your code becomes less maintainable and less flexible in the face of change. That is why a general-purpose strategy is good to use.
  • andreareina 1524 days ago
    "Git is a really great set of commands and it does things like malloc(); malloc(); malloc(); exit();"

    https://www.youtube.com/watch?v=dBSHLb1B8sw&t=113

    • jldugger 1521 days ago
      And that really bit hard when people wanted to start running git web servers. All the library code was designed to exit upon completion with no GC, and now you're running multiple read queries per second with no free(). Oops.
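
      Schematically, the shape of the failure mode looks like this (not actual git code, just an illustration):

          #include <stdlib.h>

          /* Fine as a one-shot CLI: the process exits in milliseconds
             and the OS reclaims the whole heap at exit(). */
          static int run_query(void) {
              char *buf = malloc(1 << 20);
              /* ... walk objects, print results, never free(buf) ... */
              (void)buf;
              return 0;
          }

          int main(void) {
              return run_query();
          }

          /* The same code hoisted into a server loop leaks 1 MiB per
             request, because "freed at exit" now means "freed never".
             (Unused here; shown for contrast.) */
          static void serve_forever(void) {
              for (;;) {
                  /* accept a request ... */
                  run_query();
              }
          }

      The server case needs an explicit per-request lifetime, whether that's free() calls or a request-scoped pool that is thrown away whole.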
  • GordonS 1524 days ago
    A bit OT, but I wonder how I'd feel if I was offered a job working on software for missiles.

    I'm sure the technical challenge would be immensely interesting, and I could tell myself that I cared more about accuracy and correctness than other potential hires... but from a moral standpoint, I don't think I could bring myself to do it.

    I realise of course that the military uses all sorts of software, including line of business apps, and indeed several military organisations use the B2B security software that my microISV sells, but I think it's very different to directly working on software for killing machines.

    • dahart 1524 days ago
      I had a family friend who worked on missiles and drones and other defense systems. He was really one of my dad’s running buddies, and he was a super nice guy, had 4 kids, went to church, etc.

      One day, I believe during the Iraq occupation, maybe ~12 or 13 years ago, I asked him very directly how he felt about working on these killing machines and whether it bothered him. He smiled and asked if I'd rather have the war here in the U.S. He also told me he felt he was saving lives by being able to so directly target the political enemies, without as much collateral damage as in the past. New technology, he truly believed, was preventing innocent civilians from being killed.

      It certainly made me think about it, and maybe appreciate somewhat the perspective of people who end up working on war technology, even if I wouldn’t do it. This point of view assumes we’re going to have a war anyway, and no doubt the ideal is just not to have wars, so maybe there’s some rationalization, but OTOH maybe he’s right that he is helping to make the best of a bad situation and saving lives compared to what might happen otherwise.

      • alex_young 1524 days ago
        Costa Rica hasn’t had a standing military since 1948. They are in one of the most politically unstable parts of the world and do just fine without worry of invasion.

        The US hasn’t been attacked militarily on its own soil in the modern era.

        The US military monopoly hasn’t prevented horrific attacks such as 9/11 executed by groups claiming to be motivated by our foreign military campaigns.

        I think there is a valid question about the moral culpability of working in this area.

        • 3pt14159 1524 days ago
      It's a valid question, but realistically, if Costa Rica were invaded, a number of countries would step in to help. I love Costa Rica; it's one of the most beautiful countries I've been to, and I do appreciate the political statement they're making, but at the same time they're in a pretty unique situation.

          As for the ethics of working on weapons, I think there is a lot of grey when it comes to software. It tends to centralize wealth, since once you get it right it works for everyone. It tends to be dual use, because a hardened OS can be used for both banks and tanks. Even developments in AI are worrying because they're so clearly applicable to the military.

          Would I work on a nuclear bomb? No. Would I work on software that does a better job of, say, facial recognition to lessen the likelihood of a predator drone killing an innocent civilian? Maybe. It's not an all or nothing thing.

          • kragen 1523 days ago
            In the last 40 years, Panama and Grenada were invaded, Honduras had a coup, Colombia had a civil war, Venezuela is currently having a sort of civil war, Nicaragua's government was overthrown by a foreign-armed terrorist campaign, and El Salvador's government sent death squads out to kill its subjects. Nobody stepped in to help any of them except Colombia. Why would Costa Rica be different?

            > Would I work on software that does a better job of, say, facial recognition to lessen the likelihood of a predator drone killing an innocent civilian?

            The logical extreme of this is Death Note: the person who has the power simply chooses who should die, and that person dies, immediately and with no opportunity for resistance and no evidence of who killed them. Is that your ideal world? Who do you want to have that power — to define who plays the role of an “innocent civilian” in your sketch — and what do you do if they lose control of it? What do you do if the person or bureaucracy to which you have given such omnipotence turns out not to be incorruptible and perfectly loving?

            I suggest watching Slaughterbots: https://m.youtube.com/watch?v=9CO6M2HsoIA

            • dahart 1523 days ago
              > The logical extreme of this [...] Is that your ideal world?

              Clearly not. Would you please not post an extreme straw-man and turn this into polarizing ideological judgement? The post you’re responding to very clearly agreed that war is morally questionable, and very clearly argued for middle ground or better, not going to some extreme.

              You don’t have to agree with war or endorse any kind of killing in any way to see that some of the activities involved by some of the people are trying to prevent damage rather than cause it.

              Intentionally choosing not to acknowledge the nuance in someone’s point of view is ironic in this discussion, because that’s one of the ways that wars start.

              • kragen 1523 days ago
                You assert that "software that does a better job of, say, facial recognition to lessen the likelihood of a predator drone killing an innocent civilian" is "middle ground", "not going to some extreme", "trying to prevent damage", and "nuanced".

                It is none of those. It is a non-nuanced extreme that is going to cause damage and kill those of us in the middle ground. Reducing it to a comic book is a way to cut through the confusion and demonstrate that. If you have a reason (that reasonable people will accept) to think that the comic-book scenario is undesirable, you will find that that reason also applies to the facial-recognition-missiles case — perhaps more weakly, perhaps more strongly, but certainly well enough to make it clear that amplifying the humans' power of violence in that way is not going to prevent damage.

                Moreover, it is absurd that someone is proposing to build Slaughterbots and you are accusing me of "turn[ing] this into polarizing ideological judgement" because I presented the commonsense, obvious arguments against that course of action.

                • p1esk 1523 days ago
                  What's your moral stance on developing defense mechanisms against Slaughterbot attacks? What if the best defense mechanism is killing the ones launching the attacks?
                  • kragen 1523 days ago
                    I think developing defense mechanisms against Slaughterbot attacks is a good idea, because certainly they will happen sooner or later. If the best defense mechanism is killing the ones launching the attacks, we will see several significant consequences:

                    1. Power will only be exercised by the anonymous and the reckless; government transparency will become a thing of the past. If killing the judge who ruled against you, or the school-board member who voted against teaching Creationism, or the wife you're convinced is cheating on you, is as easy and anonymous as buying porn on Amazon, then no president, no general, no preacher, no judge, and no police officer will dare to show their face. The only people who exercise power non-anonymously would be those whose impulsiveness overcomes their better judgment.

                    2. To defend against anonymity, defense efforts will necessarily expand to kill not only those who are certain to be the ones launching the attacks, but those who have a reasonable chance of being the ones launching the attacks. Just as the Khmer Rouge killed everyone who wore glasses or knew how to read, we can expect that anyone with the requisite skills whose loyalty to the victors is in question will be killed. Expect North-Korea-style graded loyalty systems in which having a cousin believed to have doubted the regime will sentence you to death.

                    3. Dead-Hand-type systems cannot be defended against by killing their owners, only by misleading their owners as to your identity. So they become the dominant game strategy. This means that it isn't sufficient to kill people once they are launching attacks; you must kill them before they have a chance to deploy their forces.

                    4. Battlefields will no longer have borders; war anywhere will mean war everywhere. Combined with Dead Hand systems, the necessity for preemptive strikes, and the enormous capital efficiency of precision munitions, this will result in a holocaust far more rapid and complete than nuclear weapons could ever have threatened.

                    While this sounds like an awesome plot for a science-fiction novel, I'd rather live in a very different future.

                    So, I hope that we can develop better defense mechanisms than just drone-striking drone pilots, drone sysadmins, and drone programmers. For example, pervasive surveillance (which also eliminates what we know as "human rights", but doesn't end up with everyone inevitably dead within a few days); undetectable subterranean fortresses; living off-planet in small, high-trust tribes; and immune-system-style area defense with nets, walls, tiny anti-aircraft guns, and so on. With defense mechanisms such as these, the Drone Age should be more survivable than the Nuclear Age.

                    But, if we can't develop better defense mechanisms than killing attackers, we should delay the advent of the drone holocaust as long as we can, enabling us to enjoy what remains of our lives before it ends them.

                    • p1esk 1523 days ago
                      You paint a bleak future. Keep in mind though, there have been many dark moments in human history when a lot of people got killed for very bad reasons, and yet here we are.

                      > is as easy and anonymous as buying porn on Amazon

                      I'm not sure ease of use is such a game changer. You can buy a drone today, completely anonymously, strap some explosives to it, remotely fly it into someone and detonate it from a few hundred yards away. Easily available cheap drones like that have existed for at least a decade, yet I don't remember many cases where someone used them for this purpose. Does the existence of a Slaughterbot-like product make it easier? If some terrorist wants to kill a bunch of people, how is it easier than just detonating a truck full of C4? To a terrorist, this technology does not provide much benefit over what's already available. How about governments? I don't see it - if a government wants someone dead, they will be dead (either officially, e.g. Bin Laden, or unofficially, Epstein-style). If a government wants a bunch of people dead, the difficulty lies not in technology but in PR. I doubt there is a lack of trigger-happy black-ops types (or "patriots") ready to do whatever you can program a drone to do. Here I'm talking about democratic first-world governments. It's even less clear that tyrannical governments would benefit much from this technology - sending a bunch of agents to arrest and execute people is just as effective. I don't think the tactical difficulty of finding and physically shooting people is a big concern for decision makers. As you yourself pointed out, the Khmer Rouge and North Korea had no problem doing that without any advanced technology.

                      > you must kill them before they have a chance to deploy their forces.

                      Yes. And that's how it has been at least since 9/11 - CIA drone strikes all over the world. Honestly, I'd much rather have them only have done drone strikes if at all possible (instead of invading Iraq with boots on the ground).

                      > more rapid and complete than nuclear weapons could ever have threatened

                      Sorry, I'm not seeing it - how would this change major conflicts and battlefields? If you have a battlefield, and you know who your enemy is, you don't really need Slaughterbots - you need big guns and missiles that can do real damage. It's much easier to defend soldiers against tiny drones than against heavy fire. If you don't know who your enemy is, say terrorists mixed in the crowd of civilians, how would face detection help you? As for precise military strikes - we're already doing it with drones, so nothing new here.

                      > end up with everyone inevitably dead within a few days ... enjoy what remains of our lives before it ends them

                      You are being overly dramatic. Yes, terrorists and evil governments will keep murdering people just like they always have. No, this technology does not make it fundamentally easier. Is the world today a scary place to live in? Yes, but for very different reasons - think about what will start happening in a few decades when the global temperature rises a couple of degrees, triggering potentially cataclysmic events affecting the livelihoods of millions, or when global pollution contaminates air, water and food to the point where it makes people sick. I really hope we will have developed advanced technology by that time to deal with those issues.

                      But of course it's way more fun to discuss advanced defense methods against killer drones. So let's do that :) I was thinking that some kind of small EMP device could be used whenever slaughterbots are detected, but after reading a little about EMPs it seems it would not hurt them much, because these drones are so small. I don't think nets of any kind would be effective - I just don't see how you would cover a city with nets. Underground fortresses and off-planet camps can only protect a small number of people. In some scenarios a laser-based defense system could be effective (deployed in high-value/high-risk environments), and of course we can keep tons of similar drones ready to attack other drones at multiple locations throughout a city. Neither of these seems particularly effective against a large-scale attack, and both require very good mass surveillance. I think a combination of very pervasive surveillance with an ability to deliver defense drones quickly to the area of an attack (perhaps carried in a missile, fired automatically as soon as a threat level calculated by the surveillance system crosses some threshold) is the best option. The defense drones could be much more expensive than the attack drones, and so be able to eliminate them quickly. Fascinating engineering challenge!

                      • kragen 1523 days ago
                        > CIA drone strikes all over the world

                        The US is known to have carried out drone strikes in Afghanistan, Yemen (including against US citizens), Pakistan, Libya, and Somalia; authority over the assassination program was officially transferred from the CIA to the military by Obama. That leaves another 200-plus countries whose citizens do not yet know the feeling of helpless terror when the car in front of you on the highway explodes into a fireball unexpectedly, presaged only by the far-off ripping sound of a Reaper on the horizon, just like most days. The smaller drones that make this tactic affordable to a wider range of groups will give no such warning.

                        > It’s much easier to defend soldiers against tiny drones than against heavy fire.

                        Daesh used tiny drones against soldiers with some effectiveness, but there are several major differences between autonomous drones and heavy fire. First, heavy fire is expensive, requiring either heavy weapons or a large number of small arms. Second, autonomous drones (which Daesh evidently did not have) can travel a lot farther than heavy fire; the attacker can target the soldiers’ families in another city rather than the soldiers themselves, and even if they are targeting the soldiers directly, they do not need to expose themselves to counterattack from the soldiers. Third, almost all bullets miss, but autonomous drones hardly ever need to miss; like a sniper, they can plan for one shot, one kill.

                        You may be thinking of the 5 m/s quadcopters shown in the Slaughterbots video, but there’s no reason for drones to move that slowly. Slingshot stones, arrows from bows, and bottle-rockets all move on the order of 100 m/s, and you can stick guidance canards on any of them, VAPP-style.

                        > If you don’t know who your enemy is, say terrorists mixed in the crowd of civilians, how would face detection help you?

                        Yes, it’s true that if your enemy is protected by anonymity, face-recognition drones are less than useful — that’s why the first step in my scenario is the end of any government transparency, because the only people who can govern in that scenario (in the Westphalian sense of applying deadly force with impunity) are anonymous terrorists. But if the terrorists know who their victims are, the victims cannot protect themselves by mixing into a crowd of civilians.

                        > Yes, terrorists and evil governments will keep murdering people just like they always have. No, this technology does not make it fundamentally easier.

                        Well, on the battlefield it definitely will drive down the cost per kill, even though it hasn’t yet. It’s plausible to think that it will drive down the cost per kill in scenarios of mass political killing, as I described above, but you might be right that it won’t.

                        The two really big changes, though, are not about making killing easier, but about making killing more persuasive, for two reasons. ① It allows the killing to be precisely focused on the desired target, for example enabling armies to kill only the officers of the opposing forces, only the men in a city, or only the workers at a munitions plant, rather than everybody within three kilometers; ② it allows the killing to be truly borderless, so that it’s very nearly as easy to kill the officers’ families as to kill the officers — but only the officers who refuse to surrender.

                        You say “evil governments”, but killing people to break their will to continue to struggle is not limited to some subset of governments; it is the fundamental way that governments retain power in the face of the threat of invasion.

                        Covering a city with nets is surprisingly practical, given modern materials like Dyneema and Zylon, but not effective against all kinds of drones. I agree that underground fortresses and off-planet camps cannot save very many people, but perhaps they can preserve some seed of human civilization.

                      • kragen 1523 days ago
                        > You can buy a drone today, completely anonymously, strap some explosives to it, remotely fly it into someone and detonate it, a few hundred yards away from you

                        You can even drop a grenade from it; Daesh did that a few hundred times in 2016 and 2017: https://www.bellingcat.com/news/mena/2017/05/24/types-islami... https://www.lemonde.fr/proche-orient/article/2016/10/11/irak...

                        But that’s not face-recognition-driven, anonymous, long-range, or precision-guided; it might not even be cheap, considering that the alternative may be to lob the grenade by hand or shoot with a sniper rifle. If the radio signal is jammed, the drone falls out of the sky, or at least stays put, and the operator can no longer see out of its camera. As far as I know, the signal on these commercial drones is unencrypted, so there’s no way for the drone to distinguish commands from its buyer from commands from a jammer. Because the signal is emitted constantly, it can guide defenders directly to the place of concealment of the operator. And a quadcopter drone moves slowly compared to a thrown grenade or even a bottlerocket, so it’s relatively easy for the defenders to target.

                        > Does Slaughterbot-like product existence make it easier?

                        Yes.

                        > If some terrorist wants to kill a bunch of people, how is it easier than just detonating a truck full of C4?

                        Jeffrey Dahmer wanted to kill a bunch of people. Terrorists want to persuade a bunch of people; the killing is just a means to that end. Here are seven advantages to a terrorist of slaughterbots over a truck full of C4:

                        1. The driver dies when they set off the truck full of C4.

                        2. The 200 people killed by the truck full of C4 are kind of random. Some of them might be counterproductive to your cause — for example, most of the deaths in the 1994 bombing of the AMIA here in Buenos Aires were kindergarten-aged kids, which helps to undermine sympathy for the bombers. By contrast, with the slaughterbots, you can kill 200 specific people; for example, journalists who have published articles critical of you, policemen who refused to accept your bribes (or their family members), extortion targets who refused to pay your ransom, neo-Nazis you’ve identified through cluster analysis, drone pilots (or their family members), army officers (or their family members), or just people wearing clothes you don’t like, such as headscarfs (if you’re Narendra Modi) or police uniforms (if you’re an insurgent).

                        3. A truck full of C4 is like two tonnes of C4. The Slaughterbots video suggests using 3 grams of shaped explosive per target, at which level 600 grams would be needed to kill 200 people. This is on the order of 2000 times lower cost for the explosive, assuming there’s a free market in C4. However...

                        4. A truck full of C4 requires C4, which is hard to get and arouses suspicion in most places; by contrast, precision-guided munitions can reach these levels of lethality without such powerful explosives, or without any explosives at all, although I will refrain from speculating on details. Certainly both fiction and the industrial safety and health literature are full of examples of machines killing people without any explosives.

                        5. A truck full of C4 is large and physically straightforward to stop, although this may require heavy materials; after the AMIA truck bombing, all the Jewish community buildings here put up large concrete barricades to prevent a third bombing. So far this has been successful. (However, Nisman, the prosecutor assigned to the AMIA case, surprisingly committed suicide the day before he was due to present his case to the court.) A flock of autonomous drones is potentially very difficult to stop. They don’t have to fly; they can skitter like cockroaches, fall like Dragons’ Teeth, float like a balloon, or stick to the bottoms of the cars of authorized personnel.

                        6. You can prevent a truck bombing by killing the driver of the truck full of C4 before he arrives at his destination, for example if he tries to barrel through a military checkpoint. In all likelihood this will completely prevent the bombing; if he’s already activated a deadman switch, it will detonate the bomb at a place of your choosing rather than his, and probably kill nobody but him, or maybe a couple of unlucky bystanders. By contrast, an autonomously targeted weapon, or even a fire-and-forget weapon, can be designed to continue to its target once it is deployed, whether or not you kill the operator.

                        7. Trucks drive 100 km/hour, can only travel on roads, and they carry license plates, making them traceable. Laima, an early Aerosonde, flew the 3270 km from Newfoundland to the UK in 26 hours, powered by 5.7 ℓ of gasoline, in 1998 — while this is only 125 km/hour, it is of course possible to fly much faster at the expense of greater fuel consumption. Modern autonomous aircraft can be much smaller. This means that border checkpoints and walls may be an effective way to prevent trucks full of C4 from getting near their destination city, but they will not help against autonomous aircraft.

                        > How about governments? I don’t see it - if a government wants someone dead, they will be dead (either officially, e.g. Bin Laden, or unofficially, Epstein-style). If a government wants a bunch of people dead, the difficulty lies not in technology, but in PR.

                        This is far from true. The US government has a list of people they want dead who are not yet dead — several lists, actually, the notorious Disposition Matrix being only one — and even Ed Snowden and Julian Assange are not on them officially. Killing bin Laden alone cost them almost 10 years, two failed invasions, and the destruction of the world polio eradication effort; Ayman al-Zawahiri has so far survived 20 years on the list. Both of the Venezuelan governments want the other one dead. Hamas, the government of the Gaza Strip, wants the entire Israeli army dead, as does the government of Iran. The Israeli government wanted the Iranian nuclear scientists dead — and in that case it did kill them. The Yemeni government, as well as the Saudi government, wants all the Houthi rebels dead, or at least their commanders, and that has been the case for five years. The Turkish government wants Fethullah Gulen dead. Every government in the region wanted everyone in Daesh dead. In most of these cases no special PR effort would be needed.

                        Long-range autonomous anonymous drones will change all that.

                        > sending a bunch of agents to arrest and execute people is just as effective. ... As for precise military strikes - we’re already doing it with drones, so nothing new here.

                        Sending a bunch of agents is not anonymous or deniable, and it can be stopped by borders; I know people who probably only survived the last dictatorship by fleeing the country. It’s also very expensive; four police officers occupied for half the day is going to cost you the PPP equivalent of about US$1000. That’s two orders of magnitude cheaper than a Hellfire missile (US$117k) but three orders of magnitude more expensive than the rifle cartridge those agents will use to execute the person. The cost of a single-use long-range drone would probably be in the neighborhood of US$1000, but if the attacker can reuse the same drone against multiple targets, they might be able to get the cost down below US$100 per target, three orders of magnitude less than a Hellfire drone strike.

                        It’s very predictable that as the cost of an attack goes down, its frequency will go up, and it will become accessible to more groups.

                        (Continued in a sibling comment, presently displayed above.)

            • imtringued 1523 days ago
              Real weapons are not like that. They are expensive, they can fail to kill their target, and they can also cause collateral damage. If death notes were as easy to obtain as guns, there would clearly be an increase in homicides, but that's not true of military missiles.

              The Slaughterbots video is absolutely awful. First of all, quadcopters have an incredibly small payload capacity and limited flight time. A quadcopter lifting a shaped charge would be as big as your head and have five minutes of flight time. Simply locking your door and hiding under your bed would be enough to stop them. The AI aspect doesn't make them more dangerous than a "smart rifle" that shoots once the barrel points at a target.

              Do you know what I am scared of? I am more scared of riot police using 40mm grenade launchers with "non-lethal" projectiles, knowingly aiming them at my face even though their training clearly says these weapons should never be aimed at someone's head. The end result is lost eyeballs and sometimes even deaths, and the people targeted aren't just those protesting violently in a large crowd. Peaceful bystanders and journalists who were not involved have also become victims of this type of police violence. [0]

              [0] https://www.thelocal.fr/20190129/france-in-numbers-police-vi...

              • kragen 1521 days ago
                Before you had posted your comment, I had already explained in https://news.ycombinator.com/item?id=22394213 why everything in it is wrong, except for the first line.

                As for the first line, you assert that real weapons are expensive, unreliable, and kill unintended people. Except in a de minimis sense, none of these are true of knives. Moreover, you seem to be reasoning on the basis of the premise that future technology is not meaningfully different from current technology.

                In conclusion, your comment consists entirely of wishful and uninformed thinking.

            • 3pt14159 1523 days ago
              Eh, there is a difference between the examples you've cited and Costa Rica. They're an ally of the US and a strong democracy focused on tourism.

              > The logical extreme of this is Death Note

              I don't really deal in logical extremes. That leads to weird philosophies like Objectivism or Stalinism. In international-relations terms, I'm a liberal with a dash of realism and constructivism. I don't live in my ideal world. My ideal world doesn't have torture or murder or war of any kind. It doesn't have extreme wealth inequality or poverty. Unless this is all merely a simulation, I live in the real world. Who has the power to kill people? Lots of people. Everyone driving a car or carrying a gun. Billions of people. It's a matter of degree and targeting and justification and blowback and economics and ethics and so many other things that it's not really sensible to talk about in the abstract.

              I'm familiar with the arguments against AI being used on the battlefield, but even though I abhor war, I'm not convinced that there should be a ban.

        • dahart 1524 days ago
          Of course there is a valid question about the morals of war technology. You are absolutely right about that, and I am not even remotely suggesting otherwise. Like I said, I don’t think I would ever choose to work on it.

          There’s a vast chasm in between right and wrong though. There can be understanding of others’ perspectives, regardless of my personal judgement. And there is also a valid question and tightly related question here about the morals of mitigating damage during a military conflict, especially if the mitigation prevents innocent deaths. If there’s a hard moral line between doctors and cooks and drivers and snipers and drone programmers, I don’t know exactly where it lies. Doctors are generally considered morally good, even war doctors, but if we are at war, it’s certainly better to prevent an injury than to treat one.

          The best goal in my opinion is no war.

        • prostheticvamp 1523 days ago
          The US was last attacked in living memory; Pearl Harbor survivors still number > 0.

          I will leave the WTC attack on the table, as I’m not interested in a nitpicking tangent about what constitutes an attack in asymmetric warfare vs. “terrorism.”

          “The modern era” is usefully vague enough to be unfalsifiable.

        • samatman 1523 days ago
          In practice, Costa Rica has a standing military. It's just the US military.

          Due to the Monroe Doctrine, this is a rational stance for Costa Rica to take. If the US were to adopt this policy, Costa Rica might have to take a hard look at repealing it.

      • DavidVoid 1523 days ago
        > New technology, he truly believed, was preventing innocent civilians from being killed.

        Drones and missiles are definitely a step forward compared to previous technology in many regards, but I can't help but be reminded of people who argued that the development and use of napalm would reduce human suffering by putting an end to the war in Vietnam faster.

        For an interesting and rather nuanced (but not 100% realistic) view on drone strikes, I'd recommend giving the 2015 movie Eye in the Sky a watch.

        Another issue with drone strikes and missiles is "the bravery of being out of range": it's easier to make the decision to kill someone who you're just watching on a screen than it is to look a person in the eyes and decide to have them killed.

    • jmpman 1524 days ago
      Straight out of college, I was offered a job writing software for missiles. Extremely interesting area, working for my adjunct professor’s team, who I highly admired and whose class was the best of my college career. The pay was on par with all my other offers. I didn’t accept for two reasons.

      First, I logically agreed that the missiles were supporting our armed services and I believed that our government was generally on the right side of history and needed the best technology to continue defending our freedoms. However, a job, when executed with passion, becomes a very defining core of your identity. I didn’t want death and destruction as my core. I support and admire my college friends who did accept such jobs, but it just wasn’t for me.

      Second, I had interned at a government contractor (not the missile manufacturer), and what I saw deeply disturbed me. I came on to a project which was 5 years into a 3 year schedule, and not expected to ship for another 2 years. Shocked, I asked my team lead "Why didn't the government just cancel the contract and assign the work to another company?", her reply, "If they did that, the product likely wouldn't be delivered in under two years, so they stick with us". I understood that this mentality was pervasive, and would ultimately become part of me, if I continued to work for that company. That mentality was completely unacceptable in the competitive commercial world, and I feared the complacency which would infect me and not prepare me for the eventual time when I'd need to look for a job outside that company. As a graduating senior, I attended our college job fair, and when speaking with another (non-missile) government contractor, I told the recruiter that I was hesitant to work for his company because I thought it wouldn't keep me as competitive throughout my career. I repeated the story from my internship, and asked if I'd find the same mentality at his company. His cheerful recruiter facade dropped; he pulled me aside and sternly instructed "You should never repeat that story". I took that as an overwhelming "yes". So my concern was that, working for this missile manufacturer, this government-contractor mentality would work its way into their company (if it hadn't already), and it would be bad for my long-term career. I wanted to remain competitive on a global commercial scale, without relying upon government support.

      • newscracker 1524 days ago
        > I came on to a project which was 5 years into a 3 year schedule, and not expected to ship for another 2 years. Shocked, I asked my team lead “Why didn’t the government just cancel the contract and assign the work to another company?”, her reply, “If they did that, the product likely wouldn’t be delivered in under two years, so they stick with us”. I understood that this mentality was pervasive, and would ultimately become part of me, if I continued to work for that company. That mentality was completely unacceptable in the competitive commercial world, and I feared the complacency which would infect me and not prepare me for the eventual time when I’d need to look for a job outside that company.

        Software for any system is complex, and it's quite common for almost every software project to run behind schedule. The Triple Constraint - "schedule, quality, cost: pick any two" - doesn't even fit software engineering in any serious endeavor, because it's mostly a "pick one" scenario.

        If you’ve worked on projects where all these three were met with the initial projections, then whoever is estimating those has really made sure that they’ve added massive buffers on cost and time or the project is too trivial for a one person team to do in a month or two.

        The entire reason Agile emerged as a methodology was the recognition that requirements change all the time, and that the "change is the only constant" refrain should be taken in stride by adapting how teams work.

        • AtlasBarfed 1523 days ago
          I vehemently and violently disagree!

          The average project achieves 1.5 of the triples.

          Here are the true constraints though:

          - Schedule
          - Meets requirements
          - Cost
          - Process
          - Usefulness/polish

          Yes, usefulness and "meets requirements" aren't the same thing, and anyone who has endured the madness of large-scale enterprise software will be nodding their head.

          What really bogs down most software projects is that "quality" means different things to different actors. Project managers want to follow process and meet political goals. Users want usefulness, polish, and efficiency. Directors and management want the requirements they dictate fulfilled (often reporting and other junk that doesn't add to ROI).

          And that's why I like to say "pick two".

      • Razengan 1524 days ago
        > he pulled me aside and sternly instructed “You should never repeat that story”

        We really need more exposure for the things that people like that want to silence...

        • prostheticvamp 1523 days ago
          He wasn’t silencing anyone. There were no black suits with billy clubs outside.

          He was warning the kid that if he went around repeating that aloud, he'd burn himself on the interview trail as someone too naive to toe the corporate line and likely to reveal embarrassing workplace details to outsiders.

          He was doing the naive youngster a favor, before he could hurt his own career.

          The use of the phrase “people like that” is pretty much always pejorative, in a story where a guy who owes the student absolutely nothing took a moment to warn him “don’t touch the stove, you’ll burn yourself”.

          So it’s become a story about government contractors instead of a story about “how I fucked up my job search as a new grad.”

          Thank you, random kind recruiter guy.

          • jmpman 1523 days ago
            Nah, I was smarter than that. I already had multiple offers, and had no intention of working for a government contractor. I mostly wanted to see how this recruiter would react to my flippant statement. If he had vehemently defended his company, it would have implied that government contracting as a whole wasn't quite as dysfunctional as I'd experienced. His reaction basically confirmed my suspicions. It was a "don't talk too loud about what we all know is going on", along with a "how dare you unmask us". Sure, it was also a "holy hell, you can't talk to recruiters like that".

            But, no black suits with billy clubs.

            And, I’m not suggesting that anything was out of norm for any of these government contractors. They’re delivering a very specialized service with immense regulations. There are very few companies which can produce the same product, so the competition is low and the feedback loop in procurement cycles is much longer.

      • throwaway462564 1524 days ago
        > First, I logically agreed that the missiles were supporting our armed services and I believed that our government was generally on the right side of history and needed the best technology to continue defending our freedoms

        I hope you are not writing about the US government. I don't think the US military can be described as protecting our freedoms after interfering and starting wars all over the world in the past. Sadly, we are mostly the aggressors, not the defenders.

      • jnwatson 1524 days ago
        Sticking with a vendor even though they are very late is quite common among even non-government programs.

        Big projects are hard, and they are frequently late. The fact that it is for the government is largely beside the point.

      • tomcam 1523 days ago
        Thank you for a nuanced and very well explained set of reasons. This is a difficult subject to handle dispassionately here and you did an admirable job.
      • AtlasBarfed 1523 days ago
        > our government was generally on the right side of history

        Well, we are the victors, so far. But the war against ourselves is going quite well.

    • GuB-42 1524 days ago
      There are different kinds of killing machines. And accurate missiles are among the least bad.

      With the exception of nuclear weapons (that's another topic), missiles are designed to destroy one particular target of strategic importance and nothing more. They are too expensive as mass killing weapons, but they are particularly appropriate for defense.

      Without missiles, you may need to launch an assault, destroying everything on the way to the target, risking soldiers' lives, etc. Less accurate weapons mean higher yields to compensate, and so more needless destruction.

      War is terrible, but I'd rather have it fought with missiles than with mass produced bombs, machine guns, and worst of all, mines.

      • BiteCode_dev 1524 days ago
        On the other hand, making it easier to kill a target gives you an incentive to do just that instead of looking for an alternative solution.

        Case in point: the country with the best army in the world is currently also the one that goes to war the most.

    • jpmattia 1524 days ago
      > but from a moral standpoint, I don't think I could bring myself to do it.

      I've been in a similar situation, and I think there is something important to think about: Assuming you'd be working for the defense of a country with a track record of decency (at least a good fraction of the time anyway), you have to decide what people you want taking those jobs.

      Is it better that all of the people with qualms refuse to take the positions? ie Do you want that work being done by people with no qualms? Because that sounds pretty terrible too.

      • GordonS 1522 days ago
        > Assuming you'd be working for the defense of a country with a track record of decency (at least a good fraction of the time anyway)

        Yes, this is the kicker for me. My country does not have such a record. If it did, the hypothetical quandary would still exist, but it would be much diminished.

    • Twirrim 1523 days ago
      > A bit OT, but I wonder how I'd feel if I was offered a job working on software for missiles.

      At one stage in my career I had an opportunity to go work for Betfair. I knew several people there and could have bypassed most or all of the interview process. At the time it was a rapidly growing online gambling company, not yet the major company it is now. They were paying about half as much again over my existing salary, and technology-wise it would have been a good opportunity.

      I ended up having quite a long conversation with a few co-workers about the morality of it. I was against it, for what I thought were pretty obvious reasons: the house always wins, and gambling is an amazing con built on destroying lives. I didn't want to be a part of that, much like I wouldn't work for a tobacco company, an oil company, etc. Co-workers took what they saw as a more pragmatic perspective: gamblers gonna gamble, it doesn't matter if the site is there or not.

      • jmpman 1523 days ago
        The reality behind corporate casinos is a bit more disheartening. Using analytics from their "players club" cards, they know what zip code you're from, and based on that they approximate your income. They know that if you lose too much, your wife isn't going to let you return to Vegas. Each income level has a certain pain threshold for how much you can lose in a year. The casinos work very hard to ensure you don't go over that limit.

        If you’re on a bad losing streak, they’ll send a host over to offer you tickets to a show or a buffet. The goal is to get you AWAY from gambling. They know you’re an addict, but want to keep the addiction going.

        That’s where they cross the ethical line.

    • enriquto 1524 days ago
      > A bit OT, but I wonder how I'd feel if I was offered a job working on software for missiles.

      Unless you are an extreme pacifist (which is a perfectly reasonable thing to be), you'll acknowledge the legitimate need for your country to have an army. In that case, the army is better equipped with missiles than with youths carrying bayonets. And then there's nothing wrong with providing those missiles with technologically advanced guidance systems.

      On the other hand, if I worked in "algorithmic trading" or fancy "financial instruments" I would not be able to sleep at night without a fair dose of drugs.

      • GordonS 1524 days ago
        It's not that I'm a pacifist, but more that I don't trust my government (UK, but I have the same issues with the US gov too) to do justifiable things with them.

        If they were for defense only, I might be able to do it. But instead they are sold to any government with the means to pay, regardless of their human rights record or how they will be used (e.g. Saudi). Aside from selling them on, they are used in conflicts that are hard to justify, beyond filling the coffers of the rich and powerful. Take the latest Iraq war for example: started based on falsified evidence, hundreds of thousands dead, atrocities carried out by the west, schools bombed, incidents covered up...

        Given these realities, I just couldn't do it.

        My original musing was more thinking along the lines of an ideal world, where I trusted my government; I'm still not sure I could do it.

      • fancyfredbot 1523 days ago
        I suppose we have the 2008 crisis to thank for creating a popular view that finance is an entirely morally corrupt industry. Perhaps it's not surprising given the role "fancy financial instruments" played there. All the same, it strikes me as strange to find moving risk around to be more morally difficult than designing a missile - moving risk around is at least sometimes straightforwardly beneficial for everyone involved, a missile strike less so...
    • cushychicken 1524 days ago
      I recently interviewed, and was offered a job at, Draper Labs in Cambridge MA.

      The technical work was super interesting. Everyone I spoke to was plainly super sharp, and not morally bankrupt. I had moral concerns similar to yours but, truthfully, I don't really have much of a personal ethical problem with it. I was a little more concerned about having to explain it to all of my friends, many of whom lean substantially more liberal in their political views than I do.

      Perception, and the pay cut I'd have to take from my current work, ended up being the major things that stopped me from taking it.

      • TedDoesntTalk 1523 days ago
        First time I’ve heard of someone accepting or not accepting a job based on peer perception. Maybe you should re-evaluate who your peers are if they can’t accept you for your career choices?
        • cushychicken 1518 days ago
          That's a pretty reductive and reactionary comment, and I feel like I'm succumbing to some of my worst impulses by responding to it four days later. But, clearly, some point in there has touched a nerve, so here goes:

          For one thing: it missed (or ignored, but I'll default to "missed") my point about the large pay cut involved.

          For another: it stack-ranked peer perception as more important in my decision-making than the pay cut. My original comment certainly appears to value perception over pay. That's a huge miscommunication of my priorities, and my fault. I wasn't about to take a 30% pay cut. The fact that I'd also have to withstand the negative scrutiny of my friends and family just made it that much easier to decline.

          For a third: we all hope to do work that we can be proud of. Part of that pride is being able to hold up the fruits of our labor to others and take pride in having participated in it. I don't think I could have done that without thinking twice had I taken that job - and I'm not just talking about the security clearance angle here. Dealing with the negative reactions from my friends and family would have been a problem for me. Not the biggest one, but a problem. Acceptance is a big precursor to feeling safe and self-actualized. I feel the acceptance of a group of peers now. I do not want to trade that for more intellectually challenging work with an ethical component that my friends and family find questionable. I'd apply this reasoning in equal part to a role that paid more but was just as questionable. (I don't want to get rich building technology that enables the next Bernie Madoff, for example.)

          Perhaps I'm of a weaker intellectual constitution than you are to be so easily influenced by the opinion of other people. However, I view that mental flexibility as a strength. I also trust the opinion (and, by extension, the underlying moral character) of my peers, who have been a positive moral force in my life. Hence, it was important to me not to compromise their trust and acceptance of me, and my career choices.

        • DavidVoid 1523 days ago
          Or maybe they trust/value their peers' judgement despite the fact that they themselves don't have any strong views on the subject?
          • TedDoesntTalk 1523 days ago
            Relying on others' opinions to make one of the most important decisions in your life -- where you will spend 40 or more hours a week -- is pathetic. It's one thing to trust and value your peers' judgement, but it's quite another for their opinions to make your decisions for you. Good luck with that. Haha.
    • TedDoesntTalk 1523 days ago
      To offer a contrarian point of view: I’d jump at such an opportunity... to work in such a technically interesting area AND help keep my country technologically relevant. It’s a no-brainer.
      • GordonS 1523 days ago
        For me (and I would imagine a lot of people), it's far from a no-brainer.

        While it would undoubtedly be interesting from a technical standpoint, there is a serious moral conundrum - even in an ideal world where you trusted your government not to start wars based on flimsy or falsified evidence, start wars for profit, or sell weapons to less scrupulous governments.

        • TedDoesntTalk 1523 days ago
          That’s fine. I am expressing my opinion and you yours. I don’t trust my government with everything, but I’d much rather keep the status quo than see China or another country reign in my lifetime.
        • lonelappde 1523 days ago
          Would you rather live in a world dominated by USA or USSR or China or Nazi Germany?

          Remember you don't get to take away everyone's missiles.

          • GordonS 1523 days ago
            The world isn't binary; I don't think the options you laid out are the only possibilities.

            I take your point though, and I'd have much less of a dilemma if the missiles in question were not to be sold to other governments, and only to be used for domestic defence or a clear world-threat type scenario. Which for many Western countries is of course not going to happen.

    • mopsi 1524 days ago
      > I'm sure the technical challenge would be immensely interesting, and I could tell myself that I cared more about accuracy and correctness than other potential hires... but from a moral standpoint, I don't think I could bring myself to do it.

      Why? The more precise missiles are, the better. If no-one agreed to build missile guidance systems, we'd still have carpet bombing and artillery with 100m accuracy.

      • saagarjha 1524 days ago
        People might use them less, though.
        • TedDoesntTalk 1523 days ago
          How naive. More likely they'd be used just as often, but more civilians would die. See the Nazi V-1 and V-2 rockets, for example, which didn't have any software.
    • tomcam 1523 days ago
      I mean all due respect with this question. Not an attack. Do you think your country should have such missiles? If not, how would you handle the defense case in which your country is attacked but does not have them? Also note that most of Europe is defended by these missiles made in the USA.
      • int_19h 1523 days ago
        In the real world, the dilemma is more often, "do you think your country, and all other countries it considers allies, should have such missiles"?

        Right now, for example, Saudi Arabia is bombing Yemen with American-made bombs, and Turkey is using German tanks and Italian drones to grind Kurdish towns and villages into rubble in Syria.

      • GordonS 1523 days ago
        I've mentioned this a few times already in other comments, but I'd be in much less of a quandary about missiles used for defense purposes. Especially so if they could only be used for shooting down other missiles.
    • ezoe 1524 days ago
      Well, there are SAM systems, which are designed to kill missiles, not humans.

      That said, I think any software development that involves the government isn't fun at all, given all the bureaucracy and inefficiency.

    • lainga 1523 days ago
      Cue the picture of the protestors holding a sign reading "this machine kills innocents!"... next to a MIM-104. There are many types of missiles.
    • kebman 1524 days ago
      Oh, I'm sure you'd make a killing! :D <3
  • tyingq 1524 days ago
    Until the cruise missile shop down the hall decides to reuse your controller.
    • gameswithgo 1524 days ago
      If all software is built to protect against all possible future anticipated use cases, your software will take longer to make, perform worse, and be more likely to have bugs.

      If all software is built only to solve the problem at hand, it will take less time to develop, be less likely to have bugs, and perform better.

      It isn't clear that coding for reuse is going to get you a net win, especially since computing platforms (the actual hardware) are always evolving, such that reusing code some years later can become sub-optimal for that reason alone.

      • tyingq 1524 days ago
        Fair, but the leaks apparently weren't documented well, or the linked story wouldn't have read like it did.
      • eru 1524 days ago
        There's a middle ground. E.g. the classic Unix 'cat' (ignoring all the command-line switches) does something really simple and reusable, so it makes sense to make sure it does the Right Thing in all situations.
        • thaumasiotes 1524 days ago
          I mean, 'cat' does something so simple (apply the identity function to the input) that it has no need to be reusable because there's no point using it in the first place. If you have input, processing it with cat just means you wasted your time to produce something you already had.
          • derefr 1524 days ago
            The point of cat(1), short for concatenate, is to feed a pipeline multiple concatenated files as input, whereas shell stdin redirection only allows you to feed a shell a single file as input.

            This is actually highly flexible, since cat(1) recognizes the “-” argument to mean stdin, and so you can `cat a - b` in the middle of a pipeline to “wrap” the output of the previous stage in the contents of files a and b (which could contain e.g. a header and footer to assemble a valid SQL COPY statement from a CSV stream.)

            • thaumasiotes 1524 days ago
              But that is a case where you have several filenames and you want to concatenate the files. The work you're using cat to do is to locate and read the files based on the filename. If you already have the data stream(s), cat does nothing for you; you have to choose the order you want to read them in, but that's also true when you invoke cat.

              This is the conceptual difference between

                  pipeline | cat       # does nothing
              
              and

                  pipeline | xargs cat # leverages cat's ability to open files
              
              Opening files isn't really something I think of cat as doing in its capacity as cat. It's something all the command line utilities do equally.
              • derefr 1524 days ago

                    pipeline | cat    # does nothing
                
                This is actually re-batching stdin into line-oriented write chunks, IIRC. If you write a program to manually select(2) + read(2) from stdin, then you’ll observe slightly different behaviour between e.g.

                    dd if=./file | myprogram
                
                and

                    dd if=./file | cat | myprogram
                
                On the former, select(2) will wake your program up with dd(1)’s default obs (output block size) worth of bytes in the stdin kernel buffer; whereas, on the latter, select(2) will wake your program up with one line’s worth of input in the buffer.
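
                If you want to see this yourself, here's a minimal probe along those lines (my sketch, nothing standard): it reports how many bytes each select(2) wakeup finds waiting on stdin.

                    /* probe.c: report bytes available per select(2) wakeup */
                    #include <stdio.h>
                    #include <unistd.h>
                    #include <sys/select.h>

                    int main(void) {
                        char buf[65536];
                        for (;;) {
                            fd_set rfds;
                            FD_ZERO(&rfds);
                            FD_SET(STDIN_FILENO, &rfds);
                            if (select(STDIN_FILENO + 1, &rfds, NULL, NULL, NULL) < 0)
                                return 1;                   /* select failed */
                            ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
                            if (n <= 0)
                                break;                      /* EOF or error */
                            fprintf(stderr, "woke with %zd bytes\n", n);
                        }
                        return 0;
                    }

                Run it as `dd if=./file | ./probe` and then as `dd if=./file | cat | ./probe`, and compare the reported chunk sizes.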

                Also, if you have multiple data streams, by using e.g. explicit file descriptor redirection in your shell, a la

                    (baz | quux) >&4
                
                ...then cat(1) won’t even help you there. No tooling from POSIX or GNU really supports consuming those streams, AFAIK.

                But it’s pretty simple to instead target the streams into explicit fifo files, and then concatenate those with cat(1).

                • thaumasiotes 1523 days ago
                  > Also, if you have multiple data streams, ...then cat(1) won’t even help you there.

                  I've been thinking about this more from the perspective of reusing code from cat than of using the cat binary in multiple contexts. Looking over the thread, it seems like I'm the odd one out here.

          • eru 1523 days ago
            In addition to what the other commenters pointed out about cat being able to concatenate, even using cat as the identity function is useful. Just as the number zero is useful.
        • gameswithgo 1524 days ago
          For sure, if you can apply a small amount of effort for a high probability of easy re-usability, do it. But if you start going off into weird abstract design land to solve a problem you don't have yet, while it might be fun, probably you should stop. At least if it is a real production thing you are working on.
          • eru 1523 days ago
            I guess it depends a bit on the shape of your abstract design land. Sometimes it can give you hints about how your API should look, or what's missing.
    • stefan_ 1524 days ago
      Remember when the CIA contracted with Netezza to improve their predator drone targeting, who then went and reverse-engineered some software from their ex business partner IISI and shipped that?

      IISi’s lawyers claimed on September 7, 2010 that “Netezza secretly reverse engineered IISi’s Geospatial product by, inter alia, modifying the internal installation programs of the product and using dummy programs to access its binary code [ … ] to create what Netezza’s own personnel referred to internally as a “hack” version of Geospatial that would run, albeit very imperfectly, on Netezza’s new TwinFin machine [ … ] Netezza then delivered this “hack” version of Geospatial to a U.S. Government customer (the Central Intelligence Agency) [ … ] According to Netezza’s records, the CIA accepted this “hack” of Geospatial on October 23, 2009, and put it into operation at that time.”

      Reality is always more absurd, government agencies remain inept and corrupt even when shrouded in secrecy to cover up their missteps, and, by the way, Kubernetes now flies on the F-16.

    • DmitryOlshansky 1524 days ago
      Indeed.

      I think one of the big problems in software development is that nobody measures the half-life of our assumptions, that is, the amount of time it takes for half of the original assumptions to no longer hold.

      In my limited experience, the half-life of assumptions in software can easily be as low as a year, meaning that in 5 years only 1/32 of the original architecture would still make sense if we did not evolve it.

    • chapium 1524 days ago
      Just add the explode modifier to your classes and you should be good.
  • mojuba 1524 days ago
    One other class of applications that don't really require garbage collection is HTTP request handlers, when run as isolated processes. They are usually very short-lived; they can't even live longer than some maximum enforced by the server. For example, PHP takes advantage of this and allows you not to worry much about circular references.
    • sumanthvepa 1524 days ago
      I used to work at Amazon in the late 90s and this was the policy they followed. The Apache server module, written in C, would leak so much that the process had to be killed every 10 requests. The problem with the strategy was that it required a lot of CPU and RAM to start up a new process. Amazon's answer was to simply throw hardware at the problem. Growing the company fast was more important than cleaning up RAM. They did get around to resolving the problems a few years later with better architectures. This too was an example of good engineering trade-offs.
      • saagarjha 1524 days ago
        > The problem with the strategy was that it required a lot of CPU and RAM to startup a new process.

        It's not really kosher, but why not just keep around a fresh process that they can continually fork new handlers from?

        • Filligree 1523 days ago
          Setting it up was expensive, so there's a good chance it involved initializing libraries, launching threads, or otherwise creating state that isn't handled correctly by fork.
    • tyingq 1524 days ago
      "For example, PHP takes advantage of this"

      I imagine the various long-running PHP node-ish async frameworks curse this history. Though PHP 7 cleaned up a lot of the leaks and inefficient memory structures.

    • chapium 1524 days ago
      This is clearly not my subject area. Why would we be spawning processes for HTTP requests? This sounds awful for performance.

      My best guess is a security guarantee.

      • derefr 1524 days ago
        Not spawning, forking. Web servers were simple “accept(2) then fork(2)” loops for a long time. This is, for example, how inetd(8) works. Later, servers like Apache were optimized to “prefork” (i.e. to maintain a set of idle processes waiting for work, that would exit after a single request.)

        Long-running worker threads came a long time later, and were indeed intensely criticized from a security perspective at the time, given that they’d be one use-after-free away from exposing a previous user’s password to a new user. (FCGI/WSGI was criticized for the same reason, as compared to the “clean” fork+exec subprocess model of CGI.)

        Note that in the context of longer-running connection-oriented protocols, servers are still built in the “accept(2) then fork(2)” model. Postgres forks a process for each connection, for example.

        One lesser-thought-about benefit of the forking model, is that it allows the OS to “see” requests; and so to apply CPU/memory/IO quotas to them, that don’t leak over onto undue impacts on successive requests against the same worker. Also, the OOM killer will just kill a request, not the whole server.
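
        For the curious, the basic shape is tiny. A minimal sketch from memory, not any particular server's code (error handling elided, port number arbitrary):

            /* accept(2)-then-fork(2): one child per connection */
            #include <arpa/inet.h>
            #include <netinet/in.h>
            #include <signal.h>
            #include <string.h>
            #include <sys/socket.h>
            #include <unistd.h>

            int main(void) {
                int srv = socket(AF_INET, SOCK_STREAM, 0);
                struct sockaddr_in addr;
                memset(&addr, 0, sizeof addr);
                addr.sin_family = AF_INET;
                addr.sin_addr.s_addr = htonl(INADDR_ANY);
                addr.sin_port = htons(8080);
                bind(srv, (struct sockaddr *)&addr, sizeof addr);
                listen(srv, 16);
                signal(SIGCHLD, SIG_IGN);       /* let the kernel reap children */
                for (;;) {
                    int conn = accept(srv, NULL, NULL);
                    if (conn < 0)
                        continue;
                    if (fork() == 0) {          /* child: serve one request */
                        close(srv);
                        const char msg[] = "HTTP/1.0 200 OK\r\n\r\nhello\r\n";
                        write(conn, msg, sizeof msg - 1);
                        _exit(0);               /* leaks die with the process */
                    }
                    close(conn);                /* parent: back to accept(2) */
                }
            }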

        • mehrdadn 1524 days ago
          Thanks for that last paragraph, I'd never thought about that aspect of processes. Learned something new today.
      • tyingq 1524 days ago
        PHP these days doesn't fork and spawn a new process, though it does create a new interpreter context.

        In the old cgi-bin days, every web request would fork and exec a new script, whether PHP, Perl, C program, etc. That was replaced with Apache modules (or nsapi, etc), then later, with long running process pooled frameworks like fcgi, php-fpm, etc. Perl and PHP typically then didn't fork for every request. But did create a fresh interpreter context to be backward compatible, avoid memory leaks, etc. So there's still overhead, but not as heavy as fork/exec.

      • rgacote 1524 days ago
        The (web) world used to be synchronous. Traditional Apache spawns a number of threads and then keeps each thread around for x number of requests, after which the thread is killed and a new one spawned. Incredibly useful feature when you're on limited hardware and want to ensure you don't memory-leak yourself out of existence. Modern Apache has newer options (and of course nginx has traditionally been entirely async on multiple threads).
      • barrkel 1524 days ago
        Killing a process is much safer than killing a thread, and the OS does cleanup.

        It's not great for maximizing performance, but it's not 100s of milliseconds either; forking doesn't take long. What is slow is scripting languages loading their runtimes, but you can fork after the runtime is loaded. If hardware is cheaper than the opportunity cost of adding new features (rather than debugging leaks), it makes sense.

        • clarry 1523 days ago
          I measured less than half a millisecond to fork, print time, and wait for child to exit.

          http://paste.dy.fi/NEs/plain

          So forking alone doesn't cap performance too much; one or two cores could handle >1000 requests per second (billions per month).
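
          Something like this sketch (my rough reconstruction of that kind of measurement, not the linked paste):

              /* time one fork(2) + waitpid(2) round trip */
              #include <stdio.h>
              #include <sys/time.h>
              #include <sys/wait.h>
              #include <unistd.h>

              int main(void) {
                  struct timeval t0, t1;
                  gettimeofday(&t0, NULL);
                  pid_t pid = fork();
                  if (pid == 0)
                      _exit(0);                 /* child exits immediately */
                  waitpid(pid, NULL, 0);
                  gettimeofday(&t1, NULL);
                  long us = (t1.tv_sec - t0.tv_sec) * 1000000L
                          + (t1.tv_usec - t0.tv_usec);
                  printf("fork + wait: %ld us\n", us);
                  return 0;
              }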

  • matsemann 1524 days ago
    I made a project a few years back where I really had no idea what I was doing. [0] I had to read two live analog video feeds fed into two TV-cards, display them properly on an Oculus Rift, and then take the head tilting and send it back to the cameras mounted on a flying drone. I spent weeks just getting it to work, so my C++ etc. was a mess. The first demo leaked like 100 MB a second or so, but that meant it would work for about a minute before everything crashed. We could live with that. Just had to restart the software for each person trying, hehe.

    [0]: https://news.ycombinator.com/item?id=7654141

  • FpUser 1524 days ago
    "Since the missile will explode when it hits it's target or at the end of it's flight, the ultimate in garbage collection is performed without programmer intervention."

    I just can't stop laughing over this "ultimate in garbage collection". What a guy.

    Btw we dealt a lot with Rational in the 90's. I might have even met him.

  • ggambetta 1524 days ago
    Of course it's also expected to crash, especially the hardware :)
    • Igelau 1524 days ago
      Remote execution? It was the top requested feature!
  • geophile 1524 days ago
    The problem, of course, is that the chief software engineer doesn't appear to have any understanding of what is causing the leaks, and whether the safety margin is adequate. Maybe there is some obscure and untested code path in which leaking would be much faster than anticipated.

    To be sure, it is a unique environment, in which you know for a fact that your software does not need to run beyond a certain point in time. And in a situation like that, I think it is OK to say that we have enough of some resource to reach that point in time. (It's sort of like admitting that climate change is real, and will end life on earth, but then counting on The Rapture to excuse not caring.) But that's not what's going on here. It sounds like they weren't really sure that there would definitely be enough memory.

    • willvarfar 1524 days ago
      You are reading a lot into a short story. You don’t know that the engineer hasn’t had someone exactly calculate the memory allocations.

      Static or never-reclaimed allocations are common enough in embedded code.

    • clSTophEjUdRanu 1524 days ago
      Freeing memory isn't free, it takes time. Maybe it's not worth the time hit and they know exactly where it is leaking memory.
    • blattimwind 1524 days ago
      Actually, the story implies the opposite:

      > they had calculated the amount of memory the application would leak in the total possible flight time for the missile and then doubled that number.

  • b34r 1524 days ago
    I like the pragmatism. One thing that comes to mind, though, is that stuff often gets repurposed for unintended use cases... as long as these caveats are well documented it’s ok, but imagine if they were hidden and the missiles were used in space, or perhaps as static warheads on a long timer.
  • MaxBarraclough 1523 days ago
    On such systems the same approach can be taken for a cooling solution. If the chip will fatally overheat in 60 seconds but the device's lifetime is only 45, there's no need for a more elaborate cooling solution.

    The always-leak approach to memory management can also be used in short-lived application code. The D compiler once used this approach [0] (I'm not sure whether it still does).

    [0] https://www.drdobbs.com/cpp/increasing-compiler-speed-by-ove...

  • kebman 1524 days ago
    The garbage is collected in one huge explosion. And then even more garbage is made, so that's why we don't mind leaks...... xD
  • tjalfi 1524 days ago
  • raverbashing 1524 days ago
    And that's a common mentality among hardware manufacturers as opposed to software developers (you just need to see how many survived).

    (Not saying that the manufacturer was necessarily wrong in this case; doubling the memory might have added only a tiny manufacturing cost to something that was much more expensive.)

  • wbhart 1524 days ago
    Missiles don't always hit their intended target. They can go off course, potentially be hacked, fall into the wrong hands, be sold to mass murderers, fail to explode, accidentally fall out of planes (even nuclear bombs have historically done this), miss their targets, encounter countermeasures, etc.

    Nobody is claiming that this was done for reasons of good software design. It's perfectly reasonable to suspect it was done for reasons of cost or plain negligence.

    There's a reason tech workers protest involvement of their firms with the military. It's because all too often arms are not used as a deterrent or as a means of absolute last resort, but because they are used due to faulty intelligence, public or political pressure, as a means of aggression, without regard to collateral damage or otherwise in a careless way.

    The whole point here is the blase way the technician responded, "of course it leaks". The justification given is not that it was necessary for the design, but that it doesn't matter because it's going to explode at the end of its journey!

    • willvarfar 1524 days ago
      A simple bump allocator with no reclaim is fairly common in embedded code.

      Garbage collection makes the performance of the code much less deterministic.

      A lot of loops running on embedded in-order CPUs without an operating system even use cycle counts as their timing mechanism.
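
      The whole allocator can be a few lines. A minimal sketch, with an illustrative arena size you'd tune to the worst-case mission:

          /* bump allocation, no reclaim: free() simply doesn't exist */
          #include <stddef.h>
          #include <stdint.h>

          static uint8_t arena[256 * 1024];     /* sized (and doubled) up front */
          static size_t next_free;

          void *bump_alloc(size_t n) {
              n = (n + 7) & ~(size_t)7;         /* keep 8-byte alignment */
              if (next_free + n > sizeof arena)
                  return NULL;                  /* tuned so this never happens */
              void *p = &arena[next_free];
              next_free += n;                   /* allocation is a pointer bump */
              return p;
          }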

      • wbhart 1524 days ago
        Right, but that isn't the argument that was being used here, which is my point. The way I read it, the contractor cared only enough to get the design over the line so the customer would sign off on it. Their argument was that you shouldn't care about leaks due to scheduled deconstruction, not because of a technical consideration.

        There exist options between no reclaim and using a garbage collector which could be considered, depending on the exact technical specifications of the hardware it was running on and the era in which it happened.

        But retrofitting technical reasoning about why this may have been done is superfluous. The contractor already said why they did it, and the subtext of the original post is that it was flippant and hilarious.

        • ncmncm 1524 days ago
          Fetishism is not compatible with sound engineering.

          "Cared only enough" is just your projection. The contractor knew the requirements, and satified the requirements with no waste of engineering time, and no risk of memory reclamation interfering with correct operation. The person complaining about leaks wasted both his time and the contractor's.

          • Dylan16807 1524 days ago
            You had a good comment going until the last sentence.

            When your job is performing an analysis of the code, five minutes asking for a dangerous feature to be justified is ridiculously far from a "waste of time".

  • kleiba 1524 days ago
      Seems a bit unlikely to me. Intuitively, calculating how much memory a program will leak in the worst case should be at least as much effort as fixing the memory leaks. And if you actually calculated (as in, proved) the amount of leaked memory rather than just empirically measuring it, there's no need to install double the amount of physical memory.

    This whole procedure appears to be a bit unbelievable. And we're not even talking about code/system maintainability.

    • FreeFull 1524 days ago
      A memory allocator without the ability to free memory is a lot simpler and faster. Usually, though, I'd expect to see static allocation for this sort of code; I'm not sure why a missile would have to allocate more memory on the fly.
      • StupidOne 1524 days ago
        Because he needed more memory mid-air? :)

        Not sure if the pun was intended, but you gave me a good laugh.

    • daenz 1524 days ago
      >Intuitively, calculating how much memory a program will leak in the worst case should be at least as much effort as fixing the memory leaks.

      Why? I could calculate the average amount of leaking of a program much easier than I could find all the leaks. Calculating just involves performing a typical run under valgrind and seeing how much was never freed. Do that N times and average. Finding the leaks is much more involved.

      • kleiba 1523 days ago
        Did you catch the distinction I drew in my original post between calculating and measuring a leak?
    • nneonneo 1524 days ago
      Why is it hard to calculate? Suppose I maintain lots of complex calculations that require variable amounts of buffered measurements (e.g. the last few seconds, the last few minutes at lower resolution, some extrapolations from measurements under different conditions, etc.). Freeing up the right measurements might be really tricky to get right, and if you free a critical measurement and need it later you’re hosed.

      On the other hand, you can trivially calculate how many measurements you make per unit time, and multiply that by the size of the measurements to upper-bound your storage needs. Hypothetical example: you sample GPS coordinates 20 times per second at, say, 8 bytes per sample, which works out to ~160 bytes/sec, ~10,000 bytes/min, or around 600KB for a full hour of flight. Easy to calculate, hard to fix.
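
      The same bound as compile-time constants (the 8 bytes per sample is my assumption, back-derived from the 160 bytes/sec):

          enum {
              SAMPLES_PER_SEC  = 20,
              BYTES_PER_SAMPLE = 8,
              FLIGHT_SECONDS   = 60 * 60,
              WORST_CASE_BYTES = SAMPLES_PER_SEC * BYTES_PER_SAMPLE * FLIGHT_SECONDS
          };  /* 20 * 8 * 3600 = 576000 bytes, i.e. roughly 600KB */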

      • ken 1524 days ago
        Are you taking into account memory fragmentation? Or the internal malloc data structures? If your record were just 1 byte more, it could easily double the total actual memory usage.

        Memory usage is discrete, not continuous. It's not as simple as calculating the safety factor on a rope.

        • cozzyd 1524 days ago
          If you don't free, malloc doesn't need all that overhead
    • barrkel 1524 days ago
      The control flow graph will most likely be a loop doing PID; I think it could be statically analysed.
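
      Hypothetically, something of this shape: one PID step per iteration, no allocation anywhere, so peak memory use is fixed at compile time.

          struct pid_state { float kp, ki, kd, integral, prev_err; };

          /* one control step; dt is the loop period and assumed > 0 */
          float pid_step(struct pid_state *s, float setpoint, float measured, float dt) {
              float err = setpoint - measured;
              s->integral += err * dt;
              float deriv = (err - s->prev_err) / dt;
              s->prev_err = err;
              return s->kp * err + s->ki * s->integral + s->kd * deriv;
          }
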
      • kleiba 1523 days ago
        And why couldn't it be freed?
  • 32gbsd 1524 days ago
    It is all good until people start to depend on these memory leaks and then you are stuck with a platform that is unsupported.
  • lallysingh 1523 days ago
    Are these Patriots? Didn't they need a power cycle every 24 hours? Is this why?
  • simonebrunozzi 1524 days ago
    > the ultimate in garbage collection is performed without programmer intervention.

    Brilliant.

  • MrBuddyCasino 1524 days ago
    Why go through the trouble of

    a) calculating maximum leakage

    b) doubling physical memory

    instead of just fixing the leaks? Was it to save cycles? Prevent memory fragmentation? I feel this story misses the details that would make it more than just a cute anecdote.

    • daenz 1524 days ago
      "just fixing the leaks" can be a very time consuming process, involving hunting and refactoring (valgrind isn't perfect). It's very possible that just throwing more memory at it with the soft guarantee that the leak won't result in OOM may have been the best business decision for that particular contract. Of course it's not the "right" way to build a thing, but sometimes the job wants the thing now and "good enough."
    • mantap 1524 days ago
      It's possible it may have been running on bare metal without an OS. Maybe they didn't want to verify a memory allocator and just treated the whole program as one big arena. I presume by "calculating" they meant "run the program in worst case conditions and see how much memory it uses".
    • ajuc 1524 days ago
      It was probably more efficient. Fixing leaks often requires copying instead of passing pointers.
    • kelvin0 1524 days ago
      I feel the same way, dunno why the downvotes? In the absence of all other details it just seems like shoddy work, but of course reality is probably more nuanced... which is what's missing from the story.
      • ratboy666 1523 days ago
        I think the story isn't nuanced. The program runs once, the missile explodes, garbage collection is done!

        No need for garbage collection, no need for "memory management". Not shoddy work; an expression of "YAGNI". The interesting thing, in my opinion, is the realization. The teller of the story went to the trouble of discovering that memory was leaking. She could have simply asked before starting the work.

        FredW

    • qtplatypus 1524 days ago
      There is CPU overhead in detecting where the memory goes out of scope and freeing it, so it can be a memory vs. CPU optimisation.
    • samatman 1523 days ago
      It's a missile.

      The memory is going to fragment no matter what you do.

  • DagAgren 1524 days ago
    What a cute story about writing software to kill people by shredding them with shrapnel.
    • DoofusOfDeath 1524 days ago
      There's a difference between delighting in war vs. accepting it as sometimes the lesser of two evils. I'm okay with discussing the software-engineering considerations needed to support the latter.
    • daenz 1524 days ago
      Missiles are also used for defense to intercept threats.
      • ptx 1523 days ago
        Those threats are sometimes an attempt at retaliation by whoever was attacked earlier by those now defending against the counter-attack, who are now free to attack without fear of the consequences thanks to the missile defense system.
      • berns 1524 days ago
        Thank you. I hadn't thought of that possibility. Now I can join the others in discussing the optimal memory management strategy for missile controllers.
    • swalsh 1524 days ago
      What if the bomb is landing on someone who intends to kill you?
      • pietrovismara 1524 days ago
        Have you watched Minority Report? What could go wrong with preemptively punishing crimes!
        • colonCapitalDee 1523 days ago
          This is a strawman. Nobody uses missiles for law enforcement; the very idea is ridiculous. Presumably the

          > person who intends to kill you

          in this context is a terrorist hiding in a cave somewhere, or a certain Iranian general. Now, it's definitely debatable whether striking those targets is morally correct (and I usually don't believe that it is), but it's silly to equate a military strike with the type of law enforcement seen in Minority Report.

          • DagAgren 1522 days ago
            A "terrorist hiding in a cave" is the actual strawman here.
            • colonCapitalDee 1520 days ago
              Uhh who do you think missiles are usually launched at?
              • DagAgren 1519 days ago
                People going about their lives just like you and me, surrounded by other people, mostly.

                They may be terrorists, but that does not make their daily lives much different to ours.

  • djsumdog 1523 days ago
    My undergraduate mentor took a co-op position one year in Huntsville, Alabama. He told me about 6-processor missile guidance systems that cost tens of thousands of dollars ... all to guide a missile to where it gets blown up.
  • zozbot234 1524 days ago
    (1995) based on the Date: and (plausibly) References: headers in the OP.
  • lala26in 1524 days ago
    One reason I open HN almost every day is that some top items consistently catch my attention. They are thought-provoking. On today's (now) HN I see 3-4 such items. :)
  • cryptoscandal 1523 days ago
    yes