The search for easier safe systems programming

(sophiajt.com)

193 points | by praseodym 11 days ago

20 comments

  • estebarb 11 days ago
    I don't get why people are scared of GC. After working in embedded software, using mostly C, it was evident from profiling that C programs spend a lot of time allocating and releasing memory. And those programs usually release memory in place, unlike GC languages where the release is deferred, done in parallel, and with time-boxed pauses. Most "systems" programs use worse memory management strategies than what a modern GC actually offers.

    Sure, some devices require using static memory allocations or are quite restricted. But a lot of other "system programming" targets far more capable machines.

    • refulgentis 11 days ago
      I was stunned by how many engineers I met at Google who didn't know memory allocation takes a ton of time, didn't fundamentally believe it when you told them, and just sort of politely demurred on taking patches for it: first get me data, then walk me through the data, then show me how to get the data myself, then silence, because profiling ain't trivial.

      There's a 3x speedup sitting in one library because it's a sea of carefully optimized functions manipulating floats, and someone made a change to extract a function that returned an array.

      It was such a good Chesterton's fence: the change seemed straightforward and obvious, better readability. But it had cascading effects that essentially led to 9 on-demand array allocations in the midst of about 80 LOC of pure math.

      In my experience, there's hangover from two concepts:

      - Android does garbage collection and Android is slower than iOS

      - You can always trade CPU time for memory and vice versa (ex. we can cache function results, which uses more memory, but less CPU)

      • toast0 11 days ago
        > Android does garbage collection and Android is slower than iOS

        There's a lot of things going on that make this the case.

        Android was built with unrestrained multitasking; there's a lot of restraint now, but there's probably still a ton of stuff going on from unrelated processes while your foreground app is chugging along. iOS locks down background execution, and I suspect system apps have limited background execution as well. Less task switching and less cache pollution help a lot.

        iOS only runs on premium phones. Even Apple's lower priced phones are premium phones. Android isn't necessarily quick on premium phones, but it can be much worse on phones designed to a $50 price point.

        IMHO, a large issue is that the official UI toolkits are terribly slow. I'm more of a backend person, but I put together a terrible weather app, and first paint is very slow, even though it starts with placeholder data (or serialized data from a previous run); using an HTML view was much faster, because it only makes one toolkit object, and more capable, because CSS layout lets you wrap text around an image nicely. Maybe there's some trick I'm not aware of to make 'native ui' not terrible on Android... but that experience helped me understand why everything is so slow to start.

        • mst 11 days ago
          If it's slower than a webview, I strongly suspect you're holding it wrong.

          However "if you do it the naive/obvious way, it performs this badly" is not a good property of a system and still probably explains why so many things are dog slow to get going.

          (I doubt I'd do any better than you did myself, mind, I'm pretty sure I have even less idea what I'm doing on android than you do)

          I'd be willing to bet the price of a pint that if either of us flailed around on iOS instead we'd probably end up with much better startup time even hardware-for-hardware because the easiest path for us to flail down is probably much saner than the equivalent on android.

          (insert rant about APIs needing to make sure the natural path that a newbie takes is also a path that produces -good- results even if they'll still need to git gud to achieve -excellent- results)

          • toast0 11 days ago
            > If it's slower than a webview, I strongly suspect you're holding it wrong.

            I sincerely hope you're right. But I was following the path as best I could, so I'm just not sure how it could have gone so wrong. I do 100% agree with your assessment of my flailing though. ;)

            It wasn't the only source of frustration (the weather API I was using looked good, but is actually crap, and their support blew me off when I pointed out data inconsistencies, so that's ugh). So it was easy to discard. Back to weather prediction by induction.

            • refulgentis 11 days ago
              I was an iOS dev for 7 years before G, then iOS app for Android watches for a year or two, then Android proper for...4 years? 100% cosign on the UI toolkit being.......ugh. There's a general saying internally at Google "$X is deprecated and $Y isn't ready yet", and I think that applies here, but I don't know Android UI as well as I'd like before damning it:

              I wrote Java + layouts for the one big UI feature I did, then was one of the first to do Kotlin, then moved to Flutter* from then on.

              In that vein: I cannot describe intensely enough how much I disliked Kotlin, and I get the vibe Compose is not quite ready yet and isn't really moving the ball forward, as much as it's happy to be at general parity with SwiftUI. I really wish Flutter "won", it's such a beautiful and pleasant experience. Your job is a different job with hot reload.

              * Note: that was very unofficial; I'm almost certainly the only person working on Android proper who was writing Flutter, and it was because I ended up doing design-system-heavy work where having a web app to show designers things became paramount. No one was expecting much work on Android proper from me.

            • mst 10 days ago
              Reminds me of an entry from my quotefile:

              < HellKat> the clock itself is set to UTC but it has a weather facility too - if you set the GPS coordinates

              < HellKat> temps right, weather however is wrong, comp says cloudy, looking outside says raining

      • pjmlp 11 days ago
        > Android does garbage collection and Android is slower than iOS

        Thing is, not only has the Android runtime witnessed several JIT and GC refactorings since Dalvik was created (originally worse than Sony's and Nokia's J2ME implementations, regardless of Google's marketing otherwise), there are also many other factors contributing to Android's perceived slowness: badly coded UIs, sloppy programming, cheapskate OEMs with low-performance chips, and so on.

        On the other hand, on the iOS side, due to the lack of GC and OS paging, when memory goes bad, applications just die.

      • hgs3 10 days ago
        > memory allocation takes a ton of time

        Depends how you define "a ton of time". Good general-purpose memory allocation algorithms, like the TLSF algorithm [1], guarantee an O(1) bounded response time suitable for real-time use. As for your example, however: if someone introduces extra computation into math-heavy, hot-looping code, that is just sloppy development, as they're adding extra computation. That the extra computation is memory allocation is tangential.

        [1] https://www.researchgate.net/publication/4080369_TLSF_A_new_...

        • refulgentis 10 days ago
          In this context, it's not tangential, it's core.

          I described observing that programmers I've worked with struggled to understand that memory allocation could *ever* have a significant computational cost.

    • flohofwoe 11 days ago
      > that C programs spend a lot of time allocating and releasing memory

      ...then this code didn't get the point of manual memory management, which is to minimize allocations and move them out of the hot path. Unfortunately a lot of C and C++ code is still stuck in a 1990's OOP model where each individual object is explicitly constructed and destroyed, and each construction and destruction is linked to an alloc/free call. If you do this (or reach for reference counting, which isn't much better), a GC is indeed the better choice.
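
      For example, here is a minimal sketch in C of what "move allocations out of the hot path" means in practice (the function and variable names are made up for illustration):

          #include <stdlib.h>

          /* 1990's OOP style: one malloc/free pair per iteration. */
          void process_naive(size_t frames, size_t points) {
              for (size_t i = 0; i < frames; i++) {
                  float *scratch = malloc(points * sizeof *scratch);
                  if (!scratch) return;
                  /* ... pure math on scratch ... */
                  free(scratch);
              }
          }

          /* Allocation hoisted out of the hot path: one malloc, one free. */
          void process_hoisted(size_t frames, size_t points) {
              float *scratch = malloc(points * sizeof *scratch);
              if (!scratch) return;
              for (size_t i = 0; i < frames; i++) {
                  /* ... pure math on scratch, reused each iteration ... */
              }
              free(scratch);
          }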

      • estebarb 11 days ago
        Moving the allocations/releases out of the hot path seems a lot like a GC. Also, not all programs can be easily optimized by moving the allocations: databases are an example; usually you don't know how many rows you are going to touch until you process the query.

        • pjc50 10 days ago
          > Moving out the allocations/releases out the hot path seems too much like a GC

          Absolutely not. It just requires a certain amount of planning and puts work on the callers to ensure that they prepare a suitable allocation somewhere, which may be on the stack or inside some structure or array that already exists.

          The limit case is the common embedded style of statically allocating everything. Often combined with a ban on recursion and (manually or automatically enforced) limits on stack size. If you don't have a lot of bytes you can plan your usage of each of them individually.

          MISRA rule 20.4 just plainly states "Dynamic heap memory allocation shall not be used." The very opposite of GC.

          https://wwwfiles.iar.com/superh/guides/EW_MISRAC2004Referenc...
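
          A rough sketch of that caller-provides-storage style in C (the names are made up): the callee never allocates, and the storage here is static, in the embedded tradition:

              #include <stddef.h>

              typedef struct { int id; int value; } Reading;

              /* Callee fills caller-owned storage; no heap allocation anywhere. */
              size_t collect_readings(Reading *out, size_t cap) {
                  size_t n = cap < 3 ? cap : 3;   /* pretend 3 readings arrived */
                  for (size_t i = 0; i < n; i++)
                      out[i] = (Reading){ .id = (int)i, .value = 42 };
                  return n;
              }

              void poll_sensors(void) {
                  static Reading buf[64];         /* statically allocated, fixed size */
                  size_t n = collect_readings(buf, 64);
                  (void)n;                        /* ... consume the readings ... */
              }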

        • flohofwoe 11 days ago
          Tigerbeetle is famously a database that doesn't do dynamic memory allocation at all (after startup):

          https://tigerbeetle.com/blog/a-database-without-dynamic-memo...

        • Nevermark 10 days ago
          Technically, GC can be part of manual memory management. The "manual" doesn't mean any algorithm is ruled out - it just means the developer mindfully chooses the right algorithm for the right situation.

          As opposed to the language, compiler or system always making those decisions.

    • pjmlp 11 days ago
      It is the consequence of urban performance myths being propagated by gut feeling.
    • kbolino 11 days ago
      If a "systems" program can afford GC, then I would consider it mischaracterized. The whole premise of "systems" programming IMO is that you need the code to behave exactly in the way specified, without unexpected preemption (i.e., preemption can only occur at controlled locations and/or for controlled durations). I think a lot of "embedded" software really isn't "systems" software, though the two get conflated a lot.

      Note that GC cannot possibly promise "time-boxed pauses" in the general case; either allocations are allowed to outpace collections, in which case a longer collection pause will eventually be required to avoid memory exhaustion, or else allocations must be throttled, which just pushes an unbounded pause forward to allocation time.

      • pjmlp 11 days ago
        Xerox PARC, ETHZ, DEC Olivetti, Microsoft Research, Genera, and even Bell Labs had another understanding of systems programming.
        • kbolino 11 days ago
          Do tell
          • pjmlp 11 days ago
            Xerox PARC => Smalltalk, Mesa/Cedar, Interlisp-D

            ETHZ => Oberon, Oberon-2, Active Oberon, Component Pascal (via Oberon Systems, a startup spinoff out of ETHZ), Oberon-07.

            DEC Olivetti => Modula-2+, Modula-3

            Microsoft Research => Singularity, Midori

            Genera and TI => Lisp Machines

            Bell Labs => C@+ (really confusing name, one of the few sources are a couple of DDJ articles), Limbo

            • kbolino 10 days ago
              I am quite ignorant about how these systems actually worked. I have heard of Lisp Machines before; the Alto and Lilith seem similar in that they're purpose-built hardware focused on running particular languages (or families of languages). The others are kind of hard to find relevant information about. My cursory understanding of these environments is that the hardware did a lot "more" than modern micros and so the system software could enjoy many luxuries that a typical modern operating system has to build for itself. They also didn't work well with languages they weren't designed for.
              • pjmlp 10 days ago
                Doesn't change the fact that they were full stack OS experiences written in those languages, covering most scenarios that everyone keeps claiming only C can do.

                The way they ended up not gaining traction had more to do with management decisions, company acquisitions and hardware costs, than the technology stacks themselves.

                The "C" for systems programming isn't only pure C as defined by K&R C, or ANSI/ISO C, rather C + Assembly, or compiler specific extensions for inline Assembly, pragmas, intrinsics, and keywords not part of any standard.

                D and C# are two modern examples of GC based languages with similar capabilities in intrinsics.

                • kbolino 10 days ago
                  I think with modern hardware you could significantly reduce the amount of C code but I'm not sure you could eliminate it entirely. Rust is trying to do just that but IMO isn't really comparable to the languages you mentioned. Maybe if microkernels had taken off and/or hardware drivers were written with more abstract interfaces, things might have been different even on the much cheaper hardware that won out.
                  • yencabulator 10 days ago
                    > significantly reduce the amount of C code but I'm not sure you could eliminate it entirely

                    Theseus is a research OS written in Rust that seems to include C purely to show how to support programs written in C (libc and a demo app).

                    https://github.com/theseus-os/Theseus

                    It contains some 1000+ lines of ASM for the early boot from BIOS, that look like they originate from https://os.phil-opp.com/minimal-rust-kernel/ -- I think UEFI boot would have been doable in pure Rust (https://github.com/rust-osdev/uefi-rs).

                    • kbolino 9 days ago
                      Rust certainly provides much stronger guarantees than C, though some of its features aren't available (or at least don't come batteries included) in a freestanding (no_std) environment. The borrow checker is also a lot less forgiving than a garbage collector.
                  • pjmlp 10 days ago
                    There are plenty of OSes written in C++; already that is an improvement over bare-bones C.

                    For example, Meadow uses a C++ microkernel, with everything else written in C#.

                    https://www.wildernesslabs.co/

                    • kbolino 10 days ago
                      I'd be surprised if they can use all of C++'s advantages in a freestanding environment. Even Rust struggles there though at least it's a first-class option.
  • pjmlp 11 days ago
    This is a common trend among D, Chapel, Vale, Hylo, ParaSail, Haskell, OCaml, Swift, Ada, and now June.

    While Rust made Cyclone's type system more manageable for mainstream computing, everyone else is trying to combine the benefits of linear/affine type systems with the productivity of automated resource management.

    Naturally, it would be interesting to see if some of those attempts can equally feed back into Rust's ongoing designs.

    • brabel 11 days ago
      A few other languages I think deserve a mention:

      * Carp: https://github.com/carp-lang/Carp

      * Nim: https://nim-lang.org/

      * Zig: https://ziglang.org/

      * Austral: https://borretti.me/article/introducing-austral

    • arka2147483647 11 days ago
      The problem for systems/low-level programming is that you want high performance and manual control of resource management. As such, automated resource management can often look like a problem, instead of a feature. I think there is a deeper disconnect here between the language designers and the programmers in the trenches.
      • winternewt 11 days ago
        Yes, if I read the article right, every object is being allocated on the heap. That is a no-go for systems programming as far as I'm concerned.
        • christophilus 11 days ago
          I read it the same way you did. But, I’d be really surprised if there was no stack allocation in the language, given the author’s experience.
        • pjc50 11 days ago
          In an arena allocator, the stack can be just a special case arena that gets discarded automatically for you on function exit.
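
          A minimal bump-arena sketch in C (illustrative, not production code): allocation is a pointer increment, and "freeing" the whole region is resetting one counter, much as a stack frame is discarded on return:

              #include <stddef.h>

              typedef struct { char *base; size_t used, cap; } Arena;

              void *arena_alloc(Arena *a, size_t n) {
                  if (a->cap - a->used < n) return NULL;  /* arena exhausted */
                  void *p = a->base + a->used;
                  a->used += n;                           /* bump the counter */
                  return p;
              }

              /* Discard everything allocated from the arena in one step. */
              void arena_reset(Arena *a) { a->used = 0; }
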
        • DanielHB 11 days ago
          My gut feeling agrees with you, but I would really like more detailed reasons why this is the case. Is memory fragmentation that big of an issue? Are heap allocations more expensive somehow (even if memory is not fragmented yet)? Is there something else? Does re-arranging memory in the heap make performance unpredictable, like in GC languages?
          • alchemio 11 days ago
            Memory allocation is slow and nondeterministic in performance. Some allocations also require a global lock at the system level. It's also a point of failure if the allocation doesn't succeed, so there's an extra check somewhere. Furthermore, if every object is behind a pointer, you get indirection overhead (small but real). Deallocation incurs an overhead as well. Without a compacting GC you run into memory fragmentation, which further aggravates the issue. All of this overhead can be felt in tight loops.
          • bjourne 11 days ago
            Due to the quick fit algorithm, fragmentation is no longer an issue for memory allocators. Heap allocations are still a bit slower than stack allocations since you need some way to release memory. Stack allocations are released at virtually zero cost (one assembly instruction). Hence sophisticated compilers perform escape analysis to convert heap allocations into cheaper stack allocations. But escape analysis, like all program analysis, is conservative and won't convert as many allocations as a human programmer could.

            However, in the grand scheme of things heap vs stack allocation is minuscule. Many other factors are much more important for performance.

          • winternewt 10 days ago
            For one thing, allocating every object on the heap leads to a lot of cache misses because the data you're working with is not contiguous in memory. It may also make it harder for the CPU to do speculative fetches from memory because it needs to resolve the value of a pointer before it knows where to fetch data. With the stack, the address is much more obvious since it's all constant offsets relative to the frame pointer.

            Also, heap allocation is unpredictable. It is more likely to cause unexpected page faults or thread congestion (multiple threads often share the same heap so they need to synchronize access to memory book-keeping structures). Especially when it comes to kernel drivers, a page fault can lead to a deadlock, infinite recursion, or timeouts.

            I'm not saying heap is always bad, not even that it's bad most of the time. But if a language doesn't at least give you the _option_ of having objects live on the stack, I wouldn't consider it a serious systems programming language.
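
            As an illustrative sketch in C (hypothetical types), summing a field over individually heap-allocated objects chases a pointer per element, while a contiguous array streams through the cache:

                #include <stddef.h>

                typedef struct { double x, y; } Point;

                /* One heap object per element: a dependent load (and likely a
                   cache miss) on every iteration. */
                double sum_boxed(Point **pts, size_t n) {
                    double s = 0;
                    for (size_t i = 0; i < n; i++) s += pts[i]->x;
                    return s;
                }

                /* Contiguous storage: predictable addresses, prefetch-friendly. */
                double sum_flat(const Point *pts, size_t n) {
                    double s = 0;
                    for (size_t i = 0; i < n; i++) s += pts[i].x;
                    return s;
                }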

          • marcosdumay 11 days ago
            There is no inherent difference. It's all memory.

            That said, as a sibling already pointed out, it's standard to control stack allocation with a single counter. It's kind of standard to control heap allocation with an index and a lot of book-keeping.

            But you are allowed to optimize the heap until there's no difference.

          • anonymousdang 11 days ago
            [flagged]
      • bluetomcat 11 days ago
        Stuff like memory pools, arena and slab allocators have been in widespread use in C/C++ systems programming for decades. It looks like designers of hip languages are reinventing that stuff in compilers that try to protect you from yourself.
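
        For reference, the classic fixed-size pool fits in a few lines of C (a sketch, not production code): alloc and free are O(1) pointer swaps on an intrusive free list, with no general-purpose heap involved:

            #include <stddef.h>

            typedef union Slot { union Slot *next; char obj[64]; } Slot;
            typedef struct { Slot *free_list; } Pool;

            void pool_init(Pool *p, Slot *slots, size_t n) {
                p->free_list = NULL;
                for (size_t i = 0; i < n; i++) {   /* thread every slot onto the list */
                    slots[i].next = p->free_list;
                    p->free_list = &slots[i];
                }
            }

            void *pool_alloc(Pool *p) {
                Slot *s = p->free_list;
                if (s) p->free_list = s->next;     /* pop the head, O(1) */
                return s;
            }

            void pool_free(Pool *p, void *obj) {
                Slot *s = obj;
                s->next = p->free_list;            /* push back, O(1) */
                p->free_list = s;
            }
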
        • Nevermark 10 days ago
          To add "verifiable resource correctness/safety" to a language, all the non-trivial coordination algorithms for memory, threads, exceptions, ..., etc., need to be reinvented.
    • eru 11 days ago
      Agreed. (Though I wouldn't exactly call Haskell a 'systems programming language'. At least not in the sense the article uses it.)

      Yes, linear / affine types and uniqueness types can give you a lot of control over resources, while also allowing you to preserve functional semantics.

      I would like to see a version of Haskell that doesn't just track IO, but also tracks totality. I.e., unless your function is annotated with a special tag, it has to be guaranteed to return a value in finite time, i.e. it has to be total. If you tag it with e.g. 'Partial', you can rely on laziness.

      That's very similar to how functions in Haskell can't do any IO, unless they are tagged with 'IO'.

      (I know that Haskell doesn't see 'IO' as a tag or annotation. But it behaves like one in the sense I am using here.)

      • skybrian 11 days ago
        “Finite time” isn’t much of a guarantee. A million years is finite. Totality checking is useful in languages for doing proofs, where you need a guarantee that a function always returns a value even though you never run it.

        In languages for doing practical calculations, a progress dialog and a cancel button are more useful than a totality guarantee. It should be easier to make complex, long-running calculations safely cancellable.

        (Still true with laziness, though it changes a bit. At some point you will ask for a value to be calculated.)

        • Nevermark 10 days ago
          Totality guarantees won't ensure a timely response. But they ensure that a function that should always return a value will always return a value. I.e. they eliminate one source of bugs.

          So a default constraint on functions to be total would be sensible and helpful.

          To be able to do the opposite, constrain a function or continuation to never terminate (outside of program termination), would be useful too. Albeit in much fewer contexts.

          Any constraint that forces implementations to match respective design intentions is good.

          • skybrian 10 days ago
            To put it a different way, a totality guarantee ensures that a function will always return a value in theory, but not in practice. To get the theoretical guarantee, you need to eliminate all possible runtime errors, including things like divide-by-zero errors.

            It’s kind of a pain when you’re not making a theoretical argument. You might still want to eliminate all divide-by-zero errors. Satisfying the compiler of this using a machine-checked proof might be more trouble than it’s worth, though. It might require you to use a language whose types support proofs.

            • eru 10 days ago
              You don't need to eliminate all possible runtime errors. You just need to make sure they are handled and accounted for.

              That can take the form of something like `try_divide` that returns a `Maybe` that you need to pattern match on. Or it can take the form of trust-me-bro via the equivalent of unsafePerformIO, where you know that something is safe and you want to let the compiler know, but you can't prove it to the compiler.

              You can also deal with errors by returning an `Either`. That doesn't mess with any totality guarantees.
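
              The same idea rendered in C rather than Haskell (a sketch; `try_divide` is a made-up name): failure is part of the signature, so the caller has to account for it instead of faulting at runtime:

                  #include <limits.h>
                  #include <stdbool.h>

                  /* Total division: every input yields a result or an explicit "no". */
                  bool try_divide(int num, int den, int *out) {
                      if (den == 0 || (num == INT_MIN && den == -1))
                          return false;   /* the two undefined cases, made visible */
                      *out = num / den;
                      return true;
                  }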

              • skybrian 10 days ago
                You're talking about error handling in practical programs. When I started out writing "theoretical guarantee," I meant in a proof, written in a proof language.

                Reporting an error to a function's caller isn't really a thing in a proof. A function proves that if any input exists, its result exists. Returning an error means "an error might exist." We don't want to return that; we want to eliminate the error from our proof, so that it proves that the result exists for any input. Returning an 'Either' type might be useful in a proof, but a 'Maybe' type seems dubious. We already assume that 'None' exists, so proving that 'None' might exist is useless.

                We don't have the same concerns in proofs and practical programs.

                • eru 10 days ago
                  No, no, this is entirely pragmatic _and_ theoretically sound.

                  In a practical program you want to know that in finite time you either get one of the declared errors or a positive result. Using an `Either` type allows even a relatively weak type system to be of use here.

                  There's no 'error' in your proof: your proof can't do anything about, e.g., running out of disk space.

                  Now, using an `Either` type for things like division-by-zero is a bit of a cop-out, and a compromise foisted on us by a weak type system. And you are right that in something like Agda you would hopefully never have to stoop so low. (Though even there, you can't prove everything. E.g. if your program enumerates differences between adjacent prime numbers, you can't prove that it will (or won't) run out of 2s.)

                  • skybrian 10 days ago
                    I feel like we're talking past each other somehow but I'll give it one more try.

                    In a previous post you quoted the Dhall documentation:

                    > The main benefit of evaluation being finite is not to eliminate long-running programs but to make them significantly less probable. In practice, you will discover that you will rarely author a configuration file that takes a long time to evaluate by accident.

                    This might be practical advice, but it's not theoretically sound. It's drawing a conclusion that wasn't proven. Isn't it kind of weird to write proofs without caring about the result of the proof?

        • eru 10 days ago
          > “Finite time” isn’t much of a guarantee. A million years is finite.

          You are technically right, but in practice, finite time does help a lot. To quote from the Dhall documentation at https://docs.dhall-lang.org/discussions/Safety-guarantees.ht...

          > Note that a “finite amount of time” can still be very long. For example, there are some short pathological programs that take longer than the heat death of the universe to evaluate. The main benefit of evaluation being finite is not to eliminate long-running programs but to make them significantly less probable. In practice, you will discover that you will rarely author a configuration file that takes a long time to evaluate by accident.

          > It should be easier to make complex, long-running calculations safely cancellable.

          Just send SIGKILL (kill -9) to your program, if you want to cancel it? That seems like an entirely separate topic? (Especially if your calculation is pure, it's always safe to cancel it.)

          • skybrian 10 days ago
            Yes, killing the process is how cancellation of a batch job is normally done, for example in shell scripts. But it’s a heavyweight approach. Backend servers don’t use a process per request anymore and they do need ways to cancel a request.

            Go’s Context API handles cancellation and timeouts at the request level, but it has to be coded manually; a badly-behaved goroutine could still leak and consume resources long after it’s cancelled.

            So it might be nice to have automatic support at the language level, to make guaranteed cancellation happen by default. This is the approach taken with structured concurrency. To preserve the invariant that a child task doesn’t outlive the parent, there needs to be a way to kill any unneeded child tasks when a function returns early.

            Totality checks seem like a poor substitute when what you actually want is to prevent resource leaks.

            I haven’t used Dhall, but it seems like a config language has different concerns?

      • Y_Y 11 days ago
        Although I complain about them in a sibling comment, Agda and Idris do meet the criterion of "Haskell-like with totality checking".
        • eru 11 days ago
          Yes. But I would want something a bit lighter-weight than them for this job. I'd want just barely more machinery and conceptual overhead than current Haskell has.
      • marcosdumay 11 days ago
        > Though I wouldn't exactly call Haskell a 'systems programming language'. At least not in the sense the article uses it.

        I don't disagree, but...

        Except for executable sizes and maybe unpredictable performance (performance unpredictability in Haskell has the same shape as UB in C, in that it creeps in, but it's way easier to keep away), there's actually no feature missing.

        You can argue for C-like speed (instead of Java-like), but the stuff in the article doesn't have it either.

        • eru 10 days ago
          'Systems programming language' is a bit of an overloaded term at best. I was purely going by the (implied) usage in the article.

          People have written kernels and unikernels in Haskell. So in that sense it's definitely a systems programming language.

    • Y_Y 11 days ago
      Haskell is my favourite language ever, and the (vanilla) type system is a joy to use. Inevitably, though, I find myself wishing for some dependent typing. Agda and Idris aren't quite there yet imho, but there's definitely a need for something that's at least as easy to work with as Haskell but a little more powerful (and ideally not some arcane GHC megahack).
      • valenterry 11 days ago
        Same here, using Scala in a Haskell-like way; while Scala has more dependent typing than Haskell (ignoring the liquid-typing extension), it's still not sufficient and practical. I think we are quite far from having fully featured dependent type systems in a mainstream language. Maybe TypeScript comes closest so far.
    • valenterry 11 days ago
      I think the resource management issue is mostly solved in practice when using pure functional programming.

      However that comes with performance drawbacks (or at least unpredictable/unreliable performance) which creates the need for languages like Rust. It's great to see the progress in those languages as well.

      • eru 11 days ago
        > I think the resource management issue is mostly solved in practice when using pure functional programming.

        Well, in the sense that garbage collectors solve memory management. But pure functional programming in the Haskell sense doesn't really manage file handles for you, or database connections.

        You could have more predictable performance in Haskell, if you didn't have to deal with mandatory laziness for some functions.

        (Basically, in this hypothetical variant of Haskell, by default functions would be safe to be evaluated in any order, strict or lazy or whatever. If you want functions that need to be evaluated lazily, then you'd need to declare that; just like today you already need to declare that your function might have side effects.)

        The compiler would then be free to re-order evaluation a lot more, and pick predictable, fast performance.

        • valenterry 11 days ago
          > Well, in the sense that garbage collectors solve memory management. But pure functional programming in the Haskell sense doesn't really manage file handles for you, or database connections.

          No, I was actually talking about the latter and not about memory management at all.

          In Scala we even achieve this on the library level. (e.g. https://zio.dev/reference/resource/ which also compares the existing problematic try/catch with its own approach)

          And despite Scala being strict, the GC still makes it unsuitable for many cases where you just need something like Haskell.

        • marcosdumay 11 days ago
          > But pure functional programming in the Haskell sense doesn't really manage file handles for you, or database connections.

          That's what monads are for.

          Haskell doesn't require that you use automatic management. But it does absolutely have a solution.

          • eru 10 days ago
            How are monads useful for managing file handles or database connections?

            You might want to look into 'indexed monads', they can perhaps help with that. But they aren't a type of monad (it's the other way round: vanilla monads can be seen as a trivial kind of 'indexed monad', I guess).

            • valenterry 10 days ago
              They encapsulate direct access to those resources and therefore allow a properly controlled lifecycle, because usage is then local and cleanup in case of errors is automatic. The caller pretty much can't do anything wrong. No linear types or indexed monads needed, actually.
        • mrkeen 11 days ago
          > If you want functions that need to be evaluated lazily, then you'd need to declare that

          Not a sensible default.

          You cannot ask a strict function to behave lazily. But you can ask a lazy function to behave strictly.

          • throwaway17_17 11 days ago
            I am unsure where the proof, or even a supporting argument, would come from for such a statement. The literature is lengthy and deep, but I would sum up the current understanding and the most popular argument against your stated position by citing Robert Harper (who I admit is notoriously anti-laziness-by-default) in his commentary on the 2nd edition of Practical Foundations for Programming Languages:

            "The problem is the very idea of a lazy language, which imposes a ruinous semantics on types. An eager semantics supports the expected semantics of types (for example, the natural numbers are the natural numbers) and, moreover, admits definition of the lazy forms of these types using suspension types (Chapter 36). A value of a suspension type is either a value of the underlying type, or a computation of such a value that may diverge. There being no representation of the eager types in a lazy language, it follows that, eager languages are strictly more expressive than lazy languages." [1]

            This paragraph is then followed by a link to Harper's "PCF by Value" supplement/errata to the above referenced PFPL, where such a statement is reinforced. [2]

            [1] - https://www.cs.cmu.edu/~rwh/pfpl/supplements/commentary.pdf

            [2] - https://www.cs.cmu.edu/~rwh/pfpl/supplements/pcfv.pdf

            • mrkeen 11 days ago
              > I am unsure where the proof, or even supporting argument

              You make a lazy function strict by asking for its output now rather than later.

              > An eager semantics supports the expected semantics of types (for example, the natural numbers are the natural numbers).

              What a (counter)example! In lazy semantics the natural numbers are the natural numbers. In eager semantics the natural numbers are stack overflow.

              > moreover, admits definition of the lazy forms of these types using suspension types

              This is the required modification that your library maintainer would need to make if he had produced a strict library that you wanted to use lazily. Hence "You cannot ask a strict function to behave lazily."

              > There being no representation of the eager types in a lazy language

              Because the lazy representation is the same as the eager representation in the source code. It's up to the caller to demand values now, or later, hence "But you can ask a lazy function to behave strictly."

              • eru 10 days ago
                > You make a lazy function strict by asking for its output now rather than later.

                That doesn't mean you can evaluate the function strictly. Have a look at this example:

                    f = head . map (2+)
                
                    x = f [5, undefined]
                
                You can ask for x right away (it's 7), but if you evaluate f strictly, it still diverges.

                In any case, my suggestion is not to deal with arbitrary functions, but to restrict the set of untagged functions to those where any evaluation order works. 'Works' in the sense that any evaluation order takes a finite amount of time.

          • eru 10 days ago
            > You cannot ask a strict function to behave lazily. But you can ask a lazy function to behave strictly.

            Huh? I suggest that when you don't annotate anything, you can only use constructs that can be evaluated in any order (including both lazy and strict or any other order). So the compiler has maximal freedom.

            So your supposed impossibility doesn't come into play: we are not dealing with arbitrary functions.

            If your function needs a specific evaluation order, you'd need to tag that.

            • mrkeen 6 days ago
              You can walk an expression and replace its thunks with the values that those thunks deferred the evaluation of.

              To do the reverse would require uncomputation. You can't turn all the natural numbers back into the expression [1..].

  • nercury 11 days ago
        struct Node<'a, 'b, 'c> {
            data1: &'a Data,
            data2: &'b Data,
            data3: &'c Data,
        }

    Wow. It's like teaching C++ and starting from SFINAE. Or C# and starting from type parameter constraints.

    Please think of real-world examples when teaching stuff. I am very eager to see the program a beginner would need to write that requires: 1) references in a struct; 2) 3 separate lifetime parameters for the same struct.

  • noelwelsh 11 days ago
    Effect systems strike again! They've come up a few times recently on HN, and region-based memory management is another problem they can solve. This paper describes a type system that region-based memory management falls out of as a special case: https://dl.acm.org/doi/10.1145/3618003
    • mst 11 days ago
      I was quite fascinated by Koka's use of refcounting during compilation to be able to do June's 'recycle' trick automatically (i.e. if you consume or discard the last reference to something during an operation that returns a new 'something', it re-uses the memory of the now-defunct one).
  • pron 11 days ago
    > Rust's focus on embedded and system's development is a core strength. June, on the other hand, has a lean towards application development with a system's approach. This lets both co-exist and offer safe systems programming to a larger audience.

    I think this is a mistake, both on June's part and on Rust's. All low-level languages (by which I mean languages that offer control over all/most memory allocation) inherently suffer from low abstraction, i.e. there are fewer possible implementations of a particular interface or, conversely, more of the changes to an implementation require changes to the interface itself or to its clients. This is why even though writing a program in many low-level languages can be not much more expensive than writing the program in a high-level language (one where memory management is entirely or largely automatic), costs accrue in maintenance.

    This feature of low-level programming isn't inherently good or bad -- it just is, and it's a tradeoff that is implicitly taken when choosing such a language. It seems that both June and Rust try to hide it, each in their own way, Rust by adopting C++'s "zero-cost abstraction approach", which is low abstraction masquerading as high abstraction when it appears as code on the screen, and June by yielding some amount of control. But because the tradeoff of low-level programming is real and inescapable, ultimately (after some years of seeing the maintenance costs) users learn to pick the right tradeoff for their domain.

    As such, languages should focus on the domains that are most appropriate for the tradeoffs they force; trying to aim for others usually backfires (as we've seen happen with C++). Given that ultimately virtually all users of a low-level language will be those using it in a domain where the low-level tradeoff is appropriate -- i.e. programs in resource-constrained environments or programs requiring full and flexible control over every resource, like OS kernels -- trying to hide the tradeoff in the (IMO) unattainable hope of growing the market beyond the appropriate domain will result in disappointment due to a bad product-market fit.

    Sure, it's possible that C++'s vision of broadening the scope of low-level programming was right and it's only the execution that was wrong, but I wouldn't bet on it on both theoretical (low abstraction and its impact on maintenance) and empirical (for decades, no low-level languages have shown signs of taking a significant market share from high-level languages in the applications space) grounds. Trying to erase tradeoffs that appear fundamental -- to have your cake and eat it -- has consistently proven elusive.

    • pjmlp 11 days ago
      I beg to differ, as shown by the lineage of systems languages that started with the Pascal dialects, Mesa, Cedar, and the Modula variants, which in a way is what Zig is going back to, with a revamped syntax for the C crowd.

      High-level systems languages, that provide good programming comfort while having the tools to go under the hood, if so desired.

      Being forced to deal with naked pointers and raw memory in every single line of code, as in C, is an anomaly only made possible by the industry's adoption of UNIX at scale.

      • pron 11 days ago
        But you're using a different definition of high and low level than I do, and are thus missing my point. I define a high-level language as one that trades off control over memory resources in exchange for higher abstraction (by relying on automatic memory management, AKA garbage collection -- be it based on a refcounting or a tracing algorithm -- as the primary means of memory management), while a low-level language makes the opposite tradeoff. By this definition, C++ is just as low-level as C, regardless of the use of raw vs. managed pointer.

        The question I'm interested in here is not which features a low-level language should add to make it more attractive for low-level programming, but should it add features that are primarily intended to make it more attractive for application programming. The declining share of low-level languages (by my definition) for application programming over the last 30 years leads me to answer this question in the negative. This is a big difference between the approach taken by low-level languages like C++ and Rust, which try to appeal to application programming, so far unsuccessfully, and low-level languages like Zig, which don't. So far, neither Rust nor Zig have been able to gain a significant market share of low-level programming, let alone application programming, which makes judging the success of their approach hard, but C++ has clearly failed to gain significant ground in the application space despite achieving great success in the low-level space.

        The reason I'm focusing on this question is that this article specifically calls out an attempt by the June language to appeal to application programmers, and I claim that C++/Rust's "zero cost abstraction" approach does the same -- it attempts to give the illusion of high abstraction (something that I believe isn't useful for low-level programmers, who make the low-abstraction tradeoff with their eyes open) without actually providing it (clients are still susceptible to internal changes in implementation).

        • pjmlp 11 days ago
          Given C++'s uptake in GUI frameworks during the 1990's, plus game engines, CUDA, and the whole RIIR wave in Web tooling, I would state that, even by your definition of high level, both have been pretty successful.

          Likewise, with Rust being adopted by the Linux kernel, by Microsoft (now the official language for Azure infrastructure, with C and C++ requiring clearance for new projects), and by Google on Android and Chrome/V8, it is also quite successful for low-level coding, alongside the whole GCC/LLVM infrastructure, written in C++.

          For example, if I deploy to Vercel or Netlify and don't want to pay for additional infrastructure for webservers, while caring for performance, my options are nodejs addons/serverless in C++/Rust, or eventually Go (which I'd rather not).

          • pron 11 days ago
            > I would state that even by your definition of high level, they are being both pretty much succesful.

            C++ certainly had a high market share in the application space in the early nineties, which it has since lost. The market share of low-level languages in the application space overall has dropped by a lot since then and is showing not even a hint of a reversal of the trend.

            > Likewise Rust being adopted by Linux kernel, Microsoft (now the official language for Azure infrastructure, with C and C++ requiring clearance for new projects), Google on Android and Chrome/V8, it is also quite successful for low level coding, alongside the whole GCC/LLVM infrastructure, written in C++.

            I am not doubting that a language that follows C++'s design philosophy may ultimately take a large market share from C++ or from C (although if Rust is the language to do that, its rate of adoption is alarmingly low, but that's beside the point). My point is that, despite trying, all such languages combined have not shown any success in taking a significant market share from high-level languages in the application space. Surely if the problem wasn't the philosophy but the execution, and if Rust is the language that successfully cracked the code for low-level/application convergence, we should have seen some significant change in trend over the past decade, but it's just not there. At some point, hypotheses regarding utility and value need to become very visible in the market -- which cannot be completely irrational given economic pressure and competition -- and you can try to find a trend in noisy volatility only for so long. If C++'s 40 years and Rust's 10 have failed to prove the hypothesis that low-level languages can grab a large share of the application space, maybe we can wait another 20 years and see. But I'm saying that right now there doesn't seem to be any market movement supporting that hypothesis, even though I think we're long overdue to see some if the hypothesis held, and so speculation on the future success of the approach is not justified by what we see in the market.

            My hypothesis for the preponderance of such speculation in forums such as HN is that commenters in such forums (as well as people writing OSes at Google) are not sufficiently aware of the software landscape overall (which is mostly growing in the "low skill" area, let's call it), of which they're making up a dropping proportion. There are 50 new Python developers for every new C++/Rust developer, and if you think you can convert a large portion of them then you haven't met them.

            That's why I think that low-level languages that try to accommodate application programming are misguided if they think this will help convert a significant portion of application programmers in a market that's overall becoming less skilled, not more (I think this approach harms their adoption rather than helps it). I believe that the low-level/application convergence is a lost cause, but even if it isn't, then Rust certainly isn't the language to have cracked this problem. I would say that the only hope for "high skill" languages to increase their share is if the "low skill" segment is reduced by AI, which may well happen, but that's a very speculative bet.

    • gpderetta 11 days ago
      C++ managed to get market share from C by being both as low level and higher level. It is true that it hasn't happened again since, as other higher-level languages took market share from C++ for applications, but didn't replace it for lower-level stuff. Still, it is conceivable that another language might do to C++ what it did to C.
      • pron 11 days ago
        I think that the way you use "high level" here is vague and so makes it hard to see what's going on. I define a high-level language as one that trades off control over memory resources in exchange for higher abstraction (by relying on automatic memory management, AKA garbage collection -- be it based on a refcounting or a tracing algorithm -- as the primary means of memory management), while a low-level language makes the opposite tradeoff. By this definition, C++ is just as low-level as C. We can argue over which C++ features made it more attractive than C in the low-level space, but it is clear that the overall market share of C + C++ has only declined over the past thirty years, and C++ has failed to make significant inroads in application programming over the long term (it had a short-lived rise followed by a drop). The question I focus on is whether a low-level language, by my definition, should have features specifically accommodating application programming. The obvious failure of low-level languages -- which include C++ according to my definition -- to take a significant market share of application programming leads me to answer that question in the negative.

        Various features accommodating low-level programming -- those that may have helped C++ take market share away from C but, crucially, have not helped it gain market share in application programming (over the long term) are, therefore, irrelevant to this core question. It's one thing to make a low-level language more attractive to low-level programming, and a whole other thing to make it more attractive to application programming. C++ has succeeded in the former but failed in the latter.

        • gpderetta 11 days ago
          [Note I used "higher" as opposed to "high"]

          I would say that a higher-level language provides more abstractions than a lower-level one. Memory and resource management is an aspect of it, but not a requirement (C is a higher-level language than asm, yet it doesn't provide any memory management other than implicit stack allocation). In any case yes, the definition is nebulous.

          I would add that at the turn of the millennium C++ had a clear domination of the application space.

          edit: I don't claim any specific knowledge, but, more than GC, I would say the lack of built-in networking and enterprise[1] features to integrate with web application stacks prevented C++ from taking hold in the early Internet era (outside of large tech companies).

          In the last decade or so, I think the main reason it is being displaced in the desktop space is simply that it is not JS.

          [1] I'm not using this disparagingly

          • pron 11 days ago
            > I would add that at the turn of the millennium C++ had a clear domination of the application space.

            Correct, which it has since lost. My theory is that it's been due to two reasons: improvement in the performance of high-level languages, due to both software and hardware advances, and it taking several years for people to observe the cost of a low-level language, which mostly manifests not in the initial writing but in maintenance (because low abstraction means that changes to components require code changes that are less local). It's not much more costly to write a program in C++ than it is in Java or Python, but it is much more costly to maintain over time.

            > In the last decade or so, I think the main reason it is being displaced in the desktop space simply by virtue of not being JS.

            It's also been displaced in the server space. But suppose the lack of built-in "enterprise" features is the cause. It's been a decade since Rust, the "modern C++", appeared, and it, too, has failed to make significant inroads into the application space (i.e. to take a significant market share from what I call high-level languages).

            I'm not saying people should stop looking for some holy grail of low-level/application convergence, only that I would (currently) bet against it.

            • gpderetta 11 days ago
              Most likely your experience is more relevant than mine, but regarding performance: while it is true that C# and Java are fast enough that many, if not most, applications are indistinguishable from C++, the rise and rise of JS and Python (languages with questionable and downright terrible performance, respectively) in the last decade makes me think that implementation performance has not been a significant factor. Hardware advances might be relevant though.

              Re maintenance, I'm not sure that a C++ program is more costly to maintain. I suspect that C++ is over-represented in multimillion line codebases.

              In any case, I think you might be right: it certainly is possible that an optimal "full stack" language (if you allow me the term) just doesn't exist. I would settle for a control-plane language that can be easily embedded in a native C or C++ (or Rust, or whatever) application.

              • pron 11 days ago
                > in the last decade make me think that implementation performance has not been a significant factor. Hardware advances might be relevant though.

                I think both have been significant, but I agree that top performance is not so much an issue for many applications (although JS is also very fast).

                > I suspect that C++ is over-represented in multimillion line codebases.

                You're right, but the question is compared to what? Java, C++ and C are pretty much the only languages used in multi-MLOC codebases maintained over many years (there are others, but not nearly to the same degree as those three), so they're all over-represented, but the amount of C++ and C code is significantly lower than that of Java code. I think it's due to maintenance costs (as a maintainer of C++ codebases over my career, that's been my experience, but I try not to extrapolate, so I consider this my unproven hypothesis).

  • taosx 11 days ago
    I don't know why, but Rust's syntax just nails it for me. The more I use it, the more I appreciate it. I see many projects that diverge from Rust's syntax while being inspired by it. Why?
  • keyle 11 days ago
    Related: I really like the look of Hare [1]; sadly, they don't seem to be interested in a cross-platform compiler. As I understand it, some of the design decisions have basically led it to be mostly a Linux/BSD language.

    I personally love C. I think designing a language top-down is a poor approach overall; I prefer the bottom-up approach of the C-inspired systems languages that aim to fix C, rather than "this is how the world should beeee!"

    [1] https://harelang.org/

  • samuell 11 days ago
    The discussion of grouped lifetimes reminds me of the principles of Flow-based programming (without the visual part), where one main idea is that only one process owns a data packet (IP) at a time.

    My own experience coding in this style [1] has been extremely reassuring.

    You can generally consider only the context of one process at a time, quite safely, since there aren't even any function calls between processes, only data sharing.

    This meant, for example, that I could port a PHP application that I had been coding on for years, fighting bugs all over, into a flow-based Go application in two weeks, with development time perfectly linear in the number of processes. I just coded each process in the pipeline one by one, tested it, and continued with the next. There were never any surprises as the application grew, as the interactions between the processes are just simple data sharing, which can't really cause that much trouble.

    This is of course a radically different way of thinking and designing programs, but it really holds some enormous benefits.

    [1] https://github.com/rdfio/rdf2smw/blob/master/main.go#L58-L15...

  • pjc50 11 days ago
    This seems to be an "arena" or "pool" allocation approach. Conceptually quite a mature technique, but this adds the benefit of statically checking against pool lifetime?

    Probably works quite well for systems programming, where things are either "live forever", "reallocate within some pool" (thread handles, file descriptors, etc), or "transient" (for the lifetime of a system call or similar).

  • osigurdson 11 days ago
    So many comparisons to Go and C# on this thread. While at some level of abstraction, all languages are the same, comparing GCed languages to non-GC languages doesn't make sense in my opinion. Rust would have never been made if the creators were fine with Java.
    • pjmlp 11 days ago
      The people who don't suffer from GC phobia know how to write Go and C# code that makes use of C-like features, while taking advantage of their productivity in non-critical code paths.

      Rust's ideal cases are domains where allocation management is strictly controlled.

      • osigurdson 10 days ago
        >> Rust's ideal cases are domains where allocation management is strictly controlled

        What domains are those? Do you mean just embedded, or perhaps Firefox? Please elucidate.

        • pjmlp 10 days ago
          Stuff that falls under high-integrity computing certification, where dynamic memory allocation is usually forbidden; GPGPU, where memory primitives make it quite hard to have automatic memory management since they are host-controlled; specific kinds of drivers; or special VM flavours like eBPF.

          Everything else is a matter of how much one wants to involve themselves in the politics of the "right way" to do memory management, versus reaching a compromise for safer infrastructure.

  • zozbot234 11 days ago
    > Effectively, this would mean that a data structure, like a linked list, would have a pointer pointing to the head which has a lifetime, and then every node in the list you can reach from that head has the same lifetime.

    Right, isn't that what GhostCell and its variants (QCell) are all about? This would be great if it led to a more elegant and principled implementation of that pattern, that could also end up being fully supported in Rust itself.
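
    For reference, a minimal sketch of the pattern as the ghost-cell crate exposes it (hedging: this is from memory of its API): the token's invariant brand statically groups every cell created under it, and access is checked against the single token rather than per node.

        use ghost_cell::{GhostCell, GhostToken}; // crates.io `ghost-cell`

        fn main() {
            GhostToken::new(|mut token| {
                // Both cells carry the same invisible `'id` brand as `token`.
                let head = GhostCell::new(1);
                let next = GhostCell::new(2);
                *head.borrow_mut(&mut token) += 10;
                // One token proves access rights to the whole group at once.
                println!("{} -> {}", head.borrow(&token), next.borrow(&token));
            });
        }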

  • anonymoushn 11 days ago
    This lifetimes thing is maybe not even a top 3 mistake Rust makes. I hope successor languages can have a metaprogramming system that is less dreadful than proc macros, the ability for users to write libraries that are generic over user-provided writers, readers, and allocators, and the ability to bubble up errors from functions that call fallible functions from 2 different libraries without writing your own huge struct definition every time.

    It may also be nice if constructs that don't cause memory accesses and only ever do the correct thing or crash on the target CPU (such as integer division, or pshufb without an address operand on any Intel chip ever) were not unsafe. Placing "well, LLVM says this arithmetic operation is UB and we won't bother to fix it" and "what if one day there's an x64 chip that does something other than crash if it encounters instructions from ISA extensions it does not have?" into the same bucket as "playing with raw pointers" is a bit weird.

    • estebank 11 days ago
      Calling them mistakes actively ignores that there are design constraints and decisions that shape what the language looks and acts like. And that's without getting into "there are only so many hours in a day and a limited number of people, so not everything gets done at the same speed, if at all".

      This is the first time I've heard your second point about intrinsics, for example, and it seems like something that could be done relatively straightforwardly, but introducing "inconsistency" for the benefit of skipping a single unsafe block sounds like the kind of RFC thread I wouldn't want to be a part of.

      • anonymoushn 11 days ago
        I don't really think the ownership-related concerns around thread-local storage that ruled out stackful coroutines are valid. Stacks and threads are orthogonal in the same way that futures and threads are orthogonal in the existing design for async. This design has caused a proliferation of dozens of fundamentally non-composable libraries addressing barely-different use cases and leaving many use cases unaddressed, likely wasting much more time than it would have taken to get this right. Various prior art has existed for decades.

        I accept that languages that are designed collaboratively by huge numbers of people through an RFC process are basically impossible to improve.

    • cryptonector 11 days ago
      > and the ability to bubble up errors from functions that call fallible functions from 2 different libraries without writing your own huge struct definition every time.

      Please, no exceptions. No exceptions.

      There was a language featured on HN recently that did a good job of making it trivial to build sum types for results/errors from disparate other sum types for results/errors. It can be made easy while retaining the idea of using a sum type for results.
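
      For contrast, the status quo being complained about looks something like this in today's Rust (a sketch with std error types for illustration; crates like thiserror shrink the boilerplate, but the shape stays the same):

          use std::{fs, io, num};

          // One variant and one From impl per upstream error type,
          // written by hand, so that `?` can convert.
          #[derive(Debug)]
          enum AppError {
              Io(io::Error),
              Parse(num::ParseIntError),
          }

          impl From<io::Error> for AppError {
              fn from(e: io::Error) -> Self { AppError::Io(e) }
          }

          impl From<num::ParseIntError> for AppError {
              fn from(e: num::ParseIntError) -> Self { AppError::Parse(e) }
          }

          fn read_port(path: &str) -> Result<u16, AppError> {
              let raw = fs::read_to_string(path)?; // io::Error -> AppError
              Ok(raw.trim().parse()?)              // ParseIntError -> AppError
          }

          fn main() {
              println!("{:?}", read_port("port.txt"));
          }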

      • nmsmith 2 days ago
        Do you remember what language that was? Or at least, how I'd find it? I'd be interested in checking it out.
    • mst 11 days ago
      I think the UB thing is at least arguable in some cases, certainly.

      But I think the ISA extension thing is absolutely a good idea, for the simple reason that if that ever changed you would be -screwed-, and forcing people to Think Really Hard about their usage of ISA extensions is probably a net win anyway.

      Possibly they should be in a second "bucket of scary stuff" with a different set of hazard stickers applied, but using unsafe for all of it makes sense from the POV of 'unsafe means I am voiding my warranty inside this block' rather than anything more specific.

    • conaclos 11 days ago
      proc_macros are so terrible and propagate attributes everywhere. It would be so much easier to write derive macros with reflection support. This would also avoid fragile code generation in some cases.
      • estebank 11 days ago
        Proc macros in Rust are objectively bad in a number of ways (limited to only dealing with tokens/no type system access, hard for newcomers to grasp and write, force a lot of attributes to be written to annotate specific items, etc.) but they exemplify "worse is better" perfectly. They are incredibly powerful and allow people to build amazing abstractions that are in some cases best in class. We want to have something better, but there isn't a pressing need to rush a replacement.
    • anonymousdang 11 days ago
      [flagged]
  • kkukshtel 11 days ago
    Seeing Mads Torgersen on the list of collaborators for this made me take this seemingly random blog post 100x more seriously.
  • netbioserror 11 days ago
    For use cases which aren't bare-metal embedded:

    1) Prefer the stack wherever possible.

    2) Reference counting (RC).

    3) Unique pointers by default unless explicitly noted otherwise.

    4) Immutable arguments and returns by default unless explicitly noted otherwise.

    There's a language that does this all already, and it's called Nim.

  • mightyham 11 days ago
    I'm not very well versed in Rust, but isn't it possible to implement this sort of checked arena allocation in Rust using lifetimes? Something like slotmap (https://docs.rs/slotmap/latest/slotmap/), except all of the pointers/keys have their lifetime tied to the arena/pool/map.
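
    For what it's worth, slotmap does that check at runtime with generational keys (quick sketch below); the compile-time variant would additionally brand each key with an invariant lifetime tied to its map, which is roughly the GhostCell trick mentioned elsewhere in the thread.

        use slotmap::SlotMap; // crates.io `slotmap`

        fn main() {
            let mut pool = SlotMap::new();
            let key = pool.insert("alive");
            pool.remove(key);
            // The stale key is caught by the generation check -
            // at runtime, not at compile time.
            assert!(pool.get(key).is_none());
        }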
  • bjourne 11 days ago
    How would the language guarantee no use-after-free scenarios?

        let x = new Foo()
        if (y == 1) {
            recycle x
        } 
        println(x)
    • flohofwoe 11 days ago
      I guess that's what the 'copy count' is for. After 'recycle x', the copy-count for the reference 'x' would be 0, so it's invalid to use.

      That would be trivial to do as a runtime check; I don't know if the compiler is smart enough to catch such things at compile time.

      Also not sure if the idea of a 'copy count' is better than a 'generation counter' which also provides runtime protection against dangling access, e.g. see https://floooh.github.io/2018/06/17/handles-vs-pointers.html
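
      To make the generation-counter idea concrete, here is a minimal sketch of the pattern from the linked post (my own simplification, in Rust):

          #[derive(Copy, Clone, Debug, PartialEq)]
          struct Handle { index: u32, generation: u32 }

          struct Slot<T> { generation: u32, value: Option<T> }

          struct Pool<T> { slots: Vec<Slot<T>> }

          impl<T> Pool<T> {
              fn new() -> Self { Pool { slots: Vec::new() } }

              fn alloc(&mut self, value: T) -> Handle {
                  // Reuse a free slot if one exists, bumping its generation.
                  if let Some(i) = self.slots.iter().position(|s| s.value.is_none()) {
                      let slot = &mut self.slots[i];
                      slot.generation += 1;
                      slot.value = Some(value);
                      return Handle { index: i as u32, generation: slot.generation };
                  }
                  self.slots.push(Slot { generation: 0, value: Some(value) });
                  Handle { index: (self.slots.len() - 1) as u32, generation: 0 }
              }

              fn free(&mut self, h: Handle) {
                  if let Some(slot) = self.slots.get_mut(h.index as usize) {
                      if slot.generation == h.generation { slot.value = None; }
                  }
              }

              fn get(&self, h: Handle) -> Option<&T> {
                  let slot = self.slots.get(h.index as usize)?;
                  if slot.generation == h.generation { slot.value.as_ref() } else { None }
              }
          }

          fn main() {
              let mut pool = Pool::new();
              let h = pool.alloc("foo");
              pool.free(h);
              let h2 = pool.alloc("bar");    // slot recycled, generation bumped
              assert_eq!(pool.get(h), None); // stale handle caught at runtime
              assert_eq!(pool.get(h2), Some(&"bar"));
          }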

  • aetherspawn 11 days ago
    Better systems programming is model-based design and automatic code generation, period.

    It is the be-all and end-all that will make those scenes in Star Trek where they alter some core starship system programming in 2 minutes flat without mistakes actually plausible.

  • unaut 10 days ago
    Umm, is this some kind of reinvention of C and C++, but from the high end backwards?
  • cornholio 11 days ago
    Seems like the wrong problem to solve. "Systems programming" is hard, and should be hard, for reasons unrelated to the programming language used. Something like Rust, which forces you to constantly reevaluate your design before you even press the compile button, is ideal.

    What's really lacking are safe, easy, strongly typed general-purpose languages that can leverage AOT compilation for high performance, static analysis, etc. A language with the learning curve of Python, near-C performance and the strong safety guarantees of Rust. There is nothing that suggests to me such a language would not be possible. Swift and C# come closest, but they are warped by their respective corporate overlords.

    • gumby 11 days ago
      > What's really lacking are safe, easy, strongly typed general-purpose languages that can leverage AOT compilation for high performance, static analysis, etc.

      The funny thing is that MACLISP could do this back in the early 70s. While Lisp is a dynamically typed, GCed language, MACLISP let you annotate variables to promise that they would only hold certain primitive types, thus supporting compile-time optimizations for numeric code that didn't have to allocate memory, follow a pointer, unbox data, etc. This was critical to making packages like MACSYMA run on machines with performance in the KIPS range.

      Although languages like Python support similar declarations or annotations, they aren't used in the same way -- if you want performance, you write the module in C++ and give it a Python interface.

      • bsder 11 days ago
        > While Lisp is a dynamically typed, GCed language, MACLISP let you annotate variables to promise that they would only hold certain primitive types, thus supporting compile-time optimizations for numeric code that didn't have to allocate memory, follow a pointer, unbox data, etc.

        And woe betide you if you happened to violate the preconditions of one of those annotations. That kind of programming is no better than C with "Segmentation fault. Core dumped."

        The problem is that "system programming" or "embedded programming" isn't one thing.

        There are people who want to throw around a zillion polygons at 144FPS. There are people who want max performance on x86-64 shoveling around millions of network connections. There are people who need guaranteed real time response or someone dies. There are people who need to stay asleep all the time to conserve battery and only wake up once every 5 minutes to chirp some RF.

        These people do not have the same needs in a language yet they almost all wind up shoveled into C.

        • ModernMech 11 days ago
          > These people do not have the same needs in a language yet they almost all wind up shoveled into C.

          Why is that tho? There are better languages for each of those applications. What do you think, is it business requirements? Hardware? Regulation? Legacy code?

          • bluGill 11 days ago
            Are there better languages? What does it even mean to be better?

            C is a great choice because I can hire people who know C and get good results. A domain-specific language will not be much better than C for any of the above. It turns out that all the hard problems listed above are similar enough that someone good at one will be good at them all, so I can hire C experts from any field and get going fast, without having to teach them my domain-specific language that isn't really much better.

            • ModernMech 11 days ago
              So does that mean you feel programming language design peaked in 1972?
              • bluGill 11 days ago
                No. However, I think most "modern" languages make a big mistake because they don't address interoperability. Rust is great if you only write in Rust, but if you have millions of lines of C++ it is really hard to fix a simple one-line bug in Rust. Rust cannot talk to C++ - it can at least talk to C, but then you lose the advantages of Rust. Now imagine you have C++, Rust, Go, and Ada teams fighting about which is best - they have to drop down to 1972 C to work with each other. (Note that I carefully excluded D, since D does have some C++ interoperability.)

                I do not know the best way for any modern language to talk to any other modern language - but 1972 C is a big negative for them all.

                • pjmlp 11 days ago
                  Only on Linux and regular UNIX systems.

                  On other systems we have XPC, COM, AIDL, TIMI, MSIL.

                  • bluGill 10 days ago
                    Trade-offs - I'm not familiar with all of those, but the ones I do know of are trading something for that flexibility - generally performance and power consumption, two things that we should not be willing to trade away. You can get similar things on UNIX as well, though it isn't the default, so you need extra effort to use them.

                    That said I fully accept your correction. (even though it doesn't apply in my world)

        • gumby 11 days ago
          > And woe betide you if you happened to violate the preconditions of one of those annotations.

          Umm, this is MACLISP we're talking about -- you know the types at runtime and can pass the unspecialized argument to an ordinary function. A kind of generic function I suppose, though we didn't use that vocabulary back then.

          Just because this was 50 years ago doesn't mean that programmers were morons.

          • bsder 10 days ago
            The point of annotations is to avoid having to branch on type at runtime.

            Consequently, whatever you sent in was assumed to be what the annotation was. There was no "compilation phase" to ensure that the type at the call point actually matched the type in the annotation. If those differed, then your program would just go boom.

            Now, it could be that I'm remembering something other than MACLISP. It's been a lot of years.

            • gumby 10 days ago
              MACLISP was most definitely a compiled language, it’s just that like modern Lisps you could mix the two (compiled and interpreted code).

              I tried to be clear by using the modern terminology “generic function”.

              If you compile a function with those annotations you'd end up with two functions: one that could be called if you knew the types at compile time, and one for when you didn't.

              • bsder 10 days ago
                Apparently I am confusing LISPs. My fault.
      • pjmlp 11 days ago
        Somehow it is a kind of tragedy of this industry, the amount of zig-zagging it does between ideas and technologies until the mainstream finally adopts concepts that were already available, at its genesis, to an illuminated few.
    • K0nserv 11 days ago
      > Seems like the wrong problem to solve. "Systems programming" is hard, and should be hard, for reasons unrelated to the programming language used. Something like Rust, which forces you to constantly reevaluate your design before you even press the compile button, is ideal.

      I was recently musing about this on Mastodon[0], commenting on a tweet that said something to the effect of "C++ isn't complex, it just doesn't lie about the complexity of reality". I think Rust is similar, and really any systems language has to be. However, there's a space between lying about reality and fully reflecting reality where useful abstractions, such as Rust's lifetimes, live. If you hardline the "lying about reality" framing, any abstraction on top of machine code must be ruled out. So while I see where you are coming from, I think there is lots of space left to improve systems programming, as Rust did.

      > What's really lacking are safe, easy, strongly typed general-purpose languages that can leverage AOT compilation for high performance, static analysis, etc. A language with the learning curve of Python, near-C performance and the strong safety guarantees of Rust. There is nothing that suggests to me such a language would not be possible. Swift and C# come closest, but they are warped by their respective corporate overlords.

      I agree about Swift fitting this mold, it's unfortunate that it has failed to gain traction outside of Apple's ecosystem.

      0: https://infosec.exchange/@k0nserv/112399102914339131

      • UncleMeat 11 days ago
        C++ has a bazillion different ways of initializing a value, most of which contain syntactic footguns of some kind that don't do what you probably intended. That's not fundamental complexity of reality, that's messy evolution over decades coupled with the inability to break backwards compatibility.

        C++ also does "lie" about reality in a lot of ways, in the sense that the actual compiled program behaves quite a bit differently than a plain reading of the code implies to a nonexpert. The most obvious form of this is the rules for effect ordering. Even the recurring complaints about the compiler doing unexpected things based on the as-if rule and undefined behavior seem to count here.

        • pjmlp 11 days ago
          Actually, it has enough of them to fill a 200+ page book:

          "C++ Initialization Story: A Guide Through All Initialization Options and Related C++ Areas"

          https://www.amazon.com/Initialization-Story-Through-Options-...

        • bluGill 11 days ago
          The reality is people have 30-year-old code bases, so a messy evolution without breaking backwards compatibility is an important part of reality. C++ could have been a much better language without backwards compatibility - but nobody would have used it, so what's the point?
          • UncleMeat 11 days ago
            I don't think that C++'s rigorous adherence to backwards compatibility (including binary compatibility) is a bad choice. It has served the language well. It just has consequences.
        • ModernMech 11 days ago
          C++ is the poster child for incidental complexity.
      • pjc50 11 days ago
        > it just doesn't lie about the complexity of reality

        "Undefined behavior" is not reality; it's a strange shadow world where anything may happen. The compiler reserves the right to reject your reality and substitute its own - which it will not show you within the language, nor let you choose or hint, nor warn you.

        People like to use C and C++ as if they were a species of typesafe macro assembler that always produces predictable machine code, and that breaks down often enough that the world is full of systems-level CVEs.

        • bluGill 11 days ago
          People keep saying this, yet the undefined behavior sanitizer proves that in the real world it isn't an issue most of the time.

          Most real-world security issues have nothing to do with code. Social engineering breaks more security than memory issues do. We are also skewed by the fact that many of the fundamental programs are written in C, so there is a lot of value in finding holes in them.

      • gjm11 11 days ago
        It doesn't feel to me like much of C++'s complexity is just "the complexity of reality" being faithfully exposed.

        I'd wonder whether that's just me being dim, but if so then I think Bjarne Stroustrup and Herb Sutter are dim in the same way. (Stroustrup, quoted approvingly by Sutter: "Inside C++, there is a much smaller and cleaner language struggling to get out." and "Say 10% of the size of C++ in definition and similar in front-end compiler size. ... most of the simplification would come from generalization.") And it doesn't seem likely to me that they are ignorantly overpessimistic about the complexity of C++.

      • neonsunset 11 days ago
        .NET of today seems to stand out quite a bit in comparison to many other MSFT-backed projects. Especially if you look not at the non-essential and possibly Azure-related tooling (a tiny part of the ecosystem) but rather at the "core" itself (that is, the SDK, runtime and compilers), which is distributed under the permissive MIT license - you can just take it and do anything with it.

        To the surprise of many outside the .NET community, it has much better performance than Swift[0][1][2] and lets you match the performance of C and C++ in areas that matter.

        [0] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

        [1] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

        [2] https://benchmarksgame-team.pages.debian.net/benchmarksgame/... (better performance for short-running benchmarks sensitive to JIT)

        [bonus] C# AOT vs Go https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

        (no direct comparison, but TL;DR: ARC, plus Swift defaulting to virtual dispatch that can't always be elided even with whole-module optimization (WMO), results in Swift having much lower (even if more consistent) performance)

        • pjc50 11 days ago
          With AOT and "[UnmanagedCallersOnly]" you can even emit C# libraries that are callable from true native code using the C FFI.
          • neonsunset 11 days ago
            Yup, you can even statically link them into C/C++/Rust/etc. (or vice versa)!
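
            From the native side it then looks like plain C FFI. A hedged Rust sketch (the exported symbol `add` and the library name `managed` are hypothetical, standing in for a C# method marked [UnmanagedCallersOnly(EntryPoint = "add")]):

                // Assumes a NativeAOT-compiled C# library `libmanaged`
                // exporting a C-ABI function `add`.
                #[link(name = "managed")]
                extern "C" {
                    fn add(a: i32, b: i32) -> i32;
                }

                fn main() {
                    // SAFETY: `add` is a plain function over two ints in the C ABI.
                    let sum = unsafe { add(2, 3) };
                    println!("2 + 3 = {sum}");
                }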
        • igouy 11 days ago
          • neonsunset 10 days ago
            Thanks!

            Also, wow, I stand corrected: Swift is a much stronger contender in Benchmarks Game scenarios than the earlier microbenchmarks and research projects I've seen would suggest. Particularly when using its own SIMD abstraction - it is likely LLVM makes very competent transformations, possibly merging narrower operations into wider ones, replacing one set of intrinsics with another, etc.

            However, it still seems to suffer quite a bit, just like Go, in Binary Trees (it's not like C# does the best there either; the winners use bump allocators, and then there's also a 2.6s magic entry by Java - write barrier elision?).

    • brazzy 11 days ago
      > "Systems programming" is hard, and should be hard, for reasons unrelated to the programming language used.

      The question is: is it harder than it needs to be due to accidental complexity[1] in programming language design. TFA argues that there is still some accidental complexity in the design of Rust that leaves room for an "easier" language that still allows the level of control needed for a systems language.

      [1] https://medium.com/@sharkaroo/navigating-complexity-in-softw...

      • dist1ll 11 days ago
        There's definitely a bunch of accidental complexity in systems programming. Even in domains with extreme performance constraints, a large portion of the code base could be expressed with simpler and safer semantics like MVS (mutable value semantics).
    • armchairhacker 11 days ago
      And Kotlin, and arguably Java and Scala, and possibly Dlang and Nim. There are a lot of strongly-typed general-purpose languages.
      • dartos 11 days ago
        Yeah the JVM is quite fast at this point after it starts up.
        • pjmlp 11 days ago
          It can even be helped to be faster with AOT/PGO, or JIT caches.

          And while Android ART technically isn't Java (the platform), its mixed execution mode - a hand-written assembly interpreter, a JIT, and AOT with PGO shared via the Play Store - is quite cool.

    • angra_mainyu 11 days ago
      >A language with the learning curve of Python, near-C performance and the strong safety guarantees of Rust.

      Sounds like Go or Zig, though Go is slightly less safe and slower - in turn being much easier.

      Though I'd reach for Go first if I'm working with anything concurrent. Concurrency is hard, and Go really helps with getting it right.

      • kaba0 11 days ago
        Go is not at all playing in the same league as Zig or Rust. Of course managed languages are more than fine for almost every use case, but I really hate this trend of lumping Go in with low-level languages for absolutely no good reason.
        • mwcz 11 days ago
          That's not just a trend. There are many areas of programming where a choice between Rust and Go is perfectly reasonable. Rust is playing on Go's turf, but Go isn't playing on Rust's, if you get my meaning.
          • pjmlp 11 days ago
            F-Secure created the USB Armory firmware in Go instead of Rust, exactly because they wanted to make the point that this is a possible scenario for Go, regardless of what the interwebs think about the idea.

            USB Armory is still being sold.

      • gliptic 11 days ago
        Zig doesn't have anything like the safety guarantees of Rust.
    • _kb 11 days ago
      Crystal lang also plays a bit into that space. It has a fairly expressive type system, good inference, checked nil type, and very approachable syntax.
    • kkukshtel 11 days ago
      C# nails this consistently; it's not worth hand-waving it away because it isn't FOSS. Rust's own drama suggests that being a FOSS language doesn't shield it from corporate meddling.
      • raddan 11 days ago
        The C# compiler has an MIT license and is available on GitHub, which is about as FOSS as it gets.

        https://github.com/dotnet/roslyn

        • crote 11 days ago
          The compiler is one thing, the ecosystem is another.

          Sure, you can compile and host your .NET Core web app on Linux these days, but desktop applications are a completely different beast. You're often lucky if one exists at all, and if it does there's a decent chance it's not designed for "modern" C#-on-Linux. It really isn't a viable option yet, maybe in a decade or two.

          • stanac 11 days ago
              WPF is MIT-licensed and available on GitHub. The fact that it's Windows-only doesn't make it less FOSS. Microsoft doesn't have an incentive to port it to other platforms, but other people and companies are free to do that.
            • MarkSweep 11 days ago
              WPF is not the best example of open source, as some components are still closed source. Though it only runs on Windows, a closed source operating system, so perhaps that is not so important.

              https://github.com/dotnet/wpf/issues/2554

              That said, there are cross platform, open source .NET UI frameworks out there, including one that is inspired by WPF:

              https://avaloniaui.net/

      • dartos 11 days ago
        The issue for me with languages like C# is the lack of expressiveness.

        Sometimes classes aren't the abstraction I want to use.

        That's why I like Kotlin over Java or C#. It'd be nice to have a language like that which targets .NET.

        • estebarb 11 days ago
          You should try Java and C# again. They have added lambdas, and the support for higher-order functions is quite good. Sure, some things are still missing, like guaranteed tail-call optimization.
          • neonsunset 11 days ago
            • pjmlp 11 days ago
              Actually, they were made for other languages, given the original goal of the Common Language Runtime, as published in the 2001 release note.

              https://news.microsoft.com/2001/10/22/massive-industry-and-d...

              Anyone with the pre-release MSDN .NET SDK that was initially made available to MSFT Partners, with alpha documentation written in red, has a folder with tons of functional languages that could take advantage of tail calls.

              F# only came into the picture later, even considering its OCaml-for-.NET origins.

            • SirGiggles 11 days ago
              Not the person you replied to but, today I learned!

              As an aside, and as someone who's currently going through materials such as Crafting Interpreters (and who knows what else based on suggestions from r/compilers), is there a, I guess, guide for people who want to implement a compiler that targets the CLR/CLI?

              I have a copy of CLR via C# 4th edition, but other than that not sure what else I can reference targeting anything newer than .NET Framework 4

          • dartos 11 days ago
            I have tried C# recently.

            They have support for these features, but they require some ceremony to use (like wrapping function arguments in a Callable), which increases friction, hurts the signal-to-noise ratio, and decreases the expressiveness that I want.

            I just want to write what I want without having to think too much about how the language wants me to write it. Go is good for that. JS too, though it has a lot of historical cruft.

            • kkukshtel 10 days ago
              I feel like your "recent" must have still been years ago. Modern C# is as terse as Python/TS/JS for basically any equivalent task, if not more so. Dotnet 8 even makes the following possible:

                  var foo = [1,2,3];

              Add on switch expressions, primary ctors, first-class functions, no ceremony in Program.cs anymore... it's hard to get more terse and expressive unless you want lots of chained single-symbol operators (look at F#).

              • neonsunset 10 days ago
                (slight correction: `var foo = [1, 2, 3];` requires natural types for collection literals (the compiler needs to infer the best type for var); this is coming in C# 13 (.NET 9, this November), but `int[] numbers = [1, 2, 3];` works today)
                • kkukshtel 10 days ago
                  You're right! I remember some session where they talked about wanting to infer a natural type for stuff like that, but it's obviously a big question. As you said, declaring the type directly allows it to work. However, the other nice part is that you _can_ do this:

                      void myFunc(List<int> xs) {}

                      myFunc([1,2,3]);

                  The collection expression syntax can infer the type from the call site, so you can just inline the list declaration.

              • dartos 10 days ago
                My recent was a month ago, but to be fair, the CTO of that company (who had written the existing code when I joined) last wrote production code in 2008, so his code may not have been using those newer syntax features.
                • kkukshtel 10 days ago
                  Yeah, it's very easy to write Java-style C# that is incredibly verbose and rigid, but once you understand modern C# you can see how it's this really great (imo, awesome) hybrid of functional/typed languages. They've done a ton of syntax work to let types get out of your way when you need them to, with the ability to always fall back to rigidly declaring them when you need better readability.

                  The downside of all this is that the only people who know how awesome C# is are people who are already doing C#. Growing the pot and trying to convince people to give it a go in its modern (post dotnet 5/"core") iteration is like pulling teeth. Everyone assumes nothing has changed since, like, C# 2, and given that it's a "boring" language that doesn't have Hacker Hype behind it, people just ignore it.

                  Every week you have people on HN wishing for something that does exactly what C# does, but they won't give it a shot or admit that C# is _actually_ an incredible language and toolset that does exactly what they want (and more).

        • neonsunset 11 days ago
          C# offers multiple options for abstracting away functionality. What do you have in mind?

          Also, the JVM ecosystem does not offer the ability to provide zero-cost abstractions, which C# does with monomorphized generics. This is a hard requirement for productive systems programming if you don't want the C level of verbosity.

          • dartos 11 days ago
            > Also, the JVM ecosystem does not offer the ability to provide zero-cost abstractions

            This is true, wasn’t thinking about that.

            C# always requires me to use and organize my code into classes and my data into objects.

            For example, if I want to use higher-order functions, I need to wrap functions in callables (objects).

            If I wanted to throw together a quick script to test, I need to set up a program which uses some magic configurations so that I don’t need that entry point class.

            Granted I only worked with C# professionally for a couple of years before getting back to a Go shop, but it always felt like trying to avoid thinking in objects was fighting against the language.

            Personally, I like just having first class functions which take simple data structures as arguments. Modeling the world as objects just isn’t as clear to me (the old OOP vs FP discussions)

            Languages like C# and Java turn me off for that reason.

            • neonsunset 11 days ago
              Are you sure you are talking about C# and not some other language? There is no such thing in C# as "callables". It has lambdas, Funcs and delegates.

              It has always had a fair share of FP features and only gained more as it kept evolving. It has had higher-order functions partially, in the form of method-group-to-delegate conversions, since C# 2.0 (released 19 years ago), and to a full extent, in the form of lambda expressions, since C# 3.0 (released 17 years ago, which also included LINQ). It is a mixed-paradigm language.

              On the off chance that this is trolling, I must point out that Java and C# are sufficiently distinct, with the latter leaning more heavily on offering both lower and higher level features like extensive pattern matching, LINQ, structs and pointers/byrefs.

              If you do have C# snippets you are unhappy with - please post them. There is likely a better way.

              (I noticed you posted about Callables twice, and had a quick search - it appears to be a Godot-specific abstraction imposed on C# due to inadequacy of GDScript it has to interoperate with, and has nothing to do with C# itself)

              • dartos 10 days ago
                Callable may have been the wrong word. I think I meant delegate. (I do work with Godot, but not in C#. Some wires must’ve gotten crossed)

                Regardless of the actual word, there is an extra thing I need to do to pass a function to a higher-order function as an argument.

                I was using this as an example to demonstrate how C#'s support for the language features I like requires some ceremony, which reduces expressiveness and requires me to think about the language more.

                • neonsunset 10 days ago
                  You didn’t. The example does not exist and the terms you used don’t apply to C#.

                  Post an actual example (code).

              • pjmlp 11 days ago
                Unity and Godot are both a blessing and a curse for C#'s adoption.

                On the one hand, they help its adoption in the games industry; on the other hand, they introduce so many anti-patterns with their reflection-based SDKs and use of magical methods instead of proper .NET code.

            • prmph 11 days ago
              Exactly.

              I used C# for a long time, and when I started using JS, I marveled at the directness of its expressiveness, its lack of ceremony.

              What's actually needed is a language as direct as JS, with the bad parts stripped out, types and a proper standard lib added, and compiled to byte-code for performance.

              They can call it a different name if they want.

          • pjmlp 11 days ago
            Valhalla might help there, but yeah, we are still a couple of years away, and currently the best option is to manually model memory with the new Panama API - definitely not in the same league as C#/.NET today, which I consider the closest modern language to what Modula-3 promised us.
            • fuzztester 11 days ago
              In what ways is C# the closest to Modula-3's promises?
              • pjmlp 11 days ago
                Now that AOT is part of the standard toolchain, with plenty of Midori and C++/CLI capabilities also exposed at the C# language level, it is a memory-safe systems language with a modern type system.

                I would also place D and Swift into that bucket, only D never managed to really take off when it had the opportunity, and Swift is clearly at home in iDevices.

      • galangalalgol 11 days ago
        Those are good points, but my problem with C# is that it isn't as performant on Linux. On Linux I'd suggest Go as the more performant option. In any case, I like both of those so much more than Swift. Swift is slower even with bounds checks off; you can't turn them off in Go.
        • aljgz 11 days ago
          > but my problem with C# is that it isn't as performant on Linux

          Any evidence for this? From the Benchmarks Game it seems like C# is more performant than Go, and the measurements are done on Linux.

          https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

          • galangalalgol 11 days ago
            I think it is because I generally look at a given benchmark there and don't browse below the "optimized" section to look at solutions that use intrinsics directly. Being able to use intrinsics directly is a great feature, but it doesn't fit the whole "fast, but not so fast that I need to worry about safety" theme. When you exclude solutions that use intrinsics, Go is faster.

            I use Rust as my daily driver at work and at home, and I don't think I'd select either Go or C# for a project. When I just want something that works and I'm not worried about performance, Rust is easy to use; it is only when I'm trying to squeeze out performance that it gets hard. It is hard to convince Go that it is safe to skip bounds checks, and while writing intrinsics isn't hard, it isn't very portable, so you have to write lots of versions unless you metaprogram it, which I find harder than idiomatic code in any of these languages. Squeezing out performance is easy in C++, even in C++17, but it usually entails doing something that will "probably always be ok" - and doing stuff like that repeatedly adds up to frequent issues.
          • Thaxll 11 days ago
            Those benchmarks were debunked, they don't represent anything.
    • farresito 11 days ago
      I think that's what they are trying to achieve with Mojo.
      • sa-code 11 days ago
        Agreed, although the language still feels like a tech demo. They don't even have a package manager yet. I'm curious and hopeful to see what it looks like in a year
        • mirekrusin 11 days ago
          They don't yet have `class` support.
          • sa-code 11 days ago
            They have their own version, called struct, which works in a very similar way. I feel like this should be mentioned when saying that `class` support doesn't yet exist.
            • mirekrusin 11 days ago
              Yes, you're right, and struct is _the_ interesting part about Mojo - it's enough to write interesting things in it.

              Floodgates will open though once class is also supported.

      • fuzztester 11 days ago
        Posted this Chris Lattner interview about Mojo to HN yesterday. The video is just 6 days old. It's on the Developer Voices channel.

        https://news.ycombinator.com/item?id=40285414

      • almostgotcaught 11 days ago
        [flagged]
    • knighthack 11 days ago
      I really must suggest Nim.
    • Lutger 11 days ago
      The D programming language is mostly this, or could be this; however, it has several flaws. It still isn't as polished, doesn't have the ecosystem, and is sort of trying to be Rust, C# and Python at the same time, which doesn't always work out that well. But when it does, it is amazing. It also has some crazy and unusually powerful metaprogramming abilities.
    • ddorian43 11 days ago
      Isn't Java better than Swift/C# ? (more open, green threads, better ecosystem).
      • kryptiskt 11 days ago
        C# has value types, stackalloc, simpler FFI and a rich SIMD API, offering far better control for low-level code than Java. I don't feel green threads are a systems-level concern, and while Java's ecosystem is huge, it's also very enterprisey. Also, I'd disagree with "more open": C# is under a more permissive license, and Oracle has actually been litigious over Java.
        • pjmlp 11 days ago
          While I also favour C# over Java as languages, with the runtimes it is a bit different, given the wide spectrum of JVM implementations in the wild.

          Let's not forget that C# only exists because Sun won the lawsuit over J++, where several C# features were born (J/Direct => P/Invoke, events, JFC => Windows Forms, COM support); the Ext-VOS paper (on the COM next-generation runtime) refers to J++, from before COOL became C#, and J# was introduced as a migration path from J++.

          Ironically, after all these years, as a means to keep Azure relevant for the Java community, Microsoft has become an OpenJDK contributor by acquiring jClarity - including ARM support on Windows and better escape analysis - and has its own distribution.

          And the VSCode plugins for Java, developed in collaboration with Red Hat, are much better than the C# Dev Kit, without requiring a Microsoft account.

          • neonsunset 11 days ago
            The only features the Dev Kit offers are Visual Studio-style solution and test explorers. You don't need either the Dev Kit or a Microsoft account to use the base C# extension.

            The description is confusing, though. I suppose they did the whole "rebrand" thing because for years everyone thought poorly of the experience of writing C# in VSC, while it has always been semi-decent (depending on your skills) and got better recently, especially after moving from OmniSharp to the Roslyn-based LSP.

      • pjc50 11 days ago
        Debatable; I much prefer the dotnet ecosystem, even if it's less open. I suspect green threads vs C# async is another matter of taste as well. Plus, Oracle is a much more annoying overlord for Java.
      • alternatex 11 days ago
        Someone prefers Java over Swift/C#? Now I've seen everything.
      • naasking 11 days ago
        How is Java more open than C#?
        • pjmlp 11 days ago
          Its features are driven by a consortium of Java companies, even if Oracle does the majority of the development.

          This allows for a spectrum of JVM implementations, where basically any CPU capable of running some form of Java implementation has a JDK available.

          https://en.wikipedia.org/wiki/List_of_Java_virtual_machines

          Likewise, most frameworks are driven by a set of companies, based on industry standards, whereas in .NET, due to its culture, most Microsoft shops tend to only adopt what comes from Microsoft.

          It is a long-discussed problem in the .NET community that when many FOSS projects finally manage to get enterprise adoption, Microsoft ends up coming out with its own solution, which said enterprises then switch to, thus killing those FOSS efforts.

          Listen to big podcasts like .NET Rocks, where this tends to be discussed often with their guests.

          • naasking 11 days ago
            I don't see how any of that means that C# is less open. Microsoft drives C# feature evolution from community feedback, so is that less open than a consortium of commercial interests?

            Does Microsoft somehow prevent anyone from developing a CLR implementation for CPUs they don't support? The only reason the JVM is everywhere is because they had first-mover advantage.

            How does Microsoft developing their own solution to a problem "kill" FOSS efforts, exactly? Microsoft's solutions are almost all also FOSS. Is competition some kind of inherent problem? Or is a large corporation creating and maintaining FOSS a problem? Why aren't you annoyed at those enterprises that started adopting the original FOSS projects, but didn't contribute dev time to maintaining them? Instead you're annoyed at Microsoft for creating its own FOSS and paying to maintain it for the community for free.

            I just find most objections to .NET so bizarre, it's almost always knee-jerk Microsoft hate.

            • taylodl 11 days ago
                It's been 25 years since the release of C# and 30 since the release of Java. The fact that C# isn't everywhere Java is has nothing to do with "first-mover advantage", as you call it, but is due to the fact that C# is open-source software, which is not the same as Free Software. Java, even Oracle's Java, is controlled by the community; .NET is controlled by Microsoft. That's the difference.
              • naasking 11 days ago
                C# and VB compiler suite, MIT license: https://github.com/dotnet/roslyn

                .NET virtual machine, MIT license: https://github.com/dotnet/runtime

                SDK, MIT license: https://github.com/dotnet/sdk

                You were saying?

                • taylodl 11 days ago
                  It's Microsoft-driven, not community-driven. For the past 25 years the Microsoft camp has either never understood, or hasn't cared about, that distinction.
                  • naasking 11 days ago
                    Each of those links shows thousands of contributors not connected to Microsoft, but do go on believing there's no community, or that it's a terrible thing that people rely on Microsoft to produce a free, open source, solid product they can depend on. That's the reason it's Microsoft-driven, because they've mostly been doing just fine as a steward, and it being MIT licensed means MS knows the community can take over if they step too far out of line.
                    • tcmart14 11 days ago
                      I think this may be a better way to explain it. Suppose someone comes up with a C# proposal that defines a way to introduce the Result-type enum system that Rust has, as a replacement for exceptions. Suppose this is a pretty solid plan and an easy transition: we can make a magic button that all the code bases, when upgrading to the next version of C#, just have to press - and it's done. No more exceptions; you now have Result return types. Let's also suppose this proposal has a large amount of the community on board with it. At the end of the day, that proposal will live or die by someone (or some group) at Microsoft.

                      Now granted, I don't know if it would be different with Java and Oracle. But that is the point: a significant enough change proposal for C#, like the one supposed above, that goes against where Microsoft thinks the development path of C# should go - that is the important bit. So long as contributions and feature requests align with Microsoft's interests, it's all good; but if the community has large support for a contribution or change request that doesn't align, then it may be different.

                      Addition: although arguably, this is true for most languages. Even FOSS languages have some type of leadership that those big, shifting proposals will live and die with. Not intending to cast anyone in a bad light, but just a project leader and a project I am familiar with: if a proposal as ground-shifting as the one proposed above were suggested for Zig, the decision of whether to do it would live or die with Andrew Kelley and his close-knit team of contributors. The same could be said about Rust - the project has leadership, and they could kill a popular proposal. Even outside of languages: in the Linux kernel, a feature or restructuring may live or die by the hand of Linus Torvalds.

                      • naasking 11 days ago
                        > So long as contributions and feature requests align with Microsoft's interests, it's all good; but if the community has large support for a contribution or change request that doesn't align, then it may be different.

                        Speculation about what Microsoft may do or may not do in response to community pressure is a weird kind of criticism. I can invent all sorts of fictional boogeymen too. What if Linus has a psychotic break and wants to make the Linux syscall interface mimic the Windows NT kernel?

                        People would do exactly what I'm suggesting here: fork and continue development without him.

                        As you also say, leadership means leading, and Microsoft takes backwards compatibility very seriously, and that's why people trust them and use their offerings, sometimes instead of open source offerings.

                        A change like that would invalidate every tutorial and document ever written about C# over 25 years, so is it really C# anymore? I don't think anyone would make that change. It's simple to fork the MIT licensed C# compiler, call it C* and make that transition in a language that is definitely no longer C#. I just don't think this is a good example.

                        For good examples, see the language evolution that has actually taken place which has been community driven in many cases. Records, tuples, the evolution of type constraints have all been started from community feedback, and are backwards compatible.

                        As a final note, a comment on "aligning with Microsoft's interests". They are interested in two things for .NET: selling Azure hosting, and incentivizing Windows programming. Making the software development experience pleasant, robust, predictable and backwards compatible are the objectives, so devs are incentivized to pay them to program on their OS, target your program to their OS, or run your software on their cloud infra. I really don't see the problem.

                      • neonsunset 11 days ago
                        Not exactly related to the conversation but, given that you mentioned (discriminated) unions, here are notes on recent design work:

                        https://github.com/dotnet/csharplang/blob/main/meetings/2024...

                        https://github.com/dotnet/csharplang/blob/main/meetings/2024...

                        • tcmart14 11 days ago
                          Would be really nice to have. I hope we can get it with something like optionals you see in Rust or Swift. And really, with optionals, something like Swift's guard let statements.
                      • taylodl 11 days ago
                        Java still has the Java Community Process. Oracle has a thumb on the scale, but it's still a largely community-driven process. The .NET community has nothing even approaching this level of community and openness. Microsoft is in the driver's seat.
                    • taylodl 11 days ago
                      Not the point. Who's leading? That's the point. Not impressed with your example of a corporate behemoth using free labor. That's the wrong kind of Free.
                      • naasking 11 days ago
                        > Not the point. Who's leading? That's the point.

                        No, the point is, who cares as long as everyone is happy with the direction?

                        • taylodl 11 days ago
                          Not quite the same thing though, is it? And if you're not happy with the direction then you're stuck.
            • cess11 11 days ago
              Why would MICROS~1 hate be something bizarre?

              I find such hate quite understandable. I might not share it, but I would not put any trust in MICROS~1 either, especially not in a professional setting.

              When I develop applications for MICROS~1 platforms, I do it in Java, betting that the Eclipse foundation and a plethora of corporations, including MICROS~1, aren't going to suddenly team up and do something nasty to the language and important libraries. With MICROS~1 as the lone manager of a language and most of its 'ecosystem' this seems like a real risk.

              • naasking 11 days ago
                > With MICROS~1 as the lone manager of a language and most of its 'ecosystem' this seems like a real risk.

                Why? All of the code is MIT licensed. If MS does anything untoward, forking it is trivial.

                • cess11 11 days ago
                  Sure. Organising a community to care for the fork is not.
                  • naasking 11 days ago
                    Ximian and Mono were built almost immediately after .NET was released and provided a decent multiplatform community-supported alternative for years, but you somehow think that forking an existing and working codebase would be more difficult than a completely independent reimplementation?

                    Look, there are obvious differences in how Java and the JVM are maintained and evolved as compared to .NET, but to continue claiming that these differences really matter all that much is just FUD, to put it politely.

                    • cess11 11 days ago
                      I made no such comparison.

                      No, it's not. Is Mono independent today?

            • Nullabillity 11 days ago
              So far I haven't had jdb refuse to open because it wasn't running inside of an Oracle-provided mystery meat build of NetBeans.
          • nickpsecurity 11 days ago
            Oracle's lawsuits, especially over copyrightable APIs, make me default to staying away from anything they make. I'd leave any platform they acquire, too. There are many open platforms whose owners or backers aren't as aggressive in lawsuits as Oracle is, and many have strong communities of contributors.

            The only reason I'd use or contribute to Oracle I.P. is if I was paid to do so. The other is porting off Java, but that's legally risky if minima changes. CPAchecker was the one I was thinking of. Past that, I'd try to avoid Java.

    • binary132 11 days ago
      I mean, Go and Java have those properties, but achieve them using managed runtimes.

      The first thing that crossed my mind reading this comment was Nim. It’s pretty Pythonesque (but that’s kinda what I hate about it.)

      • gwd 11 days ago
        I love Go, but it certainly doesn't have the safety guarantees that Rust has. You're not going to have a memory leak or use-after-free, due to the garbage collector, but you have nil pointer dereferences and data races.
        • usrnm 11 days ago
          I have seen more leaks in the golang project I currently work on than in most C++ codebases I've worked with in my career. It's trivial to leak stuff in golang, and much more difficult to make sure that you don't.
          • binary132 11 days ago
            Ok? Is it not trivial to leak in unsafe Rust? Managed memory is safety. And frankly, OOM termination is not unsafe. If you crash the host because you ate all the system resources, that's a different thing - and Rust doesn't fix that either.
        • binary132 11 days ago
          Also, data races in Go are trivially addressed using the common pipeline / CSP pattern, since channels are serializing. Of course, it doesn’t eliminate the possibility of doing stupid concurrency things, and you could of course argue that giving users trivially easy concurrency primitives encourages stupid concurrency, but I think it is fair to say that in most cases, Go idioms and primitives make race-free concurrency pretty easy. You have to go out of your way to write racy Go, and there’s nothing really preventing people from going out of their way to write unsafe Rust, either.
        • pdimitar 11 days ago
          And it's trivially easy to ignore an error.

          Also channels and WaitGroups are easy to misuse (well, that's why there's also an ErrGroup now).

          I'd make Golang my 100% go-to language if it got rid of several footguns.

          Though let's be real, that's absolutely certain to never happen, they take their backwards compatibility extremely seriously. So we'll have to keep relying on checkers and linters that alleviate some of the pains somewhat.

          • angra_mainyu 11 days ago
            I'm not aware of any serious Go project without linting and tests.

            In fact, most projects I've worked on won't even merge a PR with ignored errors.

            Also, `go test -race` is a Go feature, there's nothing wrong with using it.

            • dgacmu 11 days ago
              go test -race is awesome and everyone should use it.

              But it's worth remembering that it's a dynamic analysis that only covers access patterns created by your tests.

        • angra_mainyu 11 days ago
          Strange, I've worked on complex projects in the distributed space with Go and it's very easy to catch these things with `-race` and linters.
        • binary132 11 days ago
          nil dereference isn’t unsafe.
          • imtringued 10 days ago
            It's undefined behaviour, which is even worse!
            • binary132 9 days ago
              Nope. Nil deref in Go is checked, and simply panics, just like array access out of bounds.
    • nikhilsimha 11 days ago
      Nim?
    • melodyogonna 11 days ago
      Check out Mojo.
    • prmph 11 days ago
      What's actually needed is to start with JS, and:

      - Strip out the bad parts.

      - Add types, more functional features, a proper standard lib, and a standardized module system.

      - Allow compilation directly to performant machine or byte code.

      They can call it a different name if they want.

      • thfuran 11 days ago
        Why start with JavaScript?
        • prmph 11 days ago
          Why not?
          • thfuran 11 days ago
            Because it seems so very far away from your goal. What is it bringing to the table that makes it the best starting point?
            • prmph 11 days ago
              As a front-end interface, JS is pretty good.

              - It has simple, straightforward, and pleasant syntax/semantics. Except for a few tweaks here and there, there is no need to re-invent the wheel here.

              - It is pretty functional; all that's needed is to add everything-is-an-expression

              - It supports multiple paradigms (functional, imperative, OOP)

              - Since it has no types currently, a _modern_ type system can be feasibly added, supporting concepts like result/error instead of exceptions, maybe/option, pattern matching, etc.

              - It has easy-to-reason about async features

              - It has a huge ecosystem; it shouldn't be hard to port many high-quality packages to a similar language

              Now, the lower-level features are another matter entirely, but it shouldn't be too difficult to retrofit what we want, like multi-threading (maybe building on top of workers), UTF-8 strings, and so on.

              • ctxcode 11 days ago
                If you change the language, you have no ecosystem. You can't say it has a big ecosystem "if everyone ports their code" - by that logic, all languages have a big ecosystem. Anyhow, JS has a lot of unexpected, strange behaviour; I really would not recommend such a language in 2024.
                • prmph 10 days ago
                  So let's take out the "unexpected strange behavior" - that's all I'm saying.
              • 63stack 10 days ago
                With so many sweeping changes you could take any currently popular scripting language as a base.
              • knighthack 10 days ago
                > It has a huge ecosystem; it shouldn't be hard to port many high-quality packages to a similar language

                ...Spoken like a person who's never coded anything meaningful or serious, if you can't even gauge the complexity and man-hours needed to create (much less port) high-quality packages between programming languages.

                • prmph 10 days ago
                  No need for that flippant response. You do not know the work I've done, so don't assume I don't know what I'm talking about.

                  Talking about "man-hours" tells me you think of software engineering like an assembly line, which it is not.

                  There are a whole lot of factors that might influence the time needed for such porting; the key word in my comment is "similar". If the new language is a superset of the original language, minus a few warts, I'm not sure what should take so much time. For one thing, the architectural design of the packages (which probably took most of the original effort) remains the same.