Modern C++ Won't Save Us

(alexgaynor.net)

326 points | by neptvn 1824 days ago

24 comments

  • jandrewrogers 1824 days ago
    A significant issue I have with C++ is that even if your code base is pure C++17, the standard library is a Frankenstein's monster of legacy and modern C++ mixed together that required many compromises to be made. A standard library that usefully showed off the full capabilities of C++17 in a clean way would have to jettison a fair amount of backward compatibility in modern C++ environments.

    I've noticed that more and more people like me have and use large alternative history "standard libraries" that add functionality, reimagine the design, and in some cases reimplement core components based on a modern C++ cleanroom. I've noticed that use of the standard library in code bases is shrinking as a result. You can do a lot more with the language if you have a standard library that isn't shackled by its very long history.

    • near 1824 days ago
      Because C++ is my primary language, and I always work on my codebases alone, I dropped the standard library and implemented my own replacement. It's not at all practical for most I'm sure, but it allows me to evolve the library with new revisions of the C++ standard without being absolutely fixed on backward compatibility.

      One of the things I did for safety is that all access methods of all of my containers will bounds check and throw on null pointer dereferences ... in debug and stable mode. And all of that will be turned off in the optimized release mode, for applications where performance is absolutely critical. The consistency is very important.

      Whenever I get a crash in release mode, I can rebuild in debug mode and quickly find the issue. And for code that must be secure, I leave it in stable mode and pay the small performance penalty.
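
      To give a rough idea of the shape (a minimal sketch with made-up names, not the actual library; the macro name is hypothetical): the check only compiles away when an explicit release macro is defined.

          #include <cstddef>
          #include <stdexcept>

          template<typename T> struct vector {
            auto operator[](std::size_t index) -> T& {
          #if !defined(RELEASE_UNCHECKED)  // debug and stable builds keep the check
              if(index >= size_) throw std::out_of_range("vector: out of bounds");
          #endif
              return data_[index];
            }
            // ...
            T* data_ = nullptr;
            std::size_t size_ = 0;
          };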

    • masklinn 1824 days ago
      > A significant issue I have with C++ is that even if your code base is pure C++17, the standard library is a Frankenstein's monster of legacy and modern C++ mixed together that required many compromises to be made. A standard library that usefully showed off the full capabilities of C++17 in a clean way would have to jettison a fair amount of backward compatibility in modern C++ environments.

      Not to mention C++ does not really provide the facilities necessary for convenient, memory-safe and fast APIs[0].

      And as demonstrated by e.g. std::optional the standard will simply offer an API which is convenient, fast and unsafe (namely that you can just deref' an std::optional and it's UB if the optional is empty).

      [0] I guess using lambdas a hell of a lot more would be an option but that doesn't seem like the committee's style so far.

      • jcelerier 1824 days ago
        > (namely that you can just deref' an std::optional and it's UB if the optional is empty).

        if that was not the case, `optional` would get exactly zero usage. The point of those features is that you build in debug mode or with whatever your standard library's debug macro is to fuzz your code, but then don't inflict branches on every dereference for the release mode.

        • masklinn 1824 days ago
          > The point of those features is that you build in debug mode or with whatever your standard library's debug macro is to fuzz your code, but then don't inflict branches on every dereference for the release mode.

          That's completely insane. If there's always a value in your optional, it has no reason to be an optional; if there may not be a value in your optional, you must check for it.

          • noselasd 1824 days ago
            Sure, but that's not the issue. You should be using a std::optional like e.g.

               if (my_optional) 
                     do_stuff(*my_optional);
            
            Here's one (explicit) conditional.

            However, if the dereferencing, *my_optional, should be safe, it too would need to perform a conditional check behind the scenes. But it doesn't, as C++ places that in the programmer's hands so as not to sacrifice speed.
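
            To make the hidden cost concrete, a rough sketch (not the actual standard library source; checked_deref is a made-up name, the standard spells the checked access my_optional.value()):

                #include <optional>  // for std::bad_optional_access

                template<typename T> struct optional {
                  bool has_value_ = false;
                  T value_;  // real implementations use raw storage; irrelevant here

                  // as standardized: no branch, UB when empty
                  T& operator*() { return value_; }

                  // the checked alternative: one extra branch on every access
                  T& checked_deref() {
                    if(!has_value_) throw std::bad_optional_access();
                    return value_;
                  }
                };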

            • simias 1824 days ago
              This is solved in Rust by letting you test and unwrap at the same time:

                  if let Some(obj) = my_optional {
                      do_stuff(obj);
                  }
              
              >However, if the dereferencing, *my_optional, should be safe, it too would need to perform a conditional check behind the scenes. But it doesn't, as C++ places that in the programmer's hands so as not to sacrifice speed

              So basically that turns C++ optional types into fancy linter hints which won't actually improve the safety of the code much.

              I understand C++'s philosophy of "you pay for what you use", but that's ridiculous: if you use an optional type, it means that you expect that type to be nullable. Having to pay for a check is "paying for what you use". If you don't like it then don't make the object nullable in the first place and save yourself the test. That's just optimizing the wrong part of the problem.

              • maccard 1823 days ago
                You can also do it in a one-liner in C++ if you're using shared pointers:

                    if(auto obj = my_weak_ptr.lock())
                    {
                        do_stuff(obj);
                    }
                
                > Having to pay for a check is "paying for what you use". If you don't like it then don't make the object nullable in the first place and save yourself the test.

                The point is that I can choose _when_ to pay that cost (e.g. I can eat the nullability check at this point but not at that point, and I can use more sophisticated tooling like a static analyser to reason that the null check is done correctly).

                Is it more error-prone? Yes. Does it allow for things to go horribly wrong? Yes. Is "rewriting it in Rust" a solution? No. If I want to pay the cost of ref-counting, I can use shared/weak ptrs.

                • aliceryhl 1823 days ago
                  The rust code in question is not using reference counting.
                  • cpeterso 1823 days ago
                    Rust's borrow checker is like compile-time reference counting. Same benefit, but no run-time cost.
              • masklinn 1824 days ago
                > So basically that turns C++ optional types into fancy linter hints which won't actually improve the safety of the code much.

                C++'s optionals are less "safer pointers" and more "stack-allocated pointers" (not to be confused with pointers to stack allocations).

              • noselasd 1821 days ago
                C++ gives you all the options as usual.

                  do_stuff(my_optional.value())
                
                Is also safe: it throws if the value is absent; the safety check is performed behind the scenes.

                But people might not want to throw an exception, so

                  if (my_optional) 
                         do_stuff(*my_optional);
                
                Must also be allowed. The consequence is someone can also just do

                  do_stuff(*my_optional)
                
                No safety check is done and you get undefined behavior if the value is absent.

                I don't know Rust, so I suspect it has a language construct which C++ lacks that prevents you from doing

                  let Some(obj) = my_optional 
                  do_stuff(obj);
                • steveklabnik 1821 days ago
                  Yes, you have to use if let, not let. That code would be a compiler error. (Specifically, a “non-exhaustive pattern” error.)
            • masklinn 1824 days ago
              Hence, going back to the original issue I pointed out:

              > C++ does not really provide the facilities necessary for convenient, memory-safe and fast APIs.

              > You should be using a std::optional like e.g. […] if the dereferencing, *my_optional, should be safe

              And once again a terrible API puts the onus back on the user to act like a computer.

            • comex 1824 days ago
              Nah. In simple cases like that, the compiler would always be able to optimize away an extra check, if such a check were present. After inlining operator bool and operator *, it would look something like

                  if (my_optional->_has_value)
                      if (my_optional->_has_value)
                          do_stuff(my_optional->_value);
                      else
                          panic();
               
              and the compiler knows that the second if statement will pass iff the first does.

              On the other hand, if the test is further away from the dereference, and perhaps the optional is accessed through a pointer and the compiler can't prove it doesn't alias something else, it might not be able to optimize away the check. However, that probably doesn't account for too high a fraction of uses.

            • ayosec 1824 days ago
              How is that different to this?

                  if (my_pointer != NULL)
                      do_stuff(*my_pointer)
              • humanrebar 1823 days ago
                Native pointers do a bunch of different things depending on the context. In contrast, optional has clear semantics.

                For instance, the ++ operator doesn't work for std::optional. For a native pointer, you just have to know (how?) not to use it.
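
                A two-line illustration (opt and ptr are placeholder names; the optional line is the one that refuses to compile):

                    std::optional<int> opt;
                    int* ptr = nullptr;

                    ++opt;  // does not compile: std::optional has no operator++
                    ++ptr;  // compiles fine; whether it means anything is on you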

              • w0utert 1823 days ago
                In terms of generated code, it is exactly the same. But that's not the point of optional types.

                The point of optional types is to force you to write checks for undefined values; otherwise your code will not compile at all. In the old-fashioned style of your example, you might forget to check for the possibility of a null pointer/otherwise undefined value, and use it as if it were valid.

                • 0xffff2 1823 days ago
                  But the whole genesis for this comment chain is that you can make exactly the same mistake with std::optional.
                  • w0utert 1818 days ago
                    Only if you deliberately unwrap the optional, which means you either don’t know what you are doing (in which case no programming language feature will be able to save you), or that you’ve considered your options and decided you want to enter the block knowing some variable can be undefined.

                    IMO, that’s not the same as not having optionals at all, and writing unconditional blocks left and right that may or may not operate on undefined values. It’s super easy to just dereference any pointer you got back from some function call in C++, without paying attention. Optionals force you to either skip the blocks, or think about how to write them to handle undefineds. Also, it’s ‘code as documentation’ in some sense, which I’m a big proponent of.

                    • masklinn 1809 days ago
                      > Only if you deliberately unwrap the optional

                      "Deliberately unwrap the optional" is the exact same thing as "deliberately unwrap the pointer", you just deref' it and it's UB if the optional / pointer is empty.

                      C++'s std::optional is not a safety feature (it's no safer than using a unique_ptr or raw pointer), it's an optimisation device: if you put a value in an std::optional you don't have to heap allocate it.

                      > It’s super easy to just dereference any pointer you got back from some function call in C++, without paying attention.

                      And optionals work the exact same way. There's no difference, they don't warn you and they don't require using a specific API like `value_unchecked()`. You just deref' the optional to get the internal value, with the same effects as doing so on any other pointer.

    • millstone 1824 days ago
      I agree with this and would take it a step further, and say that recent changes to the STL are the worst parts of modern C++. For example, std::regex supports 6 distinct syntaxes, the PRNG stuff is massively over-engineered, and the "extensions for parallelism" add complexity without giving enough knobs for any real perf improvement. Meanwhile there are gaping holes like UTF-8 support. It's a sad state.
      • blt 1824 days ago
        How is the PRNG over-engineered? I agree it's a little clunky for casual use, but it makes all the right decisions, IMO, for serious use of PRNGs (e.g. reproducible experiments for Monte Carlo methods in simulation and statistics)
        • petters 1824 days ago
          Initializing the Mersenne Twister is really hard: https://github.com/PetterS/monolith/blob/master/minimum/core...

          Edit: There are two links in the code with more info.
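
          Roughly the usual workaround (a sketch, not taken from the linked code): fill the whole state via std::seed_seq instead of seeding from a single 32-bit value.

              #include <algorithm>
              #include <array>
              #include <cstdint>
              #include <functional>
              #include <random>

              // mt19937 carries 624 32-bit words of state; seeding it from one
              // 32-bit value only ever yields 2^32 distinct streams.
              std::mt19937 make_seeded_engine() {
                std::random_device rd;
                std::array<std::uint32_t, std::mt19937::state_size> seed_data;
                std::generate(seed_data.begin(), seed_data.end(), std::ref(rd));
                std::seed_seq seq(seed_data.begin(), seed_data.end());
                return std::mt19937(seq);
              }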

        • rcxdude 1824 days ago
          The problem is there are no easy-to-use sensible defaults, just a confusing bunch of options with a bunch of apparently easy but subtly wrong ways to use it. Having the power is useful, but I would also just like a rand() (or better, a randrange()) which actually works.
        • patrec 1824 days ago
          > it makes all the right decisions, IMO, for serious use of PRNGs

          Apart from an awkward API that's hard to use correctly, the Mersenne Twister, which is basically the main generator, has been obsolete for years (bad-quality random numbers, slow, huge state, ...).

        • sorenjan 1823 days ago
          What's the modern C++ equivalent to C's

              (rand() % (b - a)) + a;
          
          or Python's

              random.randint(a, b)
          
          Easy to use and often good enough.
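
          For contrast, the blessed <random> spelling of roughly the same thing (a sketch, assuming ints and an inclusive range like randint; the names are made up):

              #include <random>

              int randint(int a, int b) {
                  static std::mt19937 gen(std::random_device{}());
                  std::uniform_int_distribution<int> dist(a, b);  // inclusive [a, b]
                  return dist(gen);
              }

          Correct and unbiased, but hardly as terse as rand() or randint.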
          • whyever 1823 days ago
            > (rand() % (b - a)) + a;

            This is no longer uniform, because it introduces a bias towards small numbers.

            • sorenjan 1823 days ago
              Yes, it's less than ideal. But like I said, often good enough. Sometimes you just want a simple way to get something approximately random, the actual distribution might be unimportant.
    • colanderman 1824 days ago
      What parts specifically? By my estimation, the only non-deprecated part of the standard library that really reeks of pre-C++11 (what I believe most consider the advent of "modern") is iostream. Most of e.g. the containers have been kept up to date with new features of the language (e.g. move semantics, constexpr).

      The standard library certainly is lacking things which are commonly used (say, JSON parsing or database connection), but I think this is a conscious decision (and IMO the correct decision) to include only elements that have a somewhat settled, "obvious", lowest-common-denominator semantics. There's rhyme and reason to most of the most commonly used elements that is decidedly lacking from e.g. Python's (much more extensive) standard library.

      • geezerjay 1824 days ago
        > The standard library certainly is lacking things which are commonly used (say, JSON parsing or database connection),

        I strongly disagree. It's quite obvious that the C++ standard library does not need to add support for "common things", because they already exist as third-party modules.

        In fact, this obsession to add all sorts of cruft to the C++ standard is the reason we're having this discussion.

        If there is no widely adopted JSON or DB library for C++ then who in their right mind would believe it would be a good idea to force one into the standard?

        And don't get me started on the sheer lunacy of the proposal to add a GUI library. Talk about a brain-dead idea.

        People working on other programming language stacks already learned this lesson a long time ago. There's the core language and there's the never-ending pile of third-party modules. Some are well-made and well thought-out, others aren't. That doesn't matter, because these can be replaced whenever anyone feels like it. This is not the case if a poorly thought-out component is added to an ISO standard.

        • barrkel 1824 days ago
          Standard libraries shouldn't include "leaf" modules, but probably should include interface / adapter modules. So no to JSON, but maybe yes to a serde interface. No to a database driver, but maybe yes to an interface like JDBC.

          Without common interfaces, flexibility in implementation is much more expensive, and innovation suffers too, as new things are harder to get off the ground without existing code that they can cheaply plug into.

          • daemin 1823 days ago
            There's an argument to be made for having basic so-called leaf modules in the standard library. That is, it makes it far simpler to get a basic installation of C++ and start doing cool things with it. Experienced developers or people that need domain-specific features would be using their own specialised libraries anyway.

            So instead of trying to figure out which one of the dozens of GUI frameworks to use in making a window and have it change colour, you just write it using the standard library. If you want to do an HTTP request, then there will be code in the standard library for that.

            It will also save work trying to figure out which third party library to use when you want to do these things locally on a small test project.

            • geezerjay 1823 days ago
              > So instead of trying to figure out which one of the dozens of GUI frameworks to use in making a window and have it change colour, you just write it using the standard library.

              Congrats, now you're stuck with something like Xwindows or MFC or AWT.

              • daemin 1823 days ago
                They're standard for the OS but not a standard for the programming language library.

                Granted AWT wasn't great but you could still make a GUI with it straight out of the box. It allowed you to make windows and buttons and start exploring the programming language.

                Like I said having a standard library option won't eliminate third party libraries, it will just provide something in the box for people to start using straight away.

              • pjmlp 1823 days ago
                Which while not ideal, are guaranteed to be present, contrary to third party libs.
                • geezerjay 1822 days ago
                  That assertion is disingenuous at best.

                  That guarantee is only achievable at the expense of forcing compiler developers to maintain a GUI toolkit for all platforms. Who in their right mind believes that's reasonable or desirable?

                  • pjmlp 1822 days ago
                    Everyone that wants a language to thrive instead of dealing with a thousand incompatible implementations.

                    Many C++ targets don't support IO or networking, so let's not burden embedded compiler developers with standard library bloat.

                  • daemin 1822 days ago
                    So instead of a GUI library, what about an HTTP or a network library as part of the standard? Surely handling TCP and UDP connections is an order of magnitude easier to implement and maintain.
        • coldtea 1824 days ago
          >I strongly disagree. It's quite obvious that the C++ standard library does not need to add support for "common things", because they already exist as third-party modules.

          It's not obvious to me at all.

          In fact, if that was a valid argument, it would be for C++ not having a standard library at all, as everything (including vectors, strings, etc) also exists as "third-party modules".

          • geezerjay 1823 days ago
            > In fact, if that was a valid argument, it would be for C++ not having a standard library at all

            Putting aside the continuum fallacy, it's easy to understand how C++ would be better served by having access to a collection of third-party components instead of repeating C's and even Java's mistakes.

            The Boost project is a very good example, as is the wealth of JSON and XML parsers.

            In fact, this lesson is so blatantly obvious that essentially all mainstream programming languages simply adopt official package managers and leave it to the community to develop and adopt the components they prefer.

            • coldtea 1823 days ago
              >Putting aside the continuum fallacy, it's easy to understand how C++ would be better served by having access to a collection of third-party components instead of repeating C's and even Java's mistakes.

              Java is very well served with its library. It would have been nowhere near as successful without it.

          • humanrebar 1823 days ago
            Third party modules would be a huge mess if there weren't at least common interface types like std::string_view and std::unique_ptr.
        • adrianN 1824 days ago
          I believe that C++ needs a fat standard library because using third party libraries is a bit cumbersome in C++. Alternatively there could be a blessed build system that makes third party library integration as easy as Cargo or Go Modules.
          • Maken 1824 days ago
            I would say that CMake pretty much covers that. What it lacks is a central registry, but I think C++ never intended to have one.
            • coldtea 1824 days ago
              What C++ intended and what C++ should have intended are two different things.
        • colanderman 1823 days ago
          > I strongly disagree

          No you don't. Read the rest of the sentence you quoted :)

        • gpvos 1824 days ago
          You're not disagreeing with colanderman:

          > (and IMO the correct decision)

      • Althorion 1823 days ago
        > include only elements that have a somewhat settled, "obvious", lowest-common-denominator semantics

        Can you, off the top of your head, tell me what irregular modified cylindrical Bessel functions are and the last time you needed to use one? And yet, they were included in the standard library in C++17: https://en.cppreference.com/w/cpp/numeric/special_math/cyl_b...

        • colanderman 1823 days ago
          I can't, but I bet they have a standard and well-accepted definition in the mathematical community.

          In fact, pretty much any real-valued mathematical function passes the test.

          The interface is settled, almost by definition since C++ functions are inspired by mathematical functions: pass in arguments, return result. Use range/domain exceptions or NaN for reporting such errors.

          The semantics are obvious: compute the named function.

          The interface is lowest-common-denominator: include float, double, and long double overloads.

          In fact, the same or similar interface is used in almost every language I've encountered. To contrast, the same is absolutely not true of e.g. a database module. I don't think I've ever seen two alike, disagreeing over even basic things such as whether the cursor or the transaction is the basic unit of interaction.
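
          Concretely, the C++17 special functions follow the existing <cmath> shape (paraphrasing the declarations):

              // C++17 mathematical special functions, in namespace std (<cmath>)
              float       cyl_bessel_kf(float nu, float x);
              double      cyl_bessel_k(double nu, double x);
              long double cyl_bessel_kl(long double nu, long double x);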

          • Althorion 1823 days ago
            There's a nearly limitless number of standard, well-defined functions with a single use case, like those. There's hardly a point in implementing them in the standard library, and C++ is the only language that I'm aware of that has those.

            If the goal was to create a specialized library for solving differential equations, those would be handy there. But if not, even if you tried implementing everything that you could potentially think of to implement, there are hundreds of things that are orders of magnitude more useful to have and equally well-defined and standardized—even if we limit ourselves to mathematics alone, I’d much rather see basic constants like π or e included, or quaternions, or arbitrary precision integers, or decimal numbers… or dozens upon dozens of other things before that.

            But mainly, I find it impossible to maintain the claim that any general-usage language, like C++, that implements such niche functions is trying to keep its standard library small and ‘include only elements that have a somewhat settled, "obvious", lowest-common-denominator semantics’.

            • colanderman 1823 days ago
              Then you're not disagreeing with me, because those functions pass my test as I demonstrated above.

              Why do these functions bother you so much? It can't be namespace pollution; they're under std::. It can't be that you disagree with their interface or semantics, since by your own admission you don't even know what they are.

              You named some other features, such as quaternions, that you think would be better for implementors to spend their time on, but surely you can imagine someone like yourself who is tired of having to define the Bessel functions every time they start a new project, and can't imagine why the C++ committee saw fit to include something so useless and obscure as quaternions before getting to Bessel functions.

              • Althorion 1823 days ago
                > Then you're not disagreeing with me, because those functions pass my test as I demonstrated above.

                Yeah. I must have misunderstood your definition of ‘obvious’—I thought you meant ‘an obvious inclusion to the standard library’, not ‘having an obvious definition’. The definition is obvious; why they should be in a standard library is not.

                > […] since by your own admission you don't even know what they are

                I mostly do—I studied mathematics. Or, to be more precise, I learned about them, then never used them in programming, had to remind myself what they were, and even after that, I don’t find them useful enough to warrant inclusion in the standard library. Thus, since they were included, I think that's good evidence that the C++ committee is not trying to keep its standard library concise.

                > You named some other features, such as quaternions, that you think would be better for implementors to spend their time on, but surely you can imagine someone like yourself who is tired of having to define the Bessel functions every time they start a new project, and can't imagine why the C++ committee saw fit to include something so useless and obscure as quaternions before getting to Bessel functions.

                The thing is, I can’t. If you use them, you want better support for solving differential equations than C++ offers anyway, so it’s more of a ‘OK, I have this small part already implemented, but I still have to find ways of doing the remaining 95%’. This, plus the fact that I’m quite certain that people using C++ to do 3D geometry outnumber people using it for solving differential equations by a few orders of magnitude—a cursory glance at GitHub showed me that the only projects in C++ that mention it are… implementations of a standard library (and forks upon forks of those).

                My problem with this is that C++ is now in a very strange place—it implements some very high-level, niche features, bloating the language and its implementations (the size of glibc is a practical problem) while still lacking many others that seem much more ‘obvious’ (i.e. ‘if given an unknown language, I would be much less surprised to find them included in its standard library’). In the end, I have a language that both has an annoyingly big standard library and heavily relies on other, non-standard ones for quite a lot of things.

            • majewsky 1823 days ago
              > C++ is the only language that I’m aware of that has those.

              https://golang.org/pkg/math/#J0

      • int_19h 1823 days ago
        Surely JSON does have a settled, obvious, lowest-common-denominator semantics?
        • colanderman 1823 days ago
          Of the design of a parsing and encoding library? Not at all. Do you parse as a stream or all in one go? Are values represented as a special "json" type, or as built-in types? How should arrays and objects be represented? Are integers and reals different types? Are trailing zero decimals significant? Do you allow construction of arrays and objects in any order, or only sequentially?

          (Granted, I've written my own C++ JSON library which I believe answers all these questions in an intuitive way, following both the design principles of the C++ standard library, and the lowest-common-denominator semantics of JSON, but it's sufficiently opinionated that I doubt I could convince any significant portion of C++ users that it's the "right" way to do things. Even if it "is", demonstrating such is nowhere near as easy as it is for unique_ptr, vector, string, thread, etc., each of which are more or less the "obvious" designs given certain constraints such as RAII to which the standard library adheres.)

    • kabdib 1824 days ago
      I work in a shop where there was a significant effort in a cross-platform library a long time ago, but that old code has been showing cracks and emitting creaks ("Hey, folks, guess how many debugging hours it took to find out that lambdas didn't work here, either"). Use of the standard library is frowned upon except when absolutely necessary, so there's no avoiding the thing. From time to time someone will joust at it and pull a particularly screwball section forward a decade or two, but on the whole the old stuff is just never going away short of a catastrophe. It makes onboarding interesting, and it makes you reflect philosophically on expertise that is valuable absolutely no place else.

      I work on other projects, or on my own stuff at home, and I can breathe again. I don't always need reverse iterators on a deque, but dammit they are there if I need them.

      However, I have been in too much C runtime code to be entirely happy. I've seen too many super-complicated disasters, for instance the someone who really wanted to write the Great American OS Kernel but who wasn't allowed on the team, and so had to make their bid for greatness in stdio.h instead. You learned to tread carefully in that stuff, the only good news being that if you broke something it might have turned out to be already busted anyway and no harm done, philosophically speaking, I mean.

      There are no good answers :-)

    • fsloth 1824 days ago
      So the language is evolving, it is used by projects that are old and still in good enough shape that one can adapt their concepts to some new things, and as sugar on top, it does not break backwards compatibility.

      As such it just sounds like a mature technology with a huge adopted base that is still holding traction. Generally, maturity, traction and adaptability can be considered indicators of health and not malady.

      Beauty is overstated. Engineering can be art but it doesn't have to be.

      Jokes aside, I use C++ daily and see it as Warty McWartface and could spend a long time ruminating about its faults. But adapting old stuff to new boundaries is always going to be messy. Generally, rewriting history creates more problems than it solves.

    • fooker 1824 days ago
      I don't see the problem. You are free to use such a modern library (Google does, it's called absl).

      The good thing here is that the standard library doesn't require 'magic' to be implemented (unlike Swift where the standard library relies on hidden language hacks).

      • nimrody 1824 days ago
        The difficulty here is combining multiple libraries each using its own abstractions.

        For example, since the standard library does not have a Matrix class suitable for numerical applications (or maybe it does today...) using multiple libraries each with its own Matrix class is difficult. Multiple libraries are needed since one library may not contain all numerical algorithms one may require for a given app.

        This is not a problem for Google where I assume everyone is using internally written code -- but is a problem for most of us.

        • jcelerier 1824 days ago
          > For example, since the standard library does not have a Matrix class suitable for numerical applications (or maybe it does today...) using multiple libraries each with its own Matrix class is difficult.

          well, Python comes with a builtin "matrix-like" array type and yet it's not the one which is the most used in scientific computation.

          • aldanor 1823 days ago
            Python provides the buffer interface, however (which the `array` module implements), which links Python's buffers and memoryviews to numpy arrays and to multiple other 3rd-party array-like and table-like types and structures.
          • RayDonnelly 1823 days ago
            .. because it's (relatively speaking) brand new?
      • nradov 1824 days ago
        Sure it's possible to use a non-standard "standard library". But at that point you're already halfway to using a different language so why not consider switching from C++ to D / Rust / Go?
        • ncmncm 1824 days ago
          The whole point of C++ is that it enables writing more powerful libraries, capturing semantics in libraries that can then just be used. C++ is still quite a lot more powerful for this purpose than Rust. Rust will get better at it, over time, but it has a long way to go and C++ is not sitting still.

          Rust is still a niche language, and if its rates of adoption and improvement do not keep up, it will remain a niche language, and fade away like Ada.

          I cannot imagine a serious programmer switching from C++ to Go. If you can, you have a much livelier imaginary life than I do.

          • ajxs 1824 days ago
            The Ada partisans are all out in force here in this thread to defend Ada, all four of us. haha... For what it's worth, niche as Ada may be, it's an _important_ niche. It remains widespread in safety-critical applications, and isn't going anywhere anytime soon. It's really good to see Rust taking lessons from Ada/SPARK in the area of formal proofs! If any language is going to threaten C++, it looks like Rust. I don't expect an Ada resurgence to happen, unfortunately.

            > I cannot imagine a serious programmer switching from C++ to Go. If you can, you have a much livelier imaginary life than I do.

            This got a laugh out of me.

            • pjmlp 1824 days ago
              A large majority of Ada partisans found their corner in Java, C# and C++'s type system improvements over C, and made the best we could from the sour grapes of C's copy-paste compatibility.
              • ajxs 1824 days ago
                Calling myself an Ada partisan is a bit of a stretch. I've recently begun using it for embedded development, which is a domain almost completely dominated by C. That's the angle I'm coming in from.
                • pjmlp 1824 days ago
                  It depends on each one naturally.

                  For me, coming from Turbo Pascal 3 - 6, it allowed me in 1993 to use a language with a similar level of safety and language features, instead of having to deal with PDP-11 constraints.

                  I was always the weird kid that asked the professors if I could deliver C projects written in C++ instead, which thankfully some of them did accept.

                  Especially given that in my degree, C was already out of fashion by the early '90s. First-year students got to learn Pascal and C++, and were expected to learn C from their introduction to C++ programming.

          • nicoburns 1823 days ago
            > Rust is still a niche language

            Only just barely at this point. It has significant projects from a lot of the largest companies (Google, Microsoft, Amazon, etc). Firefox is using it, Dropbox is using it, Red Hat is using it.

            • ncmncm 1823 days ago
              In five years it might be just barely a niche language. In ten, if it catches on, it won't be.

              If it does, its users will have come over from Java, C#, and C.

          • rootlocus 1824 days ago
            Is this the "no serious programmer" fallacy?
            • Y_Y 1824 days ago
              No true hn commenter would make such a mistake.
          • machinecoffee 1823 days ago
            From what I can see, the whole point of C++ is to wrap existing C libraries and call them OO :-)
        • fooker 1824 days ago
          No, using a library is not halfway to using a different language.

          Languages exist to allow you to define your own layers of abstractions. The language choice ideally reflects what abstractions are useful for your project.

        • geezerjay 1824 days ago
          > But at that point you're already halfway to using a different language

          This statement makes no sense at all. Using a third-party library that's not specified by the same ISO standard that specifies the core language does not create "a different language".

          It just means you're actually using the programming language to do stuff.

          This isn't the case even if someone uses a toolkit that relies on preprocessor tricks to give the illusion of extending the core language, such as Qt.

        • mempko 1824 days ago
          C++ is designed for people to make nice libraries. Unlike other languages there is nothing special about the standard library (no magic language hacks). All libraries are first class citizens by design.
          • usrnm 1824 days ago
            Good luck implementing something like std::is_standard_layout without "magic language hacks". No, not all libraries are made equal; std is part of the language now, and there is no way back.
            • geezerjay 1824 days ago
              You've cherry-picked a type trait as your example, which arguably could be a core language feature made to look like a third-party module.

              Meanwhile, do you believe it's hard to implement a container?

              And no, adding cruft to the STL is not a one-way street. See for example the history of C++'s smart pointers.

              • int_19h 1823 days ago
                It is pretty hard to implement a container with all the precise invariants and guarantees that the Standard requires.

                But more to the point, your implementation might still not be as fast as the standard library one, because the standard library can make assumptions about the compiler that you cannot in portable code - what is UB to you might be well-defined behavior to stdlib authors. Thus, for example, they might be able to use memcpy for containers of stdlib types that they know are safe to handle in that manner.

            • fooker 1824 days ago
              A look into the type_traits header reveals that is_standard_layout is implemented with standard C++.
              • Kranar 1823 days ago
                • fooker 1823 days ago
                  It checks if a non-standard feature is available, and otherwise falls back to a standard implementation.

                  My argument was that it was possible to implement it with standard C++.

                  • Kranar 1823 days ago
                    There's no way to implement that type trait using standard C++. The implementation does check if a non-standard feature is available, and if not it delegates to is_scalar which in turn delegates to is_enum which in turn delegates to is_union. is_union can not be implemented in a standard conforming manner without compiler support and libc++ unconditionally returns false if compiler support is not available, which does not conform to the standard.
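
                    A sketch of what that delegation bottoms out in (not the actual library source; __is_union is a compiler builtin available in GCC, Clang and MSVC):

                        #include <type_traits>

                        // pure C++ can distinguish class types from scalars, but it cannot
                        // tell a union from a non-union class, so the trait defers to the compiler:
                        template <class T>
                        struct is_union : std::integral_constant<bool, __is_union(T)> {};
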
        • pjmlp 1824 days ago
          Lack of tooling for HPC, GPGPU, mixed language graphical debugging across Java and .NET, native support for COM/UWP, game engines like Unreal, CryEngine and Unity.
        • Const-me 1823 days ago
          Many libraries only expose C or C++ APIs. Some of these libraries are hard requirements, like OS kernel APIs or GPU APIs.

          Insufficient SIMD support in other languages; Intel only supports their intrinsics in C and Fortran.

          Tools for C++ are just too good: IDEs, debuggers, profilers.

      • Razengan 1824 days ago
        What are the hidden language hacks in Swift?
        • Someone 1824 days ago
          I wouldn’t call them hacks, but there are things in the runtime that you can’t implement yourself in Swift. Examples (corrections welcome):

          - you can’t allocate memory and then turn it into a Swift object.

          - you can’t write Decodable in pure Swift (reflection isn’t powerful enough to do “set the field named “foo” in this structure to “bar”)

          - reference counts are hidden from Swift code (yes, there’s swift_retainCount to read them, but that’s documented as returning a random number (https://github.com/apple/swift/blob/master/docs/Runtime.md) because it should not be used). So, if the compiler emits more reference count logic than needed in the data structure that your library uses, there’s no way to improve on it.

          • fooker 1824 days ago
            There are a lot more of these, I can't find a comprehensive list unfortunately.
      • rimliu 1824 days ago
        What are those "hidden language hacks" in Swift?
    • hak8or 1824 days ago
      Can you or others post such alternative standard libraries? The only ones that come to mind are Boost (which is a nightmare for compile times and I feel is a mishmash of old and new) and Google's Abseil, which I haven't actually tried enough to form an opinion about.
      • geezerjay 1824 days ago
        > Can you or others post such alternative standard libraries?

        POCO comes to mind.

        https://pocoproject.org/

        • jcelerier 1824 days ago
          Also Qt is basically a Java-like library, with built-in stuff for networking, GUI, XML, JSON, WebSockets, multimedia, etc.

          Or you have some "domain-specific" libraries like OpenFrameworks which is very nice if you are making visual art since it comes with a lot of very simple primitives to draw shapes, etc.

      • humanrebar 1823 days ago
        ACE was an attempt that is basically dead.

        There was some talk about an std2, but I gather support for it is too low to be pursued seriously.

    • lallysingh 1824 days ago
      What legacy? It's not like there was a single "before time." There are problems coming up with all of it, because the underlying runtime model provides too few guarantees. We'll be plugging holes the rest of our natural lives.
      • millstone 1824 days ago
        No, the problem is NOT the underlying runtime model. In fact it's often the opposite: the STL tries to provide too much.

        An excellent example is std::unordered_map. This type was introduced to address perf problems with std::map. But unordered_map forces closed addressing, separate allocation, etc., which limit its performance. In return you get stronger iterator invalidation guarantees, but these are rarely useful. Meanwhile Abseil's swiss tables, LLVM's DenseMap, etc. illustrate what a high-performance C++ hash table could be.
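
        To make the constraint concrete (a sketch, names made up): pointers and references into an unordered_map must survive rehashing, which effectively forces per-node allocation and rules out open-addressing layouts like the swiss tables.

            #include <string>
            #include <unordered_map>

            void reference_stability() {
                std::unordered_map<std::string, int> counts;
                int& hello = counts["hello"];       // reference into a node
                for (int i = 0; i < 100000; ++i)
                    counts[std::to_string(i)] = i;  // triggers several rehashes
                hello = 42;                         // still valid: nodes never move
            }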

  • namirez 1824 days ago
    This has been discussed extensively in the C++ community. I think if you need very safe code, you shouldn't use string_view or span without thinking about the potential consequences. These were added to the language to avoid memory allocation and data copies in performance-critical software.

    Herb Sutter has concrete proposals to address this issue and Clang already supports them: https://www.infoworld.com/article/3307522/revised-proposal-c...
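
    To illustrate the kind of consequence meant here, a minimal string_view sketch (names made up, not taken from the article):

        #include <string>
        #include <string_view>

        std::string_view sv;

        void oops() {
            std::string s = "long enough to be heap allocated..............";
            sv = s;  // sv points into s's buffer
        }            // s is destroyed here; sv now dangles

        // any later read through sv is a use-after-free, and neither the
        // compiler nor string_view itself will flag it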

    • tatersolid 1824 days ago
      > I think if you need very safe code, you shouldn't use string_view or span without thinking about the potential consequences.

      That’s the whole point: your caveat shows that it’s C/C++ which are unsafe by their very nature and therefore should not be used in code exposed to potentially malicious (e.g. user or network) input. Which is just about everything useful.

      HPC systems are generally closed and have different threats, but the industry just needs to run (not walk) away from C/C++ for the majority of use cases.

      • ncmncm 1824 days ago
        There is no such language as C/C++. There is C, which cannot be written safely, and there is C++, which can be, and quite often is.

        It has been many years since I shipped a memory bug in C++. It is just not a real worry for me. I am constantly dealing with design, specification, and logic flaws, which affect Rust equally, or more so.

        I am aware that there are plenty of other programmers out there, writing bad code in what they would call C++. I would like them to write good code. If it takes Rust to make them write good code, so be it. But if they began writing decent C++ code, that is just as good.

        The threshold is not zero memory errors. The threshold is many fewer memory errors than logic or design errors. The more attention your language steals from logic and design, the more of those errors you will have. Such errors have equally dire consequences as memory errors, and are overwhelmingly more common in competent programmers' code, in C++ and in Rust.

        C++ is (still) quite a substantially more expressive language than Rust, which is to say it can capture a lot more semantics in a library. Every time I use a powerful, well-tested library instead of coding logic by hand because it can't be captured in a library, that is another place errors have no opportunity to creep in.

        So it's great that Rust makes some errors harder to make, but that is no grounds for acting holier-than-thou. Rust programmers have simply chosen to have many more of the other kinds of errors, instead.

        Every programmer who switches from C to Rust makes a better world; likewise Java to Rust, or C# to Rust, or Go to Rust. Or, any of those to C++.

        Switching from C++ to Rust, or Rust to C++, is of overwhelmingly less consequence, but the balance is still in C++'s favor because C++ still supports more powerful libraries.

        You might disagree, but it is far from obvious that you are correct.

        • Tracist 1824 days ago
          > It has been many years since I shipped a memory bug in C++. It is just not a real worry for me.

            The whole comment sounds so much like well-written satire, but I think he's being serious.

          • vasilipupkin 1824 days ago
            I agree with him. In many practical applications with well-designed class hierarchies it just really isn't much of an issue. It hasn't been for me either.
            • fetbaffe 1824 days ago
              > with well-designed class hierarchies

              :eyes:

              • vasilipupkin 1823 days ago
                you can roll eyes at me all you want, but I've been programming in C++ for a long time. These memory access issues just don't seem to be a big problem for us in practice. That's because we wrap all raw memory manipulation in appropriate classes for our application, so it's just not an issue. I agree it could be an issue in theory.
              • ncmncm 1823 days ago
                He rolls his eyes at "hierarchies". Libraries do make the difference.

                Somebody else interjected Design Patterns. You can define a design pattern as a weakness in your language's ability to express a library function to do the job.

              • aldanor 1823 days ago
                ... and with proper use of Design Patterns!
          • paulmooreparks 1824 days ago
            Why is it difficult to believe? I've also written plenty of C++ code without memory bugs. It's not that hard if you play by a few simple rules.
            • dodobirdlord 1824 days ago
              > I've also written plenty of C++ code without memory bugs.

              The classic response to this is "That you know of." Consider that even quality-conscious projects with careful code review like Chrome have issues like this use-after-free bug from time to time.

              https://googleprojectzero.blogspot.com/2019/04/virtually-unl...

              So when people claim that they personally don't write memory bugs I tend to assume that they are mistaken, and that the real truth is that they haven't yet noticed any of the memory bugs that they have written because they are too subtle or too rare to have noticed.

              • tpolzer 1824 days ago
                Chrome is in an exceptionally hard place because of its JIT. Your language cannot tell you if it's safe for your JIT to omit a bounds check.
                • comex 1824 days ago
                  That post describes two vulnerabilities: one is in the JIT, but the other one is in regular old C++ code. More generally, JIT bugs are a relatively small minority of browser vulnerabilities. More often you see issues like use-after-free in C++ code that interacts with JS, such as implementations of DOM interfaces, but the issues are not directly JIT related and would be avoided in a fully memory-safe language.
                  • ncmncm 1823 days ago
                    Chrome, like Firefox, is not an example of modern C++ code. Google's and Mozilla's coding standards enforce a late-'90s style. It is astonishing they get it to work at all.
              • paulmooreparks 1823 days ago
                In this case, I mean a subsystem that has been in production since 2006 and has been processing hundreds of thousands of messages a day. I don't claim that it's perfect or bug-free, but if it had significant memory errors I'd have heard about it. I designed and implemented it to use patterns like RAII to manage memory, and it's worked quite well.
              • gmueckl 1824 days ago
                That is why you use tools like valgrind to verify that you got it right.
                • esrauch 1824 days ago
                  When I worked on a mobile C++ project at Google, we went exceptionally out of our way to avoid memory issues.

                  We ran under valgrind and multiple sanitizers (and continuously ran those with high coverage unit and integration tests). We ran fuzzers. We had strictly enforced style guides.

                  We still shipped multiple use after frees and ub-tripping behavior. I also saw multiple issues in other major libraries that we were building from source so it can't be pointed at as just incompetency on my team.

                  Basically, it might be possible but I think it's exceptionally more difficult to write memory safe C++ than this thread is making it sound.

                  • gmueckl 1824 days ago
                    Writing memory safe programs in C++ is possible. Most coding styles and some problem domains don't lend themselves to it naturally, though. In my experience, restricted subsets used for embedded software vastly reduce the risk of introducing errors and make actual errors easier to spot and fix.
                    • sgift 1824 days ago
                      > Writing memory safe programs in C++ is possible.

                      Everything "is possible" in the sense that in theory you can do it. But if time and time again people fail to do it, even people who invest almost heroic levels of effort (see above: valgrind, multiple sanitizers, and so on), you get to the point where you have to accept that what is possible in theory doesn't work in practice.

                      • gmueckl 1823 days ago
                        I have seen it done in practice, on rather large systems. But it requires actual, slow software engineering instead of the freestyle coding processes that are used in most places.
                        • paulmooreparks 1823 days ago
                          My main rule is "no naked new," meaning that the only place the new operator is allowed is in a constructor, and the only place delete is allowed is in a destructor (unless there's some very special circumstance). This style lends itself to RAII. The other rule is to use the standard library containers unless there's a very good reason not to do so. That seems to cover most of the really basic errors.
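
                          A small sketch of what that rule buys (Widget and load are placeholder names; make_unique takes the rule one step further by hiding new entirely):

                              #include <memory>
                              struct Widget {};

                              // naked new: the early return leaks w
                              Widget* load_old(bool ok) {
                                  Widget* w = new Widget;
                                  if (!ok) { return nullptr; }  // leak
                                  return w;                     // caller must remember to delete
                              }

                              // RAII version: no new/delete in sight, cleanup is automatic
                              std::unique_ptr<Widget> load(bool ok) {
                                  auto w = std::make_unique<Widget>();
                                  if (!ok) { return nullptr; }  // w freed here
                                  return w;
                              }
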
                  • ncmncm 1823 days ago
                    Yes, I know how you are obliged to code at Google. It is astonishing that anything works.

                    The "strictly enforced style guides" strictly enforce '90s coding habits.

                • adrianN 1824 days ago
                  Together with a test-suite that covers the exponential number of paths through your code...
                  • gmueckl 1824 days ago
                    Changing programming language neither reduces the need for test coverage nor does it magically increase coverage.
                    • adrianN 1824 days ago
                      A type system changes the need for test coverage because it eliminates whole classes of bugs statically that would need an infinite amount of tests to eliminate dynamically.
                      • gmueckl 1824 days ago
                        That leaves an infinite number of logic bugs to be tested for. Types cannot fix interface misuse at integration and system level. So no, this does not reduce the need for testing.
                        • comex 1824 days ago
                          Whether they reduce the need for testing overall is arguable. But what matters in this discussion is that types can guarantee memory safety, meaning that the cases that you forgot to test – and there will always be such cases, no matter how careful you are (just look at SQLite) – are less likely to be exploitable.
                          • gmueckl 1824 days ago
                            Types can only provide limited memory safety. There is a real need to deal with data structures that are so dynamic as to be essentially untyped. Granted, this usually happens in driver code for particularly interesting hardware, but it happens. Also, I have not yet seen a type system that is both memory safe and does not prohibit certain optimizations.
            • kibibu 1823 days ago
              I haven't written c++ seriously for a number of years. Do you still have to do all that "rule of three" boilerplate stuff to use your classes with the STL? Is it better or worse now with move constructors?
              • micv 1823 days ago
                It's a bit better with C++11 syntax where you can use = delete to remove the default constructors/destructors, e.g.:

                  class Class
                  {
                      Class();
                      Class(const Class&) = delete;
                      Class& operator = (const Class&) = delete;
                      ~Class() = default;
                  };
                
                Which I find slightly cleaner than the old approach of declaring them private and not defining an implementation, but the concept hasn't changed much. I'd love a way to say 'no, compiler, I'll define the constructors, operators, and destructors I want - no defaults' but that's not part of the standard.

                Move constructors are an extra that, if I remember correctly, don't get a default version, thankfully.

              • ncmncm 1823 days ago
                So, so much better. Nowadays we "use" what has been called "rule of zero". Write a constructor if you maintain an invariant. Rely on library components and destructors for all else.
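
                A minimal sketch of the rule of zero (Person and its members are placeholders): the members already know how to copy, move and destroy themselves, so the class declares none of the special member functions.

                    #include <string>
                    #include <vector>

                    struct Person {
                        std::string name;         // owns its buffer
                        std::vector<int> scores;  // owns its storage
                        // no destructor, copy/move constructors or assignment operators:
                        // the compiler-generated ones do the right thing, member by member
                    };
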
          • jcelerier 1824 days ago
            > https://jaxenter.com/security-vulnerabilities-languages-1570...

            there's a world in terms of safety between C and C++.

            • comex 1824 days ago
              The comparison in that link is pretty meaningless; it scores languages by how many vulnerabilities have been reported in code written in them, without even making an attempt to divide by the total amount of code written in them, let alone account for factors like importance/level of public attention, what role the code plays, bias in the dataset, etc.
              • eska 1823 days ago
                To be fair the report explicitly states this limitation. jcelerier just conveniently forgot to mention it.
            • eska 1823 days ago
              You're misrepresenting the report in order to justify your bias. Direct quote from the report:

                  This is not to say that C is less secure than the other languages. The high number of open source vulnerabilities in C can be explained by several factors. For starters, C has been in use for longer than any of the other languages we researched and has the highest volume of written code. It is also one of the languages behind major infrastructure like Open SSL and the Linux kernel. This winning combination of volume and centrality explains the high number of known open source vulnerabilities in C.
              
              In other words the report explains this with 1) there being more C code in volume and 2) more C code in security-relevant projects (which are reviewed more by security researchers). It also states explicitly that your conclusion is not to be drawn from this.
              • majewsky 1823 days ago
                Readable version of the quote:

                > This is not to say that C is less secure than the other languages. The high number of open source vulnerabilities in C can be explained by several factors. For starters, C has been in use for longer than any of the other languages we researched and has the highest volume of written code. It is also one of the languages behind major infrastructure like Open SSL and the Linux kernel. This winning combination of volume and centrality explains the high number of known open source vulnerabilities in C.

                Please, never ever use code snippets for quotes, unless you hate mobile users. Just put "> " in front.

                • leetcrew 1823 days ago
                  > unless you hate mobile users

                  or just period. I'm reading this on a 4K desktop display, and I still have to scroll. it's only useful for actual code, which is very rarely posted on hn.

        • lmm 1824 days ago
          > It has been many years since I shipped a memory bug in C++. It is just not a real worry for me.

          Can you write down the algorithm that you use to avoid writing memory bugs? Can you teach others how to do it? Experienced C++ programmers do seem to learn how to avoid those bugs (although very often what they write is still undefined according to the standard - but e.g. multithreading bugs may be rare enough not to be encountered in practice). But that's of limited use as long as it's impossible for anyone else to look at a C++ codebase and confirm, at a glance, that that codebase does not contain memory bugs.

          > C++ is (still) quite a substantially more expressive language than Rust, which is to say it can capture a lot more semantics in a library.

          > So it's great that Rust makes some errors harder to make, but that is no grounds for acting holier-than-thou. Rust programmers have simply chosen to have many more of the other kinds of errors, instead.

          Citation needed. What desirable constructions are impossible to express in Rust? I've no doubt that you can write some super-"clever" C++ that reuses the same pointer several different ways and can't be ported to Rust - but such code is not desirable in C++ either (at least not in codebases that more than one person is expected to use). Meanwhile Rust offers a lot of opportunities for libraries to express themselves clearly in a way that's not possible in C++: sum types let you express a very common return pattern much more clearly than you can ever do in C++. Being able to return functions makes libraries much more expressive. Standardised ownership annotations make correct library use very clear, and allow a compiler to automatically check that they're used correctly.
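
          To make the sum-type point concrete: the error-or-value return that Rust expresses directly with Result<T, E> can only be approximated in C++17, e.g. with std::variant (a rough sketch; the names here are invented):

            #include <string>
            #include <variant>

            struct ParseError { std::string message; };

            // Either a parsed value or an error; the caller has to handle both.
            using ParseResult = std::variant<int, ParseError>;

            ParseResult parse_port(const std::string& text) {
                try {
                    return std::stoi(text);
                } catch (const std::exception&) {
                    return ParseError{"not a number: " + text};
                }
            }

            // Caller side: nothing forces exhaustive handling the way a Rust
            // match on Result does; std::get_if/std::visit is the closest analogue.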

          > Every programmer who switches from C to Rust makes a better world; likewise Java to Rust, or C# to Rust, or Go to Rust. Or, any of those to C++.

          > Switching from C++ to Rust, or Rust to C++, is of overwhelmingly less consequence, but the balance is still in C++'s favor because C++ still supports more powerful libraries.

          > You might disagree, but it is far from obvious that you are correct.

          On the contrary, it's obvious from the frequency with which we see crashes and security flaws in C++ codebases that the average programmer who switches from Java to C++, or C# to C++ makes the world a worse place. It's overwhelmingly likely to be true for Rust to C++ as well.

          • ncmncm 1823 days ago
            >Can you write down the algorithm that you use to avoid writing memory bugs? Can you teach others how to do it?

            Yes. Code using powerful libraries. Every use of a powerful library eliminates any number of every kind of bug.

            Rust has not caught up to C++'s ability to code powerful libraries, and might never. C++ is a moving target. C++20 is more powerful than C++17, which was more powerful than 14, 11, 03.

            There are certainly niches for less powerful languages. Rust is more powerful, and nicer to code in, than many that occupy those. It will completely displace Ada, for example.

            • lmm 1823 days ago
              > Yes. Code using powerful libraries. Every use of a powerful library eliminates any number of every kind of bug.

              So if I find that a C++ project is using powerful libraries, I can be confident that it doesn't have memory errors? History suggests not.

              • ncmncm 1823 days ago
                If I find a Rust program that is (perforce) not using powerful libraries, can I be confident that it does not harbor grave errors?

                Certainly not. Rust takes aim at memory errors, and misses the rest that would be avoided by encapsulating bug-prone code in libraries. C++ enables capturing bug-prone code in well-tested libraries, eliminating whole families of bugs, including, in my recent experience, memory bugs.

                That is not to say all C++ code is bug-free. Google and Mozilla code, by corporate fiat, is forbidden to participate.

                • lmm 1823 days ago
                  > If I find a Rust program that is (perforce) not using powerful libraries, can I be confident that it does not harbor grave errors?

                  You can be confident that it doesn't harbour memory errors. You can be confident that it doesn't contain arbitrary code execution bugs, which is a much better circumstance than with any C++ project I've seen (C++ by its nature turns almost any bug into a security bug).

                  IME you can also have a much higher level of confidence that it does what you expect (including not having bugs) than you would for a C++ project, because of Rust's more expressive type system.

                  > C++ enables capturing bug-prone code in well-tested libraries, eliminating whole families of bugs, including, in my recent experience, memory bugs.

                  And yet in practice you can neither be confident that there are no memory bugs, nor that there are no other bugs. Even the big name C++ libraries are riddled with major bugs. Perhaps libraries that are written in a certain fashion avoid this bugginess, but that's of little use when it's not possible to tell from a glance whether a given library is one of the buggy ones or not.

                  • ncmncm 1823 days ago
                    This is the classic False Dichotomy.

                    Rust programs have bugs. Rust programs have security bugs. Are they mediated by memory usage bugs? Probably not, unless the program has unsafe blocks, or uses libraries with unsafe blocks, or libraries that use libraries that have unsafe blocks, or calls out to C libraries. Or tickles a compiler bug.

                    Can it leak my credentials to a network socket as a consequence of any of those bugs, memory or otherwise?

                    Putting your memory errors in unsafe blocks may make them invisible to you, but that does not make them go away.

                    So, yes, of course it can.

                    • lmm 1823 days ago
                      > Can it leak my credentials to a network socket as a consequence of any of those bugs, memory or otherwise?

                      Sure, that class of bugs still exists. But they're rarer and less damaging (even with stolen credentials, an attacker can't do as much damage as one who had arbitrary code execution).

                      Rust eliminates many classes of bugs. C++ does not: the fact that theoretically there could be non-buggy C++ libraries doesn't help you out in practice, because there's no way to distinguish those libraries from the very many buggy C++ libraries.

                      > Putting your memory errors in unsafe blocks may make them invisible to you, but that does not make them go away.

                      It's just the opposite: it makes the risk very visible, so in Rust you can choose to avoid libraries with unsafe. Whereas in C++ any library you might choose is likely to have memory safety bugs and therefore arbitrary code execution vulnerabilities.

                      • pjmlp 1823 days ago
                        Kind of true, AFAIK Rust binary libraries don't expose safety information, as happens in ClearPath or .NET Assemblies.

                        Still too many libraries make use of unsafe when they could be fully written in safe Rust.

            • pjmlp 1823 days ago
              Rust cannot displace Ada until it fulfills the business and security requirements that keep Ada alive.
          • jstimpfle 1823 days ago
            > Can you write down the algorithm that you use to avoid writing memory bugs? Can you teach others how to do it?

            Structure the code in a way such that it is obvious what happens. Use "semantic compression" (e.g. be clear about your concepts and factor them in free standing functions), but don't overabstract/overengineer.

            Eliminate special cases. If the code has few branches and data dependencies, then successful manual testing already gives high confidence that it will be pretty robust in production.

            Prefer global allocations (buffers with the same lifetime as the process), not local state. This also makes for much clearer code, since it avoids heavy plumbing / indirections.

            I tend to think that modern programming language features mostly enable us to stay longer with bad structure. And when you hit the next road block, fixing that will be correspondingly harder.

            • lmm 1823 days ago
              > Structure the code in a way such that it is obvious what happens. Use "semantic compression" (e.g. be clear about your concepts and factor them in free standing functions), but don't overabstract/overengineer.

              This sounds little different from "write good code, don't write bad code." I'm sure we all agree on these things, but I'm sure the people who write terrible code weren't trying to be unclear or trying to overengineer.

              > Eliminate special cases. If the code has few branches and data dependencies, then successful manual testing already gives high confidence that it will be pretty robust in production.

              True enough, but that's so much easier in a language with sum types.

              > Prefer global allocations (buffers with the same lifetime as the process), not local state. This also makes for much clearer code, since it avoid heavy plumbing / indirections.

              That's a pretty controversial viewpoint, since it makes composition impossible (indeed taken to its logical extreme this would mean never writing a library, whereas the grandparent was convinced that more use of libraries was the way to write good code).

              > I tend to think that modern programming language features mostly enable us to stay longer with bad structure. And when you hit the next road block, fixing that will be correspondingly harder.

              Interesting; that's the opposite of my experience. I find modern language features mostly guide us down the path that most of us already agreed was good programming style, enforcing things that were previously only rules of thumb (and that we had to resist the temptation to bend when things got tricky). And so the modern language forces you to solve problems properly rather than hacking a workaround, and the further you scale the more that will help you.

              • jstimpfle 1823 days ago
                >> Eliminate special cases. [...] > True enough, but that's so much easier in a language with sum types.

                These languages make it easier to have more special cases. There's a difference.

                > That's a pretty controversial viewpoint, since it makes composition impossible (indeed taken to its logical extreme this would mean never writing a library, whereas the grandparent was convinced that more use of libraries was the way to write good code).

                I don't see why that should be the case. Aside from the fact that composition/"reuse" is way overrated, libraries can always opt for process- or thread-wide global state. Another possibility would be to have global state per use (store pointer handles), and passing a pointer only to library API calls. The latter is also the most realistic case since most libraries take pointer handles. I absolutely have these handles stored in process global data. For example, Freetype handle, windowing handle, sound card handle, network socket handle, etc.

                Also called "singleton" in OOP circles. Singletons are nothing but global data with nondeterminstic initialization order and superfluous syntax crap on top. Other than that, they are indeed good choices (as is global data) since lifetime management and data plumbing is a no-brainer.

                > I find modern language features mostly guide us down the path that most of us already agreed was good programming style

                But just the paragraph before you said you didn't agree with mine? In my opinion, OOP, or more specifically, lots of isolated allocations connected by pointers/references, make for hard to follow code since there is so much hiding and indirection even within the same project/maintenance boundaries without benefit. In any case I absolutely agree that this style is not doable in C. You need automated, static or dynamic (runtime) ownership tracking.

                • lmm 1823 days ago
                  > I don't see why that should be the case.

                  At the most basic level, if project A makes use of library B and library C, then you want to be able to verify the behaviour of library B and library C independently and then make use of your conclusions when analysing project A. But if library B and library C use global state then you can't have any confidence that that will work. E.g. if both library B and library C use some other library D that has some global construct, then they will likely interfere with each other.

                  > Another possibility would be to have subproject-wide global state, and passing a pointer only to library API calls. The latter is also the most realistic case since most libraries take pointer handles.

                  At that point you're not using global state in the library, which was the point.

                  > you can always opt for process- or thread-wide global state

                  That doesn't solve the problem at all.

                  > Also called "singleton" in OOP circles. Singletons are nothing but global data with nondeterminstic initialization order and superfluous syntax crap on top.

                  Indeed, and they're seen as bad practice for the same reason as global state in general.

                  • jstimpfle 1823 days ago
                    > At that point you're not using global state in the library, which was the point.

                    Yes. But I want to make clear that you are still using global state for all uses within the project itself. The library can be implemented in whatever way. For example, setting the pointer in a global variable on API entry ;-)

                    > That doesn't solve the problem at all.

                    WHICH problem? I don't think there is one.

                    > Indeed, and they're seen as bad practice for the same reason as global state in general.

                    This is foolish. There is no problem with global state. Global state is a fact of life. Your process has one address space. It has (probably) one server socket for listening to incoming requests. It has (probably) one graphics window to show its state. Whenever you have more (e.g. file descriptors, memory mappings, ...), well then you have a set of that thing, but you have ONE set :-). And so on.

                    You are not writing a thousand pseudo-isolated programs. But ONE. One entity composed of a fixed number of parts (i.e. modules, code files) that work together to do what must be done.

                    Why add indirection? Why make it hard to iterate over all open file descriptors? Why thread a window handle through 15 layers of function calls when you have only one graphics window? It adds a lot of boilerplate. It even brings some people to invent hard to digest concepts like monads or objects just to make that terrible code manageable. It makes the code unclear. Someone once described it with this analogy: "I don't say 'I'm meeting one of my wives tonight', unless I have more than one".

                    • lmm 1823 days ago
                      > Yes. But I want to make clear that you are still using global state for all uses within the project itself.

                      But if we believe in using libraries then often our project will itself be a library.

                      > The library can be implemented in whatever way. For example, setting the pointer in a global variable on API entry ;-)

                      And then you have the problem I mentioned: if there is a diamond dependency on your library then the thing using it will break.

                      > WHICH problem? I don't think there is one.

                      The problem of not being able to break down your project and understand it piecemeal.

                      > Global state is a fact of life. Your process has one address space. It has (probably) one server socket for listening to incoming request. It has (probably) one graphics window to show its state.

                      All those global things are a common source of bugs, as different pieces of the program make subtly different assumptions about them. Perhaps a certain amount of global state is unavoidable. That's not an argument against minimizing it.

                      > You are not writing a thousand pseudo-isolated programs. But ONE. One entity composed of a fixed number of parts (i.e. modules, code files) that work together to do what must be done.

                      If you write a program that can only be understood in its entirety, you'll be unable to maintain it once it becomes too big to fit in your head. Writing a thousand isolated functions gives you something much easier to understand and scale.

                      • jstimpfle 1823 days ago
                        > The problem of not being able to break down your project and understand it piecemeal.

                        That's just incredibly untrue. It's FUD spread by OOP and FP zealots.

                        > All those global things are a common source of bugs, as different pieces of the program make subtly different assumptions about them.

                        Do you want to say that my logging routine is more complex because my windowing handle is stored in a globally accessible place?

                        > Perhaps a certain amount of global state is unavoidable. That's not an argument against minimizing it.

                        My advice is to make clear what the data means. Make it simple. Don't put a blanket over what's already hard to grasp.

                        • lmm 1823 days ago
                          > Do you want to say that my logging routine is more complex because my windowing handle is global data?

                          If your logging routine touches your windowing handle that certainly makes it more complex. If I'm meant to know that your logging routine doesn't touch your windowing handle, that's precisely the statement that it isn't global data.

                          • jstimpfle 1823 days ago
                            It is global data, because it can (and should) be used without threading it through 155 functions.

                            In terms of the relational data model, it is global data because there is always one, and only one, of it.

                      • jstimpfle 1823 days ago
                        > But if we believe in using libraries then often our project will itself be a library.

                        How about making the project good first? Let's try to get something done instead of theorizing.

                        • lmm 1823 days ago
                          You mean start by building something that can be used and tested in isolation, rather than trying to build an enormous system in one go? Isn't that what you've been arguing against?
                          • jstimpfle 1823 days ago
                            No I mean solve the problem "we need to build a program that does what it's required to do" (and no more) before trying to build a library that will cure diseases.
                            • lmm 1823 days ago
                              That's a total non sequitur. Libraries can, and usually should, be much smaller than applications.
                              • jstimpfle 1823 days ago
                                Libraries are much harder than applications because they must work for a large number of applications with diverse requirements. They need to be more abstract, and therein lies the danger.

                                Regarding the size, clearly wrong. It depends a lot on the library. A windowing or font rastering library will be a lot larger than your typical application.

                                And for libraries that are much smaller than the application itself, why bother depending on them? (Anecdote, I heard the Excel team in the 90s had their own compiler).

                    • tomtung 1823 days ago
                      At this point I'm really unsure whether this is trolling or not.
                      • jstimpfle 1823 days ago
                        Just discussing. Why would it be trolling what I do and not what the other guy does?
      • alfalfasprout 1824 days ago
        "which is just about everything useful". This statement is wildly without merit.

        Sure, for the typical user-facing applications HN readers talk about, C++ can certainly contain vulnerabilities that are worrisome. Many performance-critical applications can tolerate vulnerabilities in favor of latency.

        It seems to me that the world of realtime systems including avionics, autonomous control software, trading, machine learning, and more is "not useful" as per your comment. The extreme low level control that C++ offers and powerful metaprogramming allows for performance that even Rust cannot hope to rival.

        The industry has moved away from C++ for plenty of these user-facing use cases. Codebases like Chrome and Firefox can't just be rewritten in Rust overnight. You can try to rewrite e.g. SSL libraries, but that has its own host of problems (e.g. guaranteeing constant-time operations).

        I encourage the people parroting a move away from C++ to really think about what it is that should move and what the pros/cons are. I think you'll find that many of the things at risk (i.e user facing applications) are already on their way to being rewritten in Go/Rust.

        • PudgePacket 1824 days ago
          > The extreme low level control that C++ offers and powerful metaprogramming allows for performance that even Rust cannot hope to rival.

          Could you expand on this? It's a pretty strong claim.

          LLVM produces very fast code and is very commonly used for c++ compilation. Rust also has access to the usual low level control suspects, inline asm, manual memory layout & operations, pointer shenanigans etc.

          Benchmarks are never perfect but they show that rust is usually within the ballpark of c++, if not comparable: https://www.reddit.com/r/rust/comments/akluxx/rust_now_on_av....

        • dodobirdlord 1824 days ago
          > The extreme low level control that C++ offers and powerful metaprogramming allows for performance that even Rust cannot hope to rival.

          I'm curious if perhaps you're using the word 'performance' here in a way I'm not familiar with, especially given the context of metaprogramming. As far as the usage of 'performance' that I'm familiar with, C++ and Rust come in at about even in benchmarksgame, which matches my experience. The optimization pass of Rust compilation is carried out by LLVM on LLVM IR, so it would be very surprising if it reliably underperformed compiled C++, especially given that the compiler has more freedom to optimize due to more extensive constraints on the language.

          https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

        • fulafel 1824 days ago
          > The extreme low level control that C++ offers and powerful metaprogramming allows for performance that even Rust cannot hope to rival.

          Rust has vastly better metaprogramming, and as much low level control, no? And many low-level things are well-defined in Rust, and undefined behaviour or implementation-defined in C++.

          • comex 1824 days ago
            Depends. Some metaprogramming features in C++ are currently ahead of Rust (values as generic parameters, generic associated types, constexpr, etc.), but Rust is ahead in other areas (procedural macros) and is working on parity in the other cases I mentioned. Meanwhile, Rust has none of C++'s legacy cruft, and its typed, trait-based generics are arguably a better foundation for metaprogramming than C++'s "dynamically typed at compile time" template system.
        • rcxdude 1823 days ago
          > The extreme low level control that C++ offers and powerful metaprogramming allows for performance that even Rust cannot hope to rival.

          Rust is developed by Mozilla because they needed a language they could write a faster browser in. The first Rust component in Mozilla was a CSS library they had attempted to parallelise twice in C++ (with some of the best C++ programmers) and failed. Rust treats 'can't be as fast as C' as a bug.

        • m_mueller 1824 days ago
          You mention control software as an example. What makes C++ better there? My guess is compiler options for more hardware targets, but is it something else also? Is it really C++ and not C that is most prevalent on embedded systems?
      • AnimalMuppet 1824 days ago
        Well, it shows that there are aspects of C/C++ which are unsafe. But you don't have to use string_view or span, you know...
        • leshow 1824 days ago
          It was presented to show the "just use modern c++" counterargument to discussing the unsafety of c++ isn't a great argument. There are modern parts that are still unsafe.
          • AnimalMuppet 1824 days ago
            Fair enough. But tatersolid seems to be condemning the entire language, which is a step too far for the evidence given.
            • tatersolid 1823 days ago
              40 years of security vulnerabilities in C and C++ code is plenty of evidence to condemn those languages as unfit for most purposes.

              The evidence is overwhelming that it is not possible to write non-trivial C or C++ that is safe in the face of adversarial input. Microsoft, Google, Oracle, Linus, etc. have all tried for decades and failed miserably. All the resources and expertise in the world still results in unsafe software when C and C++ are used.

            • int_19h 1823 days ago
              std::string_view is supposed to be idiomatic C++, though.
      • Gibbon1 1824 days ago
        > needs to run (not walk) away from C/C++ for the majority of use cases.

        I think we need to stop talking about C/C++ as if they are particularly related. My opinion about performance and C is I'll happily give up some of that for better security.

    • wwright 1824 days ago
      The thing is, Rust has tools that are easier to use _and_ have great performance _and_ prevent security and stability mistakes.
      • the_trapper 1824 days ago
        However Rust is single vendor and single implementation, has a much smaller community and ecosystem than C++, is not standardized, and does not support all of the platforms and use cases that C++ does.
        • geofft 1824 days ago
          All of those problems are long-term solved by using more Rust, whereas none of C++'s problems are long-term solved by using more C++.

          (Personally, I don't find single-vendor or lack of standardization a problem in practice, and I've never written C++ for a platform Rust doesn't support.)

          • Vogtinator 1824 days ago
            Both of them can be solved by giving it more time, but C++ is currently way ahead.
            • geofft 1823 days ago
              I'm not sure that's true. Giving C++ 30 years has resulted in the things identified in the article. (In particular, giving auto_ptr 20+ years hasn't resulted in anything that really fixes the problem.) It is not clear to me that it's moving in the right direction, so I don't think more time will help. C++ is definitely ahead in popularity but is neither ahead nor obviously aimed in the right direction on safety.

              Giving Rust about ten years has resulted in significant growth in popularity and tooling, including attempts to write new implementations of the language (e.g., mrustc), so given more time and in particular given more production users, it seems reasonable to expect it will figure all those things out.

            • whyever 1823 days ago
              I don't think that C++'s memory safety issues can be solved by giving it more time.
        • dpc_pw 1824 days ago
          Except for stuff like "trusting trust", I find no need for "multiple vendors of Rust toolchains". It only comes in handy when the language itself is not truly open source, and is in itself a form of a product.

          Building on that, "is not standardized" is not a problem, because the one open source implementation is the de facto standard. Which I find much better than forever fixing your code, working around incompatibilities, bugs, etc. in compilers from different vendors and versions.

          Which leaves "does not support all of the platforms and use cases that C++ does" which is indeed true.

          • nimrody 1824 days ago
            Sometimes different vendors provide some benefits. For example Intel's C++ compiler produces (or used to produce?) much more efficient numerical code than either gcc or clang.

            So for numerical applications C++ may make more sense than Rust. Rust does have the advantage of being based on an LLVM backend. So perhaps different vendors can compete by writing more efficient backends that are applicable to both C++ and Rust (but you probably lose some information when skipping the compiler front end)

            • madisfun 1823 days ago
              > For example Intel's C++ compiler produces (or used to produce?) much more efficient numerical code than either gcc or clang.

              I'm not an expert, but I believe that Intel could have implemented their hardware-specific optimizations in any other compiler framework (either gcc or clang). In this case multiple language implementations, while commercially viable, are not beneficial to all users.

          • m_mueller 1824 days ago
            Use cases and compiler options go hand in hand. Every implementation is a trade-off and different fields demand different trade-offs.
        • _bxg1 1824 days ago
          But a community and an ecosystem can be built over time (and are being built for Rust incredibly rapidly). Whereas a problematic language can't really be "fixed", it can only be added to.
        • phkahler 1824 days ago
          Fair points, but none of them are inherent to the language itself.
          • paulddraper 1824 days ago
            The thing is....people don't run "the language itself".
        • wwright 1824 days ago
          That’s very true. But all of those communities, ecosystems, standards, and use cases have an extreme learning curve and a very deep problem with security. :-)
          • the_trapper 1824 days ago
            Rust's learning curve isn't exactly a shallow one either.

            For the record I think Rust has a lot going for it, but it is not the C++ killer that many are touting it to be.

            • swiftcoder 1824 days ago
              It's a bona fide C++ killer for applications that are both security and performance critical. It's already gaining traction for those applications even within relatively conservative engineering organisations.

              That said, there are many performance-critical applications that are not security-critical, and in those I'd expect C/C++ to persist pretty much indefinitely. And many security-critical applications which are not performance-critical, and can perfectly well be served by garbage collected languages like Java/C#/Go.

              • galangalalgol 1823 days ago
                Cargo is the problem for those organizations. People who worry about security and safety often develop on airgapped networks. You can go no_std for small stuff. For bigger stuff you could mirror crates.io, but that isn't a well-supported workflow and it's a lot of code from a lot of randos. The notion of a blessed subset would help get more buy-in from that community. Even still, rustup doesn't work on airgapped dev nets, and it's a nice feature especially if you are cross-compiling.
                • 0815test 1823 days ago
                  Cargo supports airgapped use (no crates.io, no GitHub) as of the latest release.
                  • galangalalgol 1823 days ago
                    Awesome! Can you provide some documentation to get me started? I have been unable to find any.
                • swiftcoder 1823 days ago
                  Thankfully Cargo is an optional component. We've replaced Cargo for internal use (all dependencies checked into the monorepo and compiled with Buck).
              • ncmncm 1824 days ago
                It is a legitimate C killer. C++, not so much.
                • rcxdude 1823 days ago
                  people who liked C (and didn't like C++) are more likely to move to go. Rust has a healthy community of ex and current C++ programmers.
                  • ncmncm 1823 days ago
                    People who don't like GC will not move to Go.
                    • majewsky 1823 days ago
                      I don't like GC (on an ideological level), but I still write most stuff in Go because I'm so insanely productive in it. Will probably use more Rust once async/await is there and mature enough.
            • arcticbull 1824 days ago
              C++ has a huge learning curve too though, the difference is it lets you write whatever you want. The learning curve is to write correct C++. It’s deceptive, it’s like skiing vs snowboarding. Skiing you pick up fast but to get good is damn hard and few bother. Snowboarding is damn hard to pick up but then it’s pretty easy to become really good.
              • dagw 1824 days ago
                > then it’s pretty easy to become really good.

                To modify this I'd say that becoming reasonably good is pretty easy (and I'd agree easier than skiing). To become really good takes a long time and a lot of dedication, and the difference in difficulty between skiing and snowboarding disappears. Same as with programming: some languages make it easier to go from 0 to your first app, some make it easier to write solid production-ready code that earns you a paycheck, but becoming really good is always hard and independent of the language you're using.

              • pjmlp 1824 days ago
                I guess that is why I enjoy Snowboard and never bothered with skiing. :)
            • 0815test 1824 days ago
              > Rust's learning curve isn't exactly a shallow one either.

              I don't know how people can be so sure of this. We know essentially nothing about how to teach or learn Rust effectively, it's something that the community is just starting to look at. However, one thing we do know is that the detailed support that the Rust compiler provides to the novice programmer is quite simply unparalleled in other mainstream languages. It's basically the ultimate T.A.

              • wwright 1823 days ago
                I’m not sure I’d use as strong language as you (though I personally love the rust compiler’s messages), but I will say it’s gotta count that it doesn’t automatically and silently generate instance methods that explicitly break the memory model (cough rule of three…)
        • otikik 1824 days ago
          I am going to postulate here that a language standard which includes undefined behaviour is not really a standard.
      • likpok 1824 days ago
        Does rust have a structure to handle something like a stringview?
    • mannykannot 1824 days ago
      std::auto_ptr was fixed one or two times before being replaced. It is a little bit unsettling to see newer features having the same sort of caveats, despite there being a lot of smart people planning the future of the language. I imagine this is due to the way that the existing features combine combinatorially to multiply the complexity of every new feature.
  • pjmlp 1824 days ago
    It is true, C++ has several warts some of them caused by the copy-paste compatibility with C.

    Which is both a blessing and a curse. A blessing, as it allowed us Pascal/Ada/Modula refugees never to deal with what was already an outdated, unsafe language by the early 90's.

    But it also makes it relatively hard to write safe code when we cannot prevent team members, or third party libraries, from using C-isms in their code.

    Regarding the alternatives, Swift is definitely not an option outside Apple platforms. And even there, Apple still focuses on C++ for IO Kit, Metal and LLVM based tooling.

    Rust, yes. Some day it might be, especially now with Google, Microsoft, Amazon, Dropbox,... adopting it across their stacks.

    However, for many of us it still doesn't cover the use cases we use C++ for, so it is not as if I will impose on myself, the team and customers a productivity pain and take double the time it takes to write a COM component or native bindings in C++ for .NET consumption, just to feel good.

    When we get Visual Rust, with mixed mode debugging, Blend integration and a COM/UWP language projection for Rust, then yeah.

    • masklinn 1824 days ago
      > It is true, C++ has several warts some of them caused by the copy-paste compatibility with C.

      I mean that's a bit of a cop-out given C++ has more non-C warts and UBs than it has C warts and UBs at this point. It's not just "copy-paste compatibility with C" which made std::unique_ptr or std::optional dereference UB.

      • pjmlp 1824 days ago
        Sure it is, because they need to be compatible with C pointer semantics.

        The large majority of C++ UB comes from compatibility with ISO C's 200+ documented cases of UB.

        And ISO C++ working group is trying to reduce the amount of UB in ISO C++, which is exactly the opposite of ISO C 2X ongoing proposals.

        • masklinn 1824 days ago
          > Sure it is, because they need to be compatible with C pointer semantics.

          They don't need to be compatible with unsafe / UB C pointer semantics, allowing them to both contain garbage and be deref'able were explicit decisions the C++ committees did not have to make but chose to.
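
          A small illustration of the difference (minimal sketch):

            #include <iostream>
            #include <optional>

            int main() {
                std::optional<int> maybe;  // empty

                // int x = *maybe;  // compiles, but undefined behaviour: no value inside

                try {
                    std::cout << maybe.value() << '\n';  // checked access throws instead
                } catch (const std::bad_optional_access&) {
                    std::cout << "empty optional\n";
                }
            }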

          • pjmlp 1824 days ago
            Some people prefer a Python 2/3 community schism, others prefer that tools actually get adopted in spite of a few transition flaws.
  • Animats 1824 days ago
    The C++ people are trying to retrofit ownership onto the language without adding a borrow checker. This is painful. They've made it possible to write code that expresses ownership, but they can't catch all the places where the abstraction leaks.

    string_view is really a non-mutable borrow. But the compiler does not know this.
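
    For example, this is exactly the kind of dangling borrow the compiler happily accepts (minimal sketch):

      #include <string>
      #include <string_view>

      std::string_view first_word() {
          std::string s = "hello world";  // owning string, local to this function
          std::string_view v = s;         // non-owning view ("borrow") of s's buffer
          return v.substr(0, 5);          // s is destroyed on return; the view dangles
      }
      // Any use of the returned view is undefined behaviour. A borrow checker
      // would reject this; a C++ compiler, by default, does not.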

    • pjmlp 1824 days ago
      Not quite true, Google and Microsoft are precisely adding a borrow checker to their static analysis tools.

      https://herbsutter.com/2018/09/20/lifetime-profile-v1-0-post...

    • fooker 1824 days ago
      > but they can't catch all the places where the abstraction leaks.

      Why does static analysis not work here?

      • kllrnohj 1824 days ago
        It does, it's just a warning and not an error. And also experimental.

        But it does exist, and does catch some of these errors. Example: https://godbolt.org/z/CZTfSx

        • humanrebar 1823 days ago
          I'd rather run diagnostics as a separate CI pass, so warnings work for me perfectly.
    • int_19h 1823 days ago
      It's still strictly better than a language with no borrow checker and no way to express ownership (other than comments), like C, or C++ itself before all the smart pointers.
  • raphlinus 1824 days ago
    From the article:

    > Dereferencing a nullptr gives a segfault (which is not a security issue, except in older kernels).

    I know a lot of people make that assumption, and compilers used to work that way pretty reliably, but I'm pretty confident it's not true. With undefined behavior, anything is possible.

    • _wmd 1824 days ago
      Linux hit a related situation: a harmless null pointer dereference was treated by GCC as a signal that a subsequent null check could not be true, causing the check to be optimized away. https://lwn.net/Articles/575563/
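
      The shape of the code was roughly like this (a simplified sketch, not the actual kernel source):

        struct device { int flags; };

        int get_flags(struct device* dev) {
            int flags = dev->flags;  // dereference happens first
            if (!dev)                // compiler: dev was already dereferenced, so it
                return -1;           // "cannot" be null here -- check removed
            return flags;
        }
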
      • mjevans 1824 days ago
        My opinion on that, is that such code MUST NOT be optimized away. Instead it should be a compile error.
        • raphlinus 1824 days ago
          You might wish for that, but the ship has sailed. Undefined behavior means that the implementation can do whatever it wants. That said, I do expect tools, both sanitizers and static analyzers, to improve to detect more of these kinds of cases.
          • lmm 1824 days ago
            The original intention of standardization was that compilers would gradually reach consensus on what the behaviour in certain cases should be, and once that happened the standard would be updated to standardize that behaviour. Compilers are allowed - indeed encouraged - to provide additional guarantees beyond the minimum defined in the standard (indeed part of the point of UB is that a compiler is allowed to specify what the behaviour is in that case).
          • umanwizard 1824 days ago
            Well, not exactly. There are things that are UB according to the standard but that particular compilers give an option to make defined: see `-fwrapv`, for example.
          • AnimalMuppet 1824 days ago
            There have been static analyzers that will detect this for years. They report "check for null after use" or some such.
        • umanwizard 1824 days ago
          The problem, as far as I understand it (though I’m a layman), is that by the time the dead code optimization pass runs, the code has been transformed so much that there’s no obvious way for the compiler to tell the difference between “obvious programmer-intended null check that we shouldn’t optimize out” and “spurious dead code introduced by macro expansion” or (in C++) “by template instantiation”.
          • mjevans 1824 days ago
            Couldn't user defined branches be tagged by such a compiler and if a tagged branch is eliminated the error generated with a reference to the tagged line in question?
            • umanwizard 1824 days ago
              That is a good idea and I’ll admit that I’m not sure why it isn’t implemented.
        • int_19h 1823 days ago
          Why should it be a compile error? The pointer may be null, but is not guaranteed to be.

          If you mean that C++ should require a null check before dereferencing any pointer that is not guaranteed to be non-null, then that would break most existing C++ code out there, so it's a non-starter.

          • leetcrew 1822 days ago
            in the particular situation they're talking about, you have a pointer to a struct, which you dereference by accessing one of its fields. the null check happens after the dereference, almost certainly a mistake.
    • kccqzy 1824 days ago
      Absolutely. In my experience, if clang can deduce a function will definitely trigger UB, such as definitely dereferencing a null pointer, it generally optimizes the entire function after the dereference into a single ud2 instruction (which raises the #UD exception in the CPU).

      This is something really hardwired into the C and C++ language. Even if the underlying operating system perfectly supports dereferencing null pointers, compilers will always treat them as undefined behavior. (In Linux root can mmap a page of memory at address 0, and certain linker options can cause the linker to place the text section starting at address 0 as well.)
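
      A toy example of the kind of function being described (actual codegen depends on compiler version and flags):

        int always_ub() {
            int* p = nullptr;
            return *p;  // provably dereferences null: clang is free to compile
                        // this whole function down to a trap such as ud2
        }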

    • tedunangst 1824 days ago
      The irony is it's mostly unsafe if you test for the null, such that the compiler can omit a test, but if there's no evidence the pointer can be null you just get a normal memory access. The optimizer is not optimized for most intuitive behavior.
      • hermitdev 1824 days ago
        The null checks are only optimized away if you've already dereferenced the pointer before the null check within a scope. The optimizer's rationale being: you've already dereferenced it, so it must not be null, therefore the null check is unnecessary.

        Also, you can "safely" dereference nullptr, just so long as you dont attempt to actually access the memory. C++ references are nothing more than a fancy pointer with syntactic sugar.

        For example:

          int* foo = nullptr;
          int& bar = *foo;                // no blow up
          std::cout << bar << std::endl;  // blowup here

        My personal $0.02 is that the C++ standard falls short with language like "undefined/unspecified behavior, no diagnostic required." A lot of problems could be prevented if diagnostics (read: warnings) were required, assuming devs pay attention to the warnings, which doesn't always happen. For example, Google ProtoBuf has chosen, at their own and their clients' peril, to ignore potential over/underflow errors and vulnerabilities by ignoring signed/unsigned comparison warnings.
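
        For anyone unfamiliar, the class of bug that warning catches looks roughly like this (toy example):

          #include <iostream>

          int main() {
              unsigned int size = 10;
              int offset = -1;

              // offset is converted to unsigned, so -1 becomes a huge value and
              // the comparison is false even though -1 < 10 mathematically.
              if (offset < size)
                  std::cout << "in range\n";
              else
                  std::cout << "out of range\n";  // this is what actually prints
          }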

        • tomjakubowski 1824 days ago
          Dereferencing a null pointer to convert it to a reference causes undefined behavior, there's nothing safe about it!

          "Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the “object” obtained by dereferencing a null pointer, which causes undefined behavior."

        • MiroF 1824 days ago
          UB isn't "safe" so I'm unsure what your comment is getting at
          • hermitdev 1824 days ago
            I guess the point I was trying to make is that what is referred to colloquially as dereferencing is different from how the compiler sees it. We see "foo" (can't get HN to emit the asterisk for pointer dereference here, no matter what I try), and we know that to be UB, but the compiler doesn't really see it until the load. Until it's actually used, it's effectively a dead store and will be eliminated, anyway.

                int& bar = *foo;
            
            Doesn't actually dereference foo. No load is issued from the address stored in foo. Until you either load or store using bar, no null dereference has occurred.

            Further if bar is never used, no actual dereference has occurred. In fact, there will be no assembly instructions emitted for the above statement because it is pure syntactic sugar. Pointers and references in C++ are the same, except with different syntax and the secret club handshake that references are presumed to never be null (but there are ways they can become null and thus the UB).

            Edit: formatting, at least attempted

            • MiroF 1823 days ago
              The problem is that we don't know what the compiler might think..

              If I write something along the lines of

                int& bar = *foo;
              
                if(!foo) {
                  // do something
                }
              
              
              The compiler very well might (and would be perfectly within its rights to) completely eliminate everything inside of the if(!foo) since it can assume the pointer is non-null because it is being dereferenced.
            • ncmncm 1824 days ago
              This is very definitely false. That is totally UB, launch-the-missiles stuff. Check your references before you repeat this silliness.
    • kevin_thibedeau 1824 days ago
      Definitely not true. Consider an IoT device without an MMU.
      • AnimalMuppet 1824 days ago
        Most of the ones I am familiar with had 0 as a non-writable address, so you'd still crash. [Edit: Though that's probably hardware specific, and the hardware was usually custom.] It might be called "bus error" or some such instead of "segfault", but it was pretty much the same behavior.
        • kevin_thibedeau 1823 days ago
          Plenty of microcontrollers have a vector table at address 0. Best place to start injecting code.
          • AnimalMuppet 1823 days ago
            Sure. The 68000 series did. But address 0 held the starting program counter, and address 4 held the starting stack pointer (or vice versa - it's been a while). Those two were usually mapped to ROM, because they had to have the right values even on cold boot. But that also meant that they weren't writable. So if you had a null pointer, you could read through it, but an attempt to write through it would give you a bus error.
  • jclay 1824 days ago
    I really don't get all the hate that C++ gets. The suggested alternatives in the article are Rust and Swift. What if you need to develop a cross-platform GUI that has a backend running a CUDA or OpenCL algorithm? For the former, you can use Qt, which isn't without its warts, but is pretty tried and true in my experience (see KDE, VTK, etc). For the latter, you'll end up writing your CUDA code in C++ anyways. I guess you could go the route of writing bindings, but that is not without additional effort. Not that it won't happen for Rust, but C++ also has tooling suited for enterprise use that is largely unmatched in other languages (Visual Studio, Qt, etc). Sandboxing, static analysis, and fuzzing tools are also mostly built for C/C++ codebases. It's also an ISO standard language, which makes it both a language full of warts due to design by committee and a candidate for a stable, long-lasting language that will outlive many of us. (Try finding an ISO Standard Language you don't hate).

    Either way, C++ is certainly not for every project, but the articles scattered around the web claiming it should be superseded by Rust are plentiful. These opinion pieces make no attempt to credit C++ for when it does make sense to use it. Despite its quirks, it is still the most practical way to program HPC applications or cross-platform GUIs that are not Electron based. The security tools around it and the fact that it's an ISO standard language make it a solid choice for many enterprises.

    • mannykannot 1824 days ago
      I do not think it helps to think in emotional terms such as 'hate'. There is nothing wrong with discussing potential problems, and the current utility of the language should not stop us asking whether we could do better in future.

      FWIW, I use C++, not Rust or Swift, and I have a fair amount of knowledge and experience vested in it, but I think these questions are worth asking.

      • Rexxar 1824 days ago
        > I do not think it helps to think in emotional terms such as 'hate'

        I think 'hate' really does represent the mindset of some people (even if they are a minority), but even if we ignore this extreme, the level of irrationality in technical discussions is generally quite high. You need rational people to have a rational discussion. The sad reality is that a lot of technical discussions are only superficially rational and are often a political play to assert superiority over other people (it's true for languages, frameworks, code editors, methodologies, etc ...).

      • blub 1824 days ago
        The questions are worth asking. But the Rust crowd is not asking questions, they're dictating solutions, or rather that one old solution of rewriting everything to Rust.

        Meanwhile the Firefox rewrite, the premium example of what they propose is still plodding along and Mozilla PR blogs aside, Firefox is still plugging vulnerabilities in each release and will be for the foreseeable future.

        Now let's look at the Swift community... do we have blog posts from them every week about how awesome Swift is and why one should rewrite their working C and C++ code in Swift? No, they keep doing their thing, Swift is becoming better at cross platform, it's also getting some support for machine learning.

        That's how one grows a language, through building successful projects, staying positive (and having an entire platform behind it). Not through doomsday scenarios and a constant barrage of criticism.

        • 0815test 1824 days ago
          > That's how one grows a language, through building successful projects, staying positive (and having an entire platform behind it). Not through doomsday scenarios and a constant barrage of criticism.

          This is exactly what the Rust community is doing! RIIR is something that's only really insisted on for relatively small pieces of security-critical code. With huge codebases like Firefox the rewrite is done piecemeal, to put the rewritten code in use as quickly as possible. The "doomsday scenario" talk about memory-unsafe languages does not come from people writing Rust, it mostly comes from the security community, even at places like Microsoft - because guess what, they've literally been running around with their hair on fire for decades, and they're sick of this especially now that something like Rust is available!

      • dejaime 1822 days ago
        Saying that C++ won't "save" us is already pretty emotional and, put simply, wrong. We are not really facing imminent doom or anything that would justify that word, other than an overemotional point of view, biased by personal feelings.
    • badsectoracula 1824 days ago
      C++ does have its positives, as you mentioned, but those positives do not make its negatives go away, nor having negatives means that there aren't positives. You can dislike some parts of the language while still using it for its positive aspect - that doesn't mean the negative parts do not exist nor mentioning them means that there are no positives.
      • blub 1824 days ago
        Once again, this is not merely mentioning negatives, it's just more submarine advertising for Rust.
        • int_19h 1823 days ago
          Rust is the only serious attempt to fix those negatives while remaining in the same niche, so bringing it up in this context is natural.

          And C++ can't really truly fix them without breaking backwards compatibility with all the legacy C++ and C code, which is its main selling point.

          • blub 1823 days ago
            It's the only "serious" attempt as declared by whom exactly, the committee of serious attempts?

            There are other serious attempts (D, Swift, Go) which the Rust community likes to dismiss for various reasons, but at least two of them are currently much more successful than Rust. They don't have to be 100% in the same niche to take a bite of marketshare.

            Even if C++ breaks backwards compatibility in some ways, it will still have better backwards compatibility to itself and C than Rust or any other language. This break could be something as radical as a C++ "unsafe", or it could be clang's -Wlifetimes, or something else. Credit's due to Rust here for pushing some parts of the C++ community to search for solutions.

            • int_19h 1823 days ago
              I do not dispute that the languages that you've listed are serious attempts. They do not remain in the same niche, however. I would define that niche as "capable of replacing C even in free-standing implementation".

              For D and Go, having a GC immediately puts them outside of that niche. For Go, I would also add all the FFI weirdness due to its weird stack discipline, which means that it is non-zero-overhead when interacting with non-Go code - a fatal omission for any contender for a low-level systems language.

              Swift is much closer to the metal, and I would consider it a serious contender if it was pushed on all platforms. But it seems that Apple is not interested in its use outside of their ecosystem, which constrains its effective niche to be much narrower than C++ or Rust, ironically.

              And yes, of course C++ is always going to have better backwards compatibility. If it didn't, it wouldn't be C++. But its ability to fix issues is directly correlated with that compatibility - it's a dial where you can have more of one and less of the other, as you choose, but you can't have both. Rust (and Swift) can fix more problems, or can fix problems in better ways, because they are not so constrained.

              Conversely, if C++ were to introduce safe-by-default, and require explicit opt-in into unsafe - with all present code being considered unsafe - then what you have is a new language that just happens to embed C++ for compatibility reasons. At that point you might as well fix the syntax warts etc as well in that new safe language, since it breaks everything anyway.

    • magila 1824 days ago
      While there are obviously still cases where C++ makes sense to use today, those cases are overwhelmingly based on the age and maturity of the C++ ecosystem. Now that Rust has proven that a language can provide memory safety without compromising (much) on performance, it is clear that the scope of C++'s supremacy is in permanent decline.

      As Rust (or another language with similar safety/performance properties) matures and its ecosystem grows, C++ will increasingly become a language of tiny niches and legacy codebases.

      In other words: C++ is the new Fortran.

      • ajross 1824 days ago
        > In other words: C++ is the new Fortran.

        Which makes Rust the new... APL?

        I think the analogy is pretty apt as far as it goes. Fortran by the 70's was a crufty language with a bunch of legacy mistakes that remained very popular and very useful and would continue to see active use for decades to come.

        And everyone knew that. And everyone had their own idea about the great new language that was "clearly" going to replace Fortran. And pretty much everyone was wrong. The language that did (C) was one no one saw coming and frankly one that didn't even try to fix a lot of the stuff that everyone was sure was broken.

        For myself, I despair that Rust has already jumped the proverbial shark. Its complexity is just too severe; the only people who really love Rust are the people writing Rust libraries and toolchains, not the workaday hackers who are needed to turn it into a truly successful platform.

        • arcticbull 1824 days ago
          It’s definitely easier to reason about than C++ because it errs on the side of safety and explicitness. You can use things you don’t understand without fear, which straddles the boundary in a good way IMO. To your point, that doesn’t make it simple.

          As a workaday hacker, it’s completely become my go-to language when I’m writing tools or libraries, or just want to knock out a simple algorithm to prove myself right or wrong.

          • ajross 1824 days ago
            > It’s definitely easier to reason about than C++

            See... I don't think that's true, and would argue that the huge body of C++ code and talent in the ecosystem is an existence proof to the contrary.

            I mean, sure, C++ has its crazy edge cases and its odd notions. But you don't need to understand the vagaries of undefined behavior, or the RVO, or move semantics to write and deploy perfectly sensible code. Literally hundreds of thousands of people are doing this every day.

            Now, that may not be a convincing argument about the value of that code. But it's absolutely an argument about the utility of the language in aggregate.

            I'll be frank: probably 40% of professional C++ programmers aren't going to be able to pick up Rust and be productive in it, at all. And at the end of the day a language for The Elite isn't really going to mean much. We've had plenty of those. Rust is the new APL, like I said.

            • arcticbull 1824 days ago
              I'm not really arguing about the value of that code either, just that it's probably wrong, probably trivially breakable due to the sheer mountain of complexity underlying it. The compiler just happened to let it through because it can't help you. The language doesn't give it enough information to do so effectively.

              Just off the top of my head, std::move doesn't... move [1]. It just returns, I kid you not, a "static_cast<typename remove_reference<T>::type&&>(t)", without doing... anything. You can probably keep on using the old value, silently, until out of the blue it stops working one day. Then you're super, duper sad. Even modern language features are, I don't want to say lies, but "hopes and dreams" the compiler can't enforce. It's as if you really wished C++ had Rust's features, but you can't have them without breaking things, so you give it your best shot, which ends up just creating yet more complexity.
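
              To make that concrete, this compiles without a peep (a minimal sketch; exactly what the moved-from string holds afterwards is up to the implementation):

                #include <iostream>
                #include <string>
                #include <utility>
                int main() {
                    std::string x = "value";
                    std::string y = std::move(x); // "moves" x into y
                    std::string z = x;            // still compiles: copies whatever shell x was left holding
                    std::cout << z << '\n';       // prints... something, quite possibly nothing
                }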

              Rust's answer is...

                let x = Value::new();
                let y = x;
                let z = x; // COMPILER ERROR: X GOT MOVED INTO Y
              
              C++'s modern features are to Rust's equivalents what the ruined fresco [2] was to the original. If you stand far enough back, it's basically right. If you get up close it's hilariously and trivially broken.

              It takes a lot of gymnastics to call this language approachable or understandable. It's basically a coal powered car made of foot-guns. That doesn't mean it's not a car, or that it won't get you where you're trying to go, I'm just saying it's an open question how many pieces you'll arrive in.

              [1] http://yacoder.guru/blog/2015/03/14/cpp-curiosities-std-move...

              [2] https://www.npr.org/sections/thetwo-way/2012/09/20/161466361...

              • masklinn 1824 days ago
                > Just off the top of my head, std::move doesn't... move [1].

                I mean that makes sense in a (somewhat nonsensical) way, std::move is a marker for "you can move this thing if you want".

                The much weirder part is that even if a value is moved it's not moved, it's carved out, you get to keep a shell value in a "valid but unspecified state". Reusing that value (which the compiler won't prevent) may or may not be UB depending on the exact nature of the operation and state.
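
                A minimal sketch of how operation-dependent that is, using std::unique_ptr (made-up example):

                  #include <memory>
                  #include <utility>
                  int main() {
                      auto p = std::make_unique<int>(1);
                      auto q = std::move(p); // p is left behind as a valid but empty shell (it holds nullptr)
                      p.reset();             // fine: reset() on the shell is well-defined
                      bool ok = !p;          // fine: testing the shell is well-defined
                      // int boom = *p;      // UB: dereferencing the shell is not
                      return ok ? 0 : 1;
                  }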

                Oh, and of course a change / override on the callee side plus a recompile can change the behaviour of the call site entirely (e.g. a previously moved value is not moved anymore, or the other way around), but that's pretty common for C++.

            • estebank 1824 days ago
              I'm curious, what has given you the impression that Rust is a language "for the elite"? It certainly has some rough edges around learnability, but I don't think anyone is actively trying to discourage people from picking it up. Rust is certainly hard for experienced programmers because some common patterns in other languages are not allowed by Rust's rules, but that's no different than trying to apply OOP in a functional language.
            • leshow 1824 days ago
              > But you don't need to understand the vagaries of undefined behavior, or the RVO, or move semantics to write and deploy perfectly sensible code.

              Don't you kind of have to though? These invariants are in your code no matter what, in the case of Rust they are checked by the compiler (you don't necessarily have to understand every nuance, because it's checked for you), in C++ they aren't checked and are a potential bomb waiting to go off.

            • lpghatguy 1824 days ago
              Claiming that Rust is a language for 'The Elite' is amusing in light of the recent Rust website redesign, with the following headline[1]:

              > Empowering everyone to build reliable and efficient software.

              The language is entirely about inclusion, empowerment, and removing the fear of systems development.

              [1]: https://www.rust-lang.org/

              • arcticbull 1824 days ago
                Perception always lags reality, and it definitely was hard to learn when I picked it up 3-4 years ago. Things have improved so much since then, with non-lexical lifetimes in R2018, better compiler errors, stdlib standardization, RLS + VSCode, etc.
            • 4thaccount 1824 days ago
              "Rust is the new APL,like I said"

              I know you're trying to compare it to APL as that language mostly died off and is thus obscure, but I think the analogy is a little off.

              While APL is weird, it is actually really easy for me (someone with less than 15 hours playing with the language in total over the past few years) to code up some basic scripts, a lot more easily than in something like C++.

              I'm being absolutely serious too. C++ is pretty low level and as a Python coder I feel like I'm sinking in quicksand with everything required to do something simple. APL is basically built around passing arrays of numbers or strings to weird symbols that operate on the whole array. This means I can do text processing with only a few symbols and a library function (and all interactively) where C++ requires lots of boilerplate and debuggers and compilation and pointers. In short, APL seems to be a lot less complicated than both Rust and C++ in my opinion and most using it have very little formal programming experience and have no problem picking it up from what I've read.

              I know what you were essentially trying to say though.

              • ajross 1823 days ago
                I picked APL because it matches the "WTF cray cray" aesthetic that Rust's syntax presents to new users. I can see an argument that Ada is the right analogy if you're going for pure complexity.

                And yes, Fortran, APL, Ada (also Modula-2 & Oberon, Smalltalk and a bunch of other forgotten languages presented as the Next Big Thing at the time) are all uniformly simpler than either C++ or Rust. The modern world is a more complicated place and programming tools have kept up.

            • Inityx 1824 days ago
              > the huge body of C++ code and talent in the ecosystem is an existence proof to the contrary

              Looks to me more like proof that C++ has been around a long time

              • ncmncm 1824 days ago
                Lots of languages have been around a long time without attracting billions of lines of code. To get that, the language must be unusually useful.
                • dragonwriter 1824 days ago
                  That's a powerful endorsement of COBOL, but a lot of language success is “when did it become common”, “who was sponsoring it or what libraries did it come bundled with”, and path dependence that makes momentarily (due to transient conditions) sensible choices into standards that are mandatory for decades.
                • int_19h 1823 days ago
                  I don't think anybody is denying that C++ was (and is) unusually useful, simply by virtue of being the only serious game in town when you need that whole "don't pay for what you don't use" thing, and general performance stemming from that. And devising a better replacement that retains that feature is hard, which is why C++ had so much time to entrench.

                  But it doesn't mean that we can't do it better these days.

            • dragonwriter 1824 days ago
              > I'll be frank: probably 40% of professional C++ programmers aren't going to be able to pick up Rust and be productive in it, at all.

              Well, a certain number of professional programmers are past the point of being willing to learn new technologies, so maybe that's true. But close to 100% of the people who might become professional C++ programmers could instead become professional Rust programmers.

        • nunjee 1824 days ago
          Happy workaday Rust hacker here. Coming from higher-level languages, the semantics make much more sense to me, after a quite harsh learning curve and some un-learning. Non-lexical lifetimes are a game changer for Rust learnability, I think, and I increasingly fail to see use cases where I can't just use it.
      • ncmncm 1824 days ago
        If C++ is the new Fortran, Rust might very well be the new Ada. Many of the same relative merits were claimed for Ada as for Rust, and it had the backing of the biggest and best-funded organization in the world, but it faded from view because it did not keep up.

        Rust could easily go the same way.

        • pjmlp 1824 days ago
          Given the market size for Fortran and Ada, that doesn't bode well for Rust.
    • 932 1824 days ago
      Yeah, agreed. The points in the article are valid, but they're quirks you learn and get past the first time. I still shoot myself in the foot sometimes even though I don't have a single bare new/malloc without a shared/unique ptr! But that's C++ for you.
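
      (One made-up sketch of the kind of thing I mean: not a single bare new, and it's still trivially wrong.)

          #include <memory>
          // Hypothetical helper: hands out a raw view of a value it owns.
          int* view_of_fresh_int() {
              auto p = std::make_unique<int>(42);
              return p.get(); // the raw pointer escapes...
          }                   // ...and the int it points at is destroyed right here
          int main() {
              int* dangling = view_of_fresh_int();
              return *dangling; // use-after-free, with no bare new/delete anywhere in sight
          }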

      But, C/C++ is the best option for us for high-performance network processing. We're dabbling with Rust for small applications where we would use Python previously and it's working pretty well -- but there's no way we could use Rust for the core application yet. Modern C++ has really grown on me and it's sometimes a love/hate relationship but totally a huge improvement over ancient C++ or C.

      • mrbrowning 1824 days ago
        > The points in the article are valid, but they're quirks you learn and get past the first time. I still shoot myself in the foot sometimes even though I don't have a single bare new/malloc without a shared/unique ptr! But that's C++ for you.

        I think the article maybe doesn't do enough to outline the full extent of the problem by focussing on a few counterintuitive cases that are present in C++17, because you're right, all of those cases in the article are ones that can be learned and remembered without issue. The real problem, as I see it, is actually that the core language semantics mean that there's no foreseeable end to the foot-shooting treadmill. Since the language is fundamentally permissive of such things, it's likely that further spec revisions will introduce abstractions like string_view that are easy to use unsafely, aren't flagged by static analysis tools, and end up in security-critical code.
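
        Roughly the article's first example, for reference (the exact string contents here are my guess):

            #include <iostream>
            #include <string>
            #include <string_view>
            int main() {
                std::string s = "Hello ";
                std::string_view sv = s + "World\n"; // sv points into a temporary string...
                std::cout << sv;                     // ...that is already gone by here: UB
            }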

        Because this feels like a necessary disclaimer, I don't think that fact justifies migrating every active C++ codebase out there to Rust or anything, since pragmatically speaking there are a lot more factors beyond just core language semantics that go into evaluating the best choice of implementation language. I guess my takeaway is neatly expressed by the post title: there's a sense that I get from C++ users (granted, maybe only naive ones) that sticking to the features and abstractions introduced in C++11/14/17/etc basically eliminates all of the potholes of old, and it's evident that that's not true and will probably continue to be not true.

      • sanderjd 1824 days ago
        > but there's no way we could use Rust for the core application yet

        I'd love to hear more information on this! In my mental model, you could just use Go to replace your Python utilities, but Rust might be workable for your core (or at least its designers would like it to be and would like to know why it isn't).

        • 932 1824 days ago
          We are Rust noobs :-). The Python utilities are random tools and daemons, so it's been a nice experience getting my feet wet in a completely different paradigm with rustc.

          So the biggest challenge with moving to another language is reproducing the same low latency and high performance we've carefully designed in C++ in a Rust analogue... which, given we haven't really used Rust enough yet to have a total sense of this, is hard.

          In the longer term, I can see Rust being adopted in gradual stages -- but of course focusing on biz objectives comes first when we already know how to write high-perf C++, hence spending time playing with Rust on side tooling or other smaller projects.

          From what little Rust I've written so far I really do like it, so I'm hoping I can incorporate it more.

    • bobajeff 1824 days ago
      >Not that it won't happen for Rust, but C++ also has tooling suited for enterprise use that are largely unmatched in other languages

      Hopefully that stuff will be helped with things like Language Server Protocol and Debug Adapter Protocol.

    • _bxg1 1824 days ago
      But most of the things you just listed are just aspects of the existing ecosystem (libraries, tooling, etc.). There's no doubt C++ has an incredibly large ecosystem and will therefore be around for quite a while to come, but that doesn't make it a good language, it just makes it one that happens to have been very popular for a very long time. Our industry is one that values progress over tradition in the long run. I think C++ has entered its twilight years. That could mean five, ten, or twenty years, but I think it's peaked, and I don't think that's a bad thing.
      • jclay 1824 days ago
        The point is that I've noticed a broad "religious" trend where those promoting Rust don't lend any credit to the places where C/C++ has valid strengths, even if due to its legacy. It doesn't do a great service to either community to constantly pit the two against each other, and to misrepresent the other in a way that's not honest. C++ doesn't exist and continue to evolve just because it's been around forever; there are a number of things leading to its continued use that should be brought into the discussions.

        C++ isn't going anywhere. In 20 years you may not be writing in it, but you'll still be calling into it somewhere in the software stack (especially if things continue moving the WebAssembly direction).

        Even if you're using Python's SciPy today, you're calling into LAPACK written in Fortran.

        • pjmlp 1824 days ago
          Case in point, most modern OS GUIs are written in managed languages nowadays, even Qt has JavaScript bindings now.

          Yet, C++ is still there as the binding layer between UI and GPGPU.

      • ncmncm 1824 days ago
        That doesn't make C++ a good language. It just is one.

        Rust is also a good language. Trash-talking C++ does no one any good.

        Overwhelmingly, the substantial gains to be made are moving people off of C. Every other possible benefit is a rounding error. It is still much easier to get people to C++. Once dislodged, they might continue on to Rust, or be seduced by C++'s greater expressive power and more powerful libraries. Either way the world will be better.

        C++ today seems weighed down by legacy cruft, compared to Rust, but Rust is rapidly accumulating its own legacy cruft. By the time it is mature it will have easily as much of its own.

        • koffiezet 1823 days ago
          I came from a C background (mostly for embedded stuff, where on many platforms a C++ compiler wasn't an option) - and eventually moving to C++ for backend and library work was a pain in the ass. I got the hang of it, but never loved it, although for the goals we had to achieve it was a sensible choice. I eventually moved into more of an ops/SRE role, mainly due to C++ not entirely being my thing, while also being the dev that was always most in touch with the ops side of things.

          I do see many of the advantages of both Rust and C++, but one of the reasons I like C is its relative simplicity. I only did a single small thing in Rust to try it out, and while still quite complex, at least it felt more manageable than C++ once you understood the borrow checker. The big elephant in the room however is Go. Every time I started something and would consider Rust, it ended up being 'why not just Go?'.

          At least for me, it was a much better suited language to move to coming from a C background. The biggest initial hurdle was setting up the dev environment with the completely backward GOPATH and GODIR environment variables - which just feels absolutely wrong (although this is now in the process of being addressed). The language itself however was an absolute breeze; I felt right at home. Simple, quick, straightforward, with tons of libraries and tools for my current field of work, coupled with performance more than acceptable for 99% of the applications I need and static binaries which are easily deployed anywhere, also eliminating a ton of complexity. Is it perfect? No, but what language is? But if you want to convince C programmers to ditch C for a memory-safe language, Go is imho in many cases a much better option to move to.

        • _bxg1 1824 days ago
          "or be seduced by C++'s greater expressive power"

          There's a deeper debate lying at the heart of the Rust-vs-C++ conversation (it's the same one at the heart of Haskell-vs-Lisp), which is really about expressive freedom vs. the strategic usage of constraints. That debate will, truly, outlive all of us. You can probably guess which side I'm biased towards; I won't lay it all out here.

          • ncmncm 1823 days ago
            This is not about "expressive freedom" vs "constraints". Rust lacks many of C++'s key core language facilities to capture semantics in a library. As a consequence, you cannot write powerful libraries in Rust that you can in C++, and you cannot use powerful libraries such as are written in C++.

            Since you cannot use these powerful libraries, you are (if you like) "constrained" to write fragile code at what would have been the call site.

            Each use of a powerful library eliminates all the bugs that would have come from not using one. Those are bugs that Rust designers have elected to keep, in exchange for the memory-use bugs that we largely eliminate, in C++ code, by reliance on powerful libraries.

            Powerful libraries eliminate many, many more bugs besides memory misuse.

            • int_19h 1823 days ago
              Can you give an example of said "key core language facilities" that make some library implementable in C++, but not Rust?

              I can give an example of the opposite: language-aware macros. No amount of C++ TMP hackery can approach a well-designed Rust DSL.

              • ncmncm 1823 days ago
                Rust macros understand types now? Woohoo!
                • int_19h 1822 days ago
                  Can you clarify what you mean by "understand types"? The only thing I can think of in this context is compile-time reflection - but that's not in C++, either (yet).

                  Ideally, can you give a concrete example of some abstraction that can be implemented in C++ with templates, but not in Rust with generics and/or macros?

            • _bxg1 1823 days ago
              "Rust lacks many of C++'s key core language facilities to capture semantics in a library."

              Can you be more specific? I don't even know what you mean by "powerful library". If you mean a library that's been developed and debugged for a long time, then sure, C++ currently has the advantage there, but that's a transient state and nonspecific to the language itself. If you mean a library that does wild, earth-moving things then I would call that a liability, not an advantage. The language features that allow such things are sources of bugs that Rust designers have elected not to introduce in the first place.

              The question of, "In April 2019, which language and ecosystem are more reliable?" is a perfectly valid one. Stronger language semantics vs decades of library refinement. It's not at all obvious. But the answer to "Is Rust or C++ a better language in the long run?" seems clear to me.

              • ncmncm 1823 days ago
                Could you write a Rust equivalent of the STL, or its modern cousin Ranges? That's just one library (as of C++20), but if Rust is not up to that, there's no point in going further.
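
                For concreteness, the kind of composition Ranges gives you (a tiny C++20 sketch):

                    #include <iostream>
                    #include <ranges>
                    #include <vector>
                    int main() {
                        std::vector<int> v{1, 2, 3, 4, 5, 6};
                        auto evens_squared = v
                            | std::views::filter([](int x) { return x % 2 == 0; })
                            | std::views::transform([](int x) { return x * x; });
                        for (int x : evens_squared)
                            std::cout << x << ' '; // 4 16 36
                    }
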
                • steveklabnik 1823 days ago
                  The equivalent of Ranges is already in Rust’s standard library, and has been forever. We call it Iterator. It also provides extra static guarantees against invalidation.
                  • ncmncm 1823 days ago
                    I see that you did not understand my question.

                    I see, too, that there are plans for some support for generics in the near future. So the answer might become yes, in time.

                    • steveklabnik 1822 days ago
                      Rust has also had generics for a very long time.
    • shmerl 1824 days ago
      Qt has Rust bindings now. I hope CUDA will get replaced with proper cross GPU alternatives anyway. Rust GPU programming should be also possible.
      • jclay 1824 days ago
        The issue with bindings is that you either commit to maintaining them yourself, or you rely on someone else. In this case, the Rust Qt bindings I found [0] were generated for Qt 5.8, which was released nearly 3 years ago. The tests on Github report "failing".

        Then, you have the cognitive overhead of translating the documentation, and other sample code. In my experience with bindings, this ends up requiring knowledge of the language you're binding to. It seems easier to just write it in the native language instead and deal with those quirks rather than bindings quirks.

        Do the Rust bindings show the Qt docs in the autocomplete? If there's no input validation on the binding side, then you'll end up in C++ again figuring out how to sort things out.

        Regarding CUDA, I think we're all hoping for a cross GPU alternative. There's OpenCL, Sycl, ROCm, Kokkos but their API is also written in (you guessed it) C++. Need to render to OpenGL? You'll be writing in C. Unless one of the companies decides to replace driver interfaces with Rust, any application using them will be dependent on N bindings working.

        You're ultimately not escaping C/C++ for any systems development. You either deal with the complexity of interfacing between language A and C/C++ or just deal with the quirks of C/C++ themselves. Pick your poison.

        0. https://github.com/rust-qt/ritual

        • shmerl 1824 days ago
          There were multiple attempts at such bindings. The most promising is this one:

          https://github.com/KDE/rust-qt-binding-generator

          > You're ultimately not escaping C/C++ for any systems development.

          We eventually should. C/C++ should retire even for drivers and kernels. But for now there is still a lot of baggage to deal with indeed.

      • mokus 1824 days ago
        Rust gpu programming is very possible already, but as of my most recent foray into the area it was still very much a black art getting your build environment set up for it. It’s probably just a matter of time before it becomes an easy thing to do.

        I’ve done a fair bit of mixed Rust + CUDA C++ though, and found it to be a very nice way to build high performance code with safe high-level interfaces that someone can grab and use with little to no understanding of GPU architectures. It’s even pretty straightforward to build wrapper types that leverage Rust’s ownership system to track lifetimes and safe management of device buffers as well (unfortunately I can’t release that code but it really was pretty simple so hopefully someone else will soon do it openly, or by now maybe someone already has)

    • draw_down 1824 days ago
      I don’t see how you can read TFA and describe it as “hate”. The author is showing real problems with C++; you could disagree with the severity of these problems or the solution presented, but calling this hate is so childish and dishonest.
  • 0xDEEPFAC 1824 days ago
    What do you need saving from - Ada has existed for nearly 40 years now ; )
    • ajxs 1824 days ago
      I came here to the comments to post this exact thing, haha. I'm very late to the Ada party, and I'm amazed at how ahead of its time this language was. It's still very usable and modern by today's standards.
      • DoingIsLearning 1824 days ago
        > by today's standards

        You make it sound like Ada stopped in the 80's.

        They don't release standards in rapid succession but 'Ada 2012' has pretty much all of the features that people were asking for in C++ since 2011.

        The only issue (on top of the obvious lack of coolness and hype around it) is that professional-grade Ada compilers/toolchains still come at quite a high cost for single developers or small companies. AdaCore's business model is still pretty much focused on support contracts to big Aerospace/ATC/Defense clients.

        • 0xDEEPFAC 1824 days ago
          Many of AdaCore's "community" versions are the full compiler, and if they don't have builds available for your bareboard arch you can build it yourself or get one from GCC.

          The only difference is that use of the special "GNAT.X" packages outside the standard runtime is under a GPL restriction, and you would be required to export those dependencies as a separate lib and do open dev on it.

          Otherwise you are free to sell or keep any trade secrets you want without giving AdaCore anything.

          • DoingIsLearning 1824 days ago
            TIL. I was under the impression they were a lot less permissive outside a commercial license.
        • ajxs 1824 days ago
          My comment was more addressing the general public's misconceptions of the language. I showed some of my teammates the work I've been doing in it and they were shocked, expecting the language to look like COBOL. When I referenced "today's standards", I was referring to the kinds of modern features people expect from a newer language. I haven't had too many problems using the community version of GNAT made by AdaCore, but I share many people's concerns with the main toolchain being developed by a private enterprise who commercialises it. The version of GCC bundled with Fedora Linux IIRC had an Ada toolchain out of the box too. The lack of runtime support for a wide range of architectures/boards was a bit of a bummer, but that's just a function of its popularity I guess.
  • jasonhansel 1824 days ago
    Can someone at least make a linter that ensures you only use a "safe" subset of C++?
    • pfultz2 1824 days ago
      Clang's lifetime profile will catch the first example:

          <source>:8:16: warning: passing a dangling pointer as argument [-Wlifetime]
            std::cout << sv;
                         ^
      
          <source>:7:38: note: temporary was destroyed at the end of the full expression
            std::string_view sv = s + "World\n";
                                               ^
      
      And cppcheck will catch the second example:

          <source>:7:12: warning: Returning lambda that captures local variable 'x' that will be invalid when returning. [returnDanglingLifetime]
              return [&]() { return *x; };
                     ^
          <source>:7:28: note: Lambda captures variable by reference here.
              return [&]() { return *x; };
                                     ^
          <source>:6:49: note: Variable created here.
          std::function<int(void)> f(std::shared_ptr<int> x) {
                                                          ^
          <source>:7:12: note: Returning lambda that captures local variable 'x' that will be invalid when returning.
              return [&]() { return *x; };
                     ^
      Cppcheck could probably catch all the examples, but it needs to be updated to understand the newer classes in C++.
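
      For reference, the second example reconstructed from the lines quoted above looks roughly like this:

          #include <functional>
          #include <memory>
          std::function<int(void)> f(std::shared_ptr<int> x) {
              return [&]() { return *x; }; // the lambda captures the local shared_ptr x by reference;
          }                                // x dies when f returns, so calling the result is UB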
    • safercplusplus 1824 days ago
      Others disagree, but I suggest that someone could make such a linter. As others point out, the Core Guidelines "lifetime profile checker"[1] is designed to be an advanced static analyzer that restricts how many C++ elements can be used. It's not finished yet, and the current version is not designed to achieve complete memory safety (and doesn't address data race safety). Whether or not subsequent versions could match the full safety enforcement of Rust's compiler seems to be a matter of some debate.

      But there is an alternative/complementary approach, which is to simply avoid potentially unsafe C++ elements, like pointers/references, arrays, std::string_views, std::threads, etc., substituting them with safe, largely compatible replacements[2]. This approach has the benefit that an associated safety-enforcing "linter" would not impose the same kinds of "severe" usage restrictions that the lifetime profile checker (or, say, the Rust compiler) does.

      [1] https://devblogs.microsoft.com/cppblog/lifetime-profile-upda...

      [2] https://github.com/duneroadrunner/SaferCPlusPlus

      edit: grammar

    • steveklabnik 1824 days ago
      The Core Guidelines are an attempt at this, but it’s not fully safe. Safer, which matters! But not safe.

      There isn’t really any useful safe subset of C++. If there were, Rust may never have been created in the first place.

      • pjmlp 1824 days ago
        Still, C++ is good enough for the unsafe low level bindings of a Java/.NET application.

        Until Rust's tooling catches up with what C++/CLI, C++/CX, C++/WinRT + .NET or Java + C++ (Eclipse/Netbeans/Oracle Studio), CUDA, Unreal/Unity, GLSL/HLSL/Metal Shaders allow for, it will stay as a safe way to write CLI apps and a couple of UNIX libs.

        I like the language and advocate it often, but I am also very pragmatic regarding the areas I and customers work on.

      • ncmncm 1824 days ago
        This is not, in fact, the case. Back when Rust was begun, it had some merit, but C++ is a rapidly moving target.

        C++ still does not have an absolutely safe subset, but it has a safe-enough subset, and plenty of other merits that will ensure its continued competitiveness.

        Rust will continue improving, too, and someday may be as expressive as C++ is today, or perhaps even as expressive as C++ is then. That will be a good day, although by then some other language will be on the rise, its users hoping to displace C++ and, given enough luck and hard work, Rust.

        V could be interesting.

    • 0815test 1824 days ago
      The answer is essentially no, at least if you're seeking substantial levels of assurance or safety. Even the C++ Core Guidelines effort, https://github.com/isocpp/CppCoreGuidelines which is the closest thing to what you describe and is driven by influential members of the ISO C++ community including B. Stroustrup, does not claim that they'll be able to make C++ memory safe.
  • User23 1824 days ago
    For systems programming languages, safe by default with scoped unsafe code is a Pareto improvement on unsafe everywhere.
    • pjmlp 1824 days ago
      A feature that exists since 1961, across several systems languages.
      • kiriakasis 1824 days ago
        I never understood this type of comment. Is what you are trying to say something like:

        "It was already tried and failed, why is this time better"

        "Mainstream languages always end up not using it"

        "People should reference more the original works of the past"

        ...

        One of the many explanations of the name Rust is that it represents a collection of old ideas. What was the point you were trying to convey, specifically?

        • pjmlp 1823 days ago
          People should reference the original works of the past more, instead of rediscovering them
          • kaens 1823 days ago
            I think more of what look like rediscoveries may actually be references than you suspect, but in the cases that really are rediscoveries there is a bit of a knowledge and discoverability issue around PL features for people who aren't already PL nerds.

            I'd love it if more people had a more solid understanding of the ideaspaces that have been covered in the PL landscape, but considering that most common paths to working in software (and even to creating and contributing to langs) don't involve needing to know PL history I'm not sure how to get there from here.

            If you have resources you think people should be utilizing here, please speak up.

            • pjmlp 1823 days ago
              I keep posting them here.

              How I got to learn about them?

              Having a solid Informatics Engineering degree, with focus on systems programming, graphics and compilers, and a very nice university library.

              That was it, we had to hunt for books, compuserve, gopher and BBS were still a thing.

              Nowadays learning about the history of PL is a google/bing/... search away, a couple of seconds with access to plenty of scanned papers and conference proceedings since the early 60's, so one has to be quite lazy not to research them.

  • jmole 1824 days ago
    Question - How does one write microcontroller code (or other memory-mapped I/O code) using a memory-safe language?
    • clouddrover 1824 days ago
      • hu3 1824 days ago
        That's not memory safe as the book itself states:

        > Now, the volatile accesses are performed automatically through the read and write methods. It's still unsafe to perform writes, but to be fair, hardware is a bunch of mutable state and there's no way for the compiler to know whether these writes are actually safe, so this is a good default position.

        https://github.com/rust-embedded/book/blob/9c05a419fc2ad231c...

        • clouddrover 1824 days ago
          I think you need to read on a bit further. The very next section after the sentence you quoted is:

          "We need to wrap this struct up into a higher-layer API that is safe for our users to call. As the driver author, we manually verify the unsafe code is correct, and then present a safe API for our users so they don't have to worry about it (provided they trust us to get it right!)."

          Rust lets you write clearly defined unsafe code blocks. It's not a bug, it's a feature:

          https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html

          https://doc.rust-lang.org/nomicon/index.html

          It's worth reading through the whole Embedded Rust book even if you won't be implementing such software. It's interesting.

          • hu3 1823 days ago
            Don't assume I haven't read it. I read the book.

            Wrapping unsafe code behind an API doesn't make it go away. It's still unsafe and needs manual checking.

            Also wrapping code with hints and annotations of some sort is a thing in numerous PL. It's a given, not a feature and certainly not a bug as you wrongly implied I stated.

            TLDR: it's still unsafe as the book and others pointed out.

            • clouddrover 1823 days ago
              Then you're misrepresenting what the book is saying, which is worse than not having read it.

              As an argument your position makes no sense. You might as well be arguing that there's no point in programming languages because sometimes you have to write assembly.

              • hu3 1821 days ago
                Since you replied with a personal attack and added nothing to the discussion:

                It's still unsafe, and if you think otherwise, provide an argument that refutes me, the book, and others on this thread instead of blindly dismissing us with "you're reading it wrong".

                Your very own copy-pasted paragraph from the book states that it's still unsafe and that it's left to the programmer to get it right.

    • tedunangst 1824 days ago
      By calling unsafe code. :) The semantics and guarantees offered by the interface vary, but that's the short version.
      • Nelson69 1824 days ago
        I have over 10 years of professional C++ and helped teach a course on it in college. One of my observations, and I may be off, is that a fair number of C++ users tend to take on a kind of macho attitude about it: it's a hammer for every nail, if you make certain mistakes you shouldn't use it or maybe be programming at all, garbage collection and other safety apparatus are kind of like training wheels while the "big boys" don't need that sort of thing.

        I'm being a little snarky here, but if you are truly a macho developer, then crapping out that unsafe code in optimized assembly or C or something is a really, really easy chance to show off and shouldn't be such a big deal for such a seasoned developer. Instead, the question is always raised in this more insecure way: for that tiny percentage of the time you have to do some bit-banging (and it's usually pretty small and encapsulated on most embedded projects), you might as well do the whole thing in C or C++.

        • hermitdev 1824 days ago
          I've over 15 years professional experience in C++, around 16 years in C#, and around 14 in Python, all overlapping. For me, it's about using the right tool for the job.

          It's not being a macho developer, but sometimes you need the hammer that C++ is, and you'll probably hit your thumb a few times using it.

      • User23 1824 days ago
        Often that unsafe code is assembly language.
        • AnimalMuppet 1824 days ago
          Been there, done that. We were using Pascal (standard Pascal, no Turbo extensions) on a machine with memory mapped I/O. This means that we had to drop into assembly to write to the I/O registers. And that meant that we lost all type checking for those functions. We couldn't even check the number of parameters! Unsurprisingly, that led to a crash.

          That is: The extreme safety of Pascal led to a crash (when we had to do something that Pascal, in its wisdom, said we weren't allowed to do).

        • snaky 1824 days ago
          Presuming it's not formally verified assembly code https://blogs.ncl.ac.uk/andreymokhov/spacecraft-control/
    • shakna 1824 days ago
      Tradeoffs.

      Ada has a history on microcontrollers, and is old enough we can answer with some certainty.

      You may end up having to bypass bottlenecks caused by runtime checks... And that means unsafe code in a critical location.

      Performance or safety. At different parts of your code you may find yourself choosing.

      • 0xDEEPFAC 1824 days ago
        Not really, once you are sure of your Ada code you may choose to disable all checks or selectively disable checks for your production code. Meaning you essentially have no penalty if you choose to go the performance route after significant testing
    • blihp 1824 days ago
      Historically speaking, you didn't. It is only relatively recently that microcontrollers have become powerful and spacious enough for that to be an option.
      • mokus 1824 days ago
        I remember a time when folks sighed wistfully at the idea of being able to afford wasting cycles and memory on C in embedded...
      • shakna 1824 days ago
        Depends what you mean by recent. Ada's been on microcontrollers for more than a couple decades.
    • scoutt 1823 days ago
      You can't, at least not safely.

      If you can write arbitrarily to any position in RAM then it's not a memory-safe language. You can hide accesses behind some kind of abstraction (unsafe blocks, libraries or whatever). But that's not a memory-safe language, it's the developer making a contract with himself, agreeing not to allow direct, straight accesses (like those precautions we C/C++ developers take to avoid "memory bugs"). Some microcontrollers can protect certain memory areas and raise access faults, but those are the same faults for a program written in C, C++, assembler, etc., and not related to the language.
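
      What I mean by direct, straight access, roughly (the address is made up):

          #include <cstdint>
          int main() {
              // A made-up register address on some bare-metal target. The language has no idea
              // whether this address exists, is mapped, or means what we think it means.
              auto* reg = reinterpret_cast<volatile std::uint32_t*>(0x40021018);
              *reg |= (1u << 5); // raw, unchecked write straight into the address space
          }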

      One can make the safest RTOS in trendy-safe-lang for some small microcontroller, but the end users (developers) would still be able to write some unsafe code and blow a fuse.

      A different thing is for MCUs with MMU and/or an OS that handles virtual memory per process/thread, but that wasn't your question.

    • ajxs 1824 days ago
      I feel this is quite a nuanced question, but I can answer one aspect of it. When I think of what 'memory safety' means in this context I think mainly of buffer overflows, dangling pointers, etc., things that have nothing really to do with memory-mapped IO. I can't speak for Rust, but I've recently been doing quite a bit of embedded development in Ada (disclaimer: not an expert) and it supports mapping variables onto arbitrary memory addresses through a variety of means that don't forgo static typing and compile-time checking. You can use access types, which are the closest thing to C's pointers; they allow for direct memory access while still preventing a good deal of the issues arising from misuse of pointers. I'd say it's well worth checking out for yourself.
    • jbarham 1824 days ago
      Use MicroPython (http://micropython.org/)
    • snaky 1824 days ago
  • systemBuilder 1824 days ago
    Many of the problems he talks about come from the lunacy of the C++ compiler making all sorts of temporaries and calling hidden type conversion functions, making all sorts of assumptions that it should never ever ever make without being told to by the programmer. That is why C will always be a better language than C++ on a fundamental level. In this area Stroustrup took C in a bad direction.
  • nitwit005 1824 days ago
    The string_view issue has popped up even in relatively safe languages. Java's String class used to do something similar, where substring returned a String that referenced the original String object's internal array to avoid a copy. They gave up on it because too many people accidentally held references to large strings and leaked memory that way.
    • kccqzy 1824 days ago
      As far as I know, this is rather popular. Haskell's ByteString and Text types still do the same. So does Rust, where most of the time you are very consciously borrowing the original string instead of making copies.
  • 0xe2-0x9a-0x9b 1823 days ago
    There is no call in the article for a deeper C++ code analysis by the compiler. Deeper analysis will be the future of C++ - the article fails to foresee this.
    • notacoward 1823 days ago
      > Deeper analysis will be the future of C++

      As if C++ compile times aren't crazy enough already.

  • shmerl 1824 days ago
    > Nonetheless, the question simply must be how we can accomplish it, rather than if we should try. Even with the most modern C++ idioms available, the evidence is clear that, at scale, it's simply not possible to hold C++ right.

    So, how then? That's the main question indeed :)

  • sys_64738 1824 days ago
    Today's C++ will be considered a cobbled together relic in a few C++ standards time periods!
    • ncmncm 1824 days ago
      By then Rust will also seem a cobbled-together relic, and you will be chasing the new hotness. In the meantime, we are writing the code that makes the world work. In C++.

      By then, many will also be writing it in Rust, and you will be sneering at them, too. It has always been easy to sneer at people busy making things work.

  • MiroF 1824 days ago
    The string_view example is surprising and certainly something I could have fallen for.

    I feel like the lambda example is pretty contrived. If I was returning a lambda that was capturing values by reference, I would already be pretty wary of UB.

    • umanwizard 1824 days ago
      I guess it comes down to the individual reader. I had to look at the lambda example several times to realize what the problem was with it. I guess my eyes just skim over the capture section unless I have some good reason to look at it. The string_view example, on the other hand, was immediately obviously wrong to me.
  • IshKebab 1824 days ago
    > Dereferencing a nullopt however, gives you an uninitialized value as a pointer, which can be a serious security issue.

    Is this really true? Surely it just gives you an uninitialised `int` (or whatever is in the `optional`)?
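
    For concreteness, the situation in question is something like this (a minimal sketch):

        #include <optional>
        int main() {
            std::optional<int> o; // empty: no int has been constructed inside
            int v = *o;           // operator* does no check; per the standard this is UB
            return v;
        }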

  • leshow 1824 days ago
    Rust and Swift have different definitions of memory safety, don't they?
    • 0815test 1824 days ago
      Yes, AIUI Swift does not ensure memory safety for concurrent code like Rust does. You have to expressly opt-in to concurrency-safety, and it's not checked by the compiler. Go definitely has this issue, which is admittedly bizarre for a language that's so often used to code network-oriented services making heavy use of concurrency.
      • favorited 1824 days ago
        That's because Swift doesn't have a first-class concurrency story yet. I imagine that concurrency safety will be sorted out when Swift gets concurrency, but in the meantime all Swift concurrency is using C primitives like pthreads and libdispatch.
        • leshow 1820 days ago
          That sounds like it's going to be a mess. If they introduce compile time checks that have the same strictness as Rust it will break existing code. It seems like Swift has already done that a few times over.
          • favorited 1817 days ago
            Using a new language concurrency feature would require you to change your code anyway, the same way using any new feature would. And when the Elm/Rust-style ownership annotations are made public, they will be opt-in.
      • mlindner 1824 days ago
        I find anyone using Go for networking code bizarre. It's bizarre to me that the language ever caught on, especially because all their design goals are explicitly the wrong goals. Their goal was to make a "simple like C" language, which simply disguises the complexity in writing software. Go simply punts complexity onto the technical debt of any project and assumes you will throw out your code after a year of using it.
        • AnimalMuppet 1824 days ago
          You are very much wrong, on several fronts.

          First, you obviously have some measuring stick for what the "right" language design goals should be. But what you don't seem to recognize is that other people can validly have other design goals. It's not "your way or they're wrong".

          In fact, given the decades of experience the designers of go have (and wide variety of languages that they have experience with), it's almost certain that you know far less than they do. And yet they still made different choices than you would. Instead of wondering how they could be so stupid, that should make you wonder what they knew that you don't.

          (I've seen some rants from people saying stuff like "they couldn't have made that design decision if they knew anything about Modula 2!" And they miss the talk by Rob Pike where he said (paraphrased) "don't think we're so smart for coming up with that object file format - we stole it from Modula 2". They knew it at a very deep level - almost certainly better than their critic did.)

          Then there's this:

          > Go simply punts complexity to technical debt of any project and assumes you will throw out your code after a year of using it.

          Go was designed for multi-million line code bases that live for decades. Really. Read Rob Pike's notes on Go's design.

          So, yeah, there's a lot about this rant that is factually off in the weeds...

  • sayusasugi 1824 days ago
    Any HN post mentioning C++ will inevitably be invaded by the Rust Evangelism Strikeforce.
    • wutbrodo 1824 days ago
      You picked a bizarre article to make that comment on...It's hardly irrelevant to the post, as the author's central thesis is that there's no case for choosing C++ over languages like Rust and Swift (he says as much in the article).
    • insulanian 1824 days ago
      And rightly so! What's wrong with spreading awareness about safer alternative? If that wasn't the case in the past, we'd still be programming in Cobol and Fortran.
      • pjmlp 1824 days ago
        On the contrary, we had plenty of safer alternatives for systems programming, derived from Algol and PL/I.

        Then came an OS, with a symbolic price instead of the typical market prices of competing OSes, alongside source code tapes, and a systems programming language that was the "JavaScript" of system languages.

        • 0815test 1824 days ago
          The biggest factor was that the OS was written to run on minicomputers as opposed to big iron, and was written in a portable language. Thus it could seamlessly jump over to micros (as soon as these became powerful enough, of course), and even later on to embedded and "wearable" compute. You just can't do that unless you're writing in a highly flexible and highly portable language - more like the FORTRAN of systems languages than anything like JavaScript!
          • pjmlp 1824 days ago
            Ironically, systems written for hardware 10 years older than the PDP-11, thus with fewer resources, were written in safer systems languages, go figure.
    • throwupaway123 1824 days ago
      The Rust Evangelism Strikeforce only exists in completely inane comments like yours, maybe instead of posting memes you comment on the actual content of the article not the headline?
  • fetbaffe 1824 days ago
    Without any data to back this up, my guess is that there is no good reason to pick C++ for a new project except when the developer is already fluent in C++.

    Assume we have this abstract developer that has a good knowledge in programming theory but has no experience in programming languages.

    The developer starts a new project, but in what language?

    web: Don't see any reason for this. Lots of great alternatives exist. This is not really one of C++'s strengths, so that's not that strange.

    desktop-GUI: Probably one of the biggest strengths of C++ is the Qt framework; it is a solid choice, so I can see this as a possible pick. However, with Electron dominating & PWAs becoming a more viable option, there is probably a much higher chance that an HTML/JS environment is picked instead, especially given how it already dominates the web. And by using TypeScript you can do it in a solid language.

    mobile apps: Most apps are written in some web technology or directly with Swift or Java. Qt has some support for this, but not sure how widely used. My experience with NDK was not pleasant. I can't really see this as a viable option.

    embedded: I don't do embedded, but my understanding is that plain C is much more common here & if faster development is needed you integrate something like Lua. Maybe?

    memory safe: use rust I guess.

    compiled binary: Use golang, no complicated buildstep.

    Parallelism: Better to use a language designed for this, like Erlang.

    Game development: For the majority of games today a scripting language like JavaScript or Lua is good enough. HTML/JS has some really good frameworks for game development today.

    3D game development: Probably a good fit to use C++, but I think that C# with Unity is a much better choice. Great framework, good community; however, C++ is not a bad choice for this. Possible.

    Commandline tool: If the developer is building the next grep, C++ could fit that, but most command-line tools do not have that performance requirement. Probably do some HTTP, JSON decoding, DB access. Bash is good enough, or any other dynamic language.

    Scientific: my understanding is that today this is mostly python or matlab. Maybe?

    System development (drivers etc): I know too little about this to make a good assessment, to be fair I put this as a possible choice.

    And if the developer do decides to use C++ for a new project, the initial cost is quite high to just understand the basics, even if he/she uses the latest C++ version. Copy constructors, lvalue & rvalue (xvalue, glvalue, prvalue...), move semantics, const refs, rvo, smart pointers, auto etc

    Any good arguments to pick C++ for a new project?

    • pjmlp 1824 days ago
      NVidia is designing their GPUs with C++ in mind as source language.

      desktop-GUI: I guess you might be joking here with Electron; I'd rather use my GPU for something other than blinking cursors. Even with Cocoa, UWP and WPF, the underlying UI shaders are written in C++.

      Embedded: Yes, C does rule over C++, which is a reason why embedded is so open to security exploits due to wrong manipulation of strings and arrays.

      Parallelism: HPC, FinTech, GPGPU are all domains where C++ rules for the time being.

      3D game development: C++ is king here, even with Unity the core engine is written in C++. Yes many of us hope to see the day when Unity is 100% written in a mix of C# and HPC#, but even then, LLVM will be part of the stack.

      Scientific: Someone needs to write those Fortran and C++ libs called by Python and Matlab.

      System development: Google, Apple and Microsoft use C++ on their driver layers for their respective OSes.

      IDE tooling: C++ is known for not having IDEs that match what Java/.NET are capable of. The languages that want to take C++'s place are even worse than C++ in IDE tooling.

      • fetbaffe 1824 days ago
        desktop-GUI: I'm not a fan of Electron either; however, the community does not seem to be joking about picking it. It seems like most projects are going in that direction. None of your arguments here are about new projects, just about existing technologies.

        Embedded: But how is that an argument for C++? I can do safe stuff in Lua without the hassle of C++ & then use C when needed.

        Parallelism: Agreed. Here we have a case.

        Scientific: Yes, someone needs to write the underlying libraries for Python & Matlab, but if you are starting a new project, do you actually start writing a library first or do you use an existing one?

        IDE tooling: Yes, good tooling can be an argument in itself for picking a technology. With C++, maturity is a clear advantage; however, some of the languages that I listed do have quite nice tooling today, e.g. C# & TypeScript.

        • pjmlp 1824 days ago
          desktop-GUI: What community? Web devs trying to write desktop apps?

          Embedded: Try writing a safe string and vector with bounds checking, or type-safe IO port access, in C the way the C++ type system allows (a minimal sketch follows below). Lua is nice for hobby projects, not production-class hardware deployments.
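
          To make that concrete, here is a minimal sketch of the kind of zero-overhead abstraction meant here; the class names, register address and layout are made up for illustration:

            #include <array>
            #include <cassert>
            #include <cstddef>
            #include <cstdint>

            // Minimal bounds-checked fixed-size buffer: the check is a single compare
            // and can be compiled out per build configuration if needed.
            template <typename T, std::size_t N>
            class checked_array {
            public:
                T& at(std::size_t i) {
                    assert(i < N && "checked_array index out of range");
                    return data_[i];
                }
                constexpr std::size_t size() const { return N; }
            private:
                std::array<T, N> data_{};
            };

            // Typed memory-mapped IO "port": the address is hypothetical, but the
            // pattern (volatile access wrapped behind a type) is the usual one.
            template <typename T, std::uintptr_t Address>
            struct io_register {
                static T read() {
                    return *reinterpret_cast<volatile T*>(Address);
                }
                static void write(T value) {
                    *reinterpret_cast<volatile T*>(Address) = value;
                }
            };

            using StatusReg = io_register<std::uint32_t, 0x40000000u>;  // made-up address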

          Scientific: Depends, many libraries are yet to be written.

          IDE tooling: The TypeScript and C# audience isn't the same as the one using raw C++.

          • fetbaffe 1824 days ago
            desktop-GUI: Yes, frontend today is a combination of web & apps, and C++ does not fit either. UWP apps can be written with JS or C#.

            Writing desktop GUIs is also on the decline; however, I could see a potential increase in desktop GUI programs if governments continue to pass bad regulations on the web, like content filters & link taxes.

            Embedded: How come C++ is not dominating embedded? According to you, it should.

            IDE tooling: If you are writing a 2D game or a desktop GUI, it is.

            Even though C++ is used as an important building block for other technologies, to survive C++ must attract a new generation of developers to continue carrying the torch & develop it further. Does it? I have seen little evidence of that. My impression is that developers learn C++ because of existing projects. Sure, there are lots of good C++ projects out there that will continue to attract developers & push the language further, but will it be enough? I'm not convinced.

            • pjmlp 1823 days ago
              desktop-GUI: Many UWP APIs are only accessible to C++/CX. Plus only UWP controls written in C++ are usable from Win32 side.

              I have spent the last 4 years doing green-field desktop GUIs; apparently those customers haven't gotten the news.

              HTML5 APIs still aren't a match for plenty of native APIs.

              Embedded: Religious hatred of C++ from old-timer devs, as discussed in several CppCon and Meeting C++ talks; e.g. Dan Saks has quite a few of them.

              CppCon 2019 will change venue, because they can no longer fit everyone at the old location.

    • guggle 1823 days ago
      > Any good arguments to pick C++ for a new project?

      Probably niche, but almost all audio DSP (VST & co.) uses C++; see the JUCE framework. Possibly other "multimedia" stuff too (video, image manipulation, etc.).

    • UsernameUsernam 1824 days ago
      Language VM:

      HotSpot, ART, V8, SpiderMonkey, Chakra, JSC, Dart, and CLR are all written in C++. Are there any modern serious language VMs that aren't written in C++?

      • fetbaffe 1823 days ago
        Lua, LuaJIT, PHP, Python, PyPy, Erlang
  • ycombonator 1824 days ago
    "Don't be clever". Yes. In the CppCon 2017 opening keynote: The Learning and Teaching Modern C++. The accompanying slide says "Don't be (too) clever" but I can't pronounce parentheses :-). My point was to discourage overly clever code because "clever code" is hard to write, easy to get wrong, harder to maintain, and often no faster than simpler alternatives because it can be hard to optimize. - Bjarne Stroustrup
  • foobar_ 1824 days ago
    Can virtualisation solve this? Is it possible to have a virtualised environment like Qubes, but for programs?
    • simias 1824 days ago
      Virtualization can help reduce the harm caused by a misbehaving program but it won't magically make the program behave correctly.

      Having a program cause a memory violation and be killed by the OS is the best possible outcome in this case, it stops the program from doing any damage and you get a clear symptom of the problem for debugging.

      It's when the issue is not that obvious that you're in real trouble because it may start behaving erratically, corrupt data and be exploited by malicious actors to get access to resources that shouldn't be exposed.
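
      A tiny sketch of that difference (made-up buffer and flag, purely illustrative): the same out-of-bounds write may either trap immediately or quietly overwrite a neighbouring object, depending on what happens to sit next to it in memory:

        #include <cstdio>

        int buf[4];
        int important_flag = 0;  // may happen to sit right after buf in memory

        // Writing one element past the end is undefined behaviour: the "lucky"
        // outcome is an immediate crash; the unlucky one is that important_flag
        // (or some other unrelated object) is silently corrupted instead.
        void store(int i, int v) {
            buf[i] = v;  // no bounds check; i == 4 goes past the end
        }

        int main() {
            store(4, 123);
            std::printf("important_flag = %d\n", important_flag);
            return 0;
        }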

    • jcelerier 1824 days ago
      yes, that's called a "process" in operating systems.
  • 781 1824 days ago
    There was an article recently about a behavior which, in a recent version of C++, was changed from defined to undefined, because making it undefined allowed for better compiler optimizations.

    I always thought that undefined behaviors were historical accidents. But apparently sometimes people just say "hey, let's add a few more undefined behaviors".

    This is the insanity of C++

    • taspeotis 1824 days ago
      I am happy to be proven wrong but I feel that they’d never change defined behavior to undefined. Unspecified to undefined sounds more likely.

      There’s this article [1] about compilers exploiting undefined behavior but ... it’s already undefined behavior.

      [1] https://devblogs.microsoft.com/oldnewthing/20140627-00/?p=63...

    • sbov 1824 days ago
      This sounds wrong. You're probably thinking of undefined behavior that happened to behave the same across all available C++ compilers, so people began to rely on it. But since it was technically undefined behavior, optimizers were free to take advantage of it, and when they started doing so, it smashed any code that relied on that undefined behavior behaving in a consistent way (a small illustrative sketch below).
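
      For example (a minimal sketch of the general pattern, not the specific change the parent comment mentions): signed integer overflow is UB, but for years it "worked" as two's-complement wraparound on every mainstream compiler, so people wrote overflow checks that modern optimizers are entitled to delete:

        #include <limits>

        // Looks like an overflow check, but signed overflow is undefined behavior,
        // so an optimizer may assume it never happens and fold this to `return true;`.
        // Unoptimized builds usually still show the wraparound people relied on.
        bool will_not_overflow(int x) {
            return x + 1 > x;
        }

        int main() {
            // With optimizations on, this commonly returns 0 even for INT_MAX.
            return will_not_overflow(std::numeric_limits<int>::max()) ? 0 : 1;
        }
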
      • zaphar 1824 days ago
        While technically correct, (which is the best kind of correct), the sad reality is that de-facto standards matter. Languages that understand this tend to be safer than languages that do not.
    • dman 1824 days ago
      Can you please find a reference? That sounds interesting.
      • tmyklebu 1823 days ago
        Not exactly what you asked for, but C11 added this bit that was not there in C99:

        An iteration statement whose controlling expression is not a constant expression, that performs no input/output operations, does not access volatile objects, and performs no synchronization or atomic operations in its body, controlling expression, or (in the case of a for statement) its expression, may be assumed by the implementation to terminate.

        And C++11 added this bit that wasn't there in C++03:

        A loop that, outside of the for-init-statement in the case of a for statement,
        - makes no calls to library I/O functions, and
        - does not access or modify volatile objects, and
        - performs no synchronization operations (1.10) or atomic operations (Clause 29)
        may be assumed by the implementation to terminate.

        [Note: This is intended to allow compiler transformations, such as removal of empty loops, even when termination cannot be proven. -- end note]
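
        Concretely, the C++11 wording (which, unlike the C11 one, has no carve-out for constant controlling expressions) lets an implementation delete a side-effect-free infinite loop outright; a minimal sketch:

          #include <atomic>

          // No I/O, no volatile access, no atomics, no synchronization: under the
          // quoted rule the implementation may assume this loop terminates, so it
          // is allowed to remove it, and callers expecting hang() never to return
          // may simply fall through.
          void hang() {
              while (true) {
              }
          }

          // The listed escape hatches: touching a volatile or atomic object (or
          // doing I/O / synchronization) keeps the loop observable and preserved.
          std::atomic<bool> stop{false};

          void spin_until_stopped() {
              while (!stop.load(std::memory_order_relaxed)) {
              }
          }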

    • systemBuilder 1824 days ago
      In the article, the compiler noticed that the loop body read the array at a[i+1] ... under the assumption that the program never indexes beyond the end of the array, it concluded the loop variable was always less than the limit and therefore turned the loop into an infinite loop. This is the insanity of some of the newer compilers, which assume your program never performs undefined behavior: they compile your mildly buggy, slightly undefined code into sh*t.
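
      I don't have the article at hand, but the pattern it describes looks roughly like this sketch (array size and names made up): because reading a[i + 1] past the end would be undefined, the optimizer may assume i + 1 < N on every executed iteration, which means the exit condition can never become false and the loop may be compiled as if it were infinite:

        int a[16];

        int sum_pairs() {
            int total = 0;
            // Off-by-one: when i == 15 the body reads a[16], one past the end.
            // Since that read would be UB, the compiler may assume i + 1 < 16,
            // i.e. i <= 14, on every iteration; under that assumption the exit
            // test i < 16 can never fail, so the loop may become infinite (or
            // the bound check may be dropped entirely).
            for (int i = 0; i < 16; ++i) {
                total += a[i] + a[i + 1];
            }
            return total;
        }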