The Defold engine code style

(defold.com)

33 points | by josemwarrior 1425 days ago

2 comments

  • Hokusai 1424 days ago
    > Someone once told me that pointers are dangerous, and I frankly don’t understand that.

    Lack of clear ownership is one of the dangerous things. As the article itself puts it: “Not only that each weak_ptr.lock() was very expensive, it was also symptom of unclear ownership of the data.”

    Smart pointers are good at declaring memory ownership.
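
    A minimal sketch of what “declaring ownership” looks like in practice. The `Texture`/`Material` names are mine for illustration, not anything from the Defold codebase: `unique_ptr` states sole ownership, `weak_ptr` states observation without ownership, and `lock()` is the call the article flags as expensive when overused.

    ```cpp
    #include <memory>

    struct Texture { int id; };

    // Sole ownership: the caller owns the texture and it is freed automatically.
    std::unique_ptr<Texture> make_texture(int id) {
        return std::make_unique<Texture>(Texture{id});
    }

    // A non-owning observer: the material does not keep the texture alive.
    struct Material {
        std::weak_ptr<Texture> texture;
    };

    bool texture_alive(const Material& m) {
        // lock() atomically checks liveness and promotes to a shared_ptr;
        // this is the per-call cost the article complains about.
        return m.texture.lock() != nullptr;
    }
    ```

    The point is that the types themselves document who frees what, which a raw pointer cannot do.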

    > All those tiny allocations that the stl containers made, were like a death by a thousand needles.

    A possible solution is the use of custom allocators.
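
    As a sketch of that solution, a toy bump (“arena”) allocator plugged into `std::vector`: the thousand tiny heap calls become one preallocated block. This is illustrative only (alignment handling is omitted for brevity), not code from any real engine.

    ```cpp
    #include <cstddef>
    #include <new>
    #include <vector>

    // Toy bump allocator: every container allocation is carved out of one
    // preallocated buffer; individual frees are no-ops and the whole arena
    // is reclaimed at once. Alignment is ignored for brevity.
    template <typename T>
    struct ArenaAllocator {
        using value_type = T;

        std::byte* buffer;
        std::size_t capacity;
        std::size_t* offset; // shared bump cursor into the buffer

        ArenaAllocator(std::byte* buf, std::size_t cap, std::size_t* off)
            : buffer(buf), capacity(cap), offset(off) {}

        template <typename U>
        ArenaAllocator(const ArenaAllocator<U>& o)
            : buffer(o.buffer), capacity(o.capacity), offset(o.offset) {}

        T* allocate(std::size_t n) {
            std::size_t bytes = n * sizeof(T);
            if (*offset + bytes > capacity) throw std::bad_alloc{};
            T* p = reinterpret_cast<T*>(buffer + *offset);
            *offset += bytes;
            return p;
        }

        void deallocate(T*, std::size_t) {} // arena is freed all at once
    };

    template <typename T, typename U>
    bool operator==(const ArenaAllocator<T>& a, const ArenaAllocator<U>& b) {
        return a.buffer == b.buffer;
    }
    template <typename T, typename U>
    bool operator!=(const ArenaAllocator<T>& a, const ArenaAllocator<U>& b) {
        return !(a == b);
    }
    ```

    With this, `std::vector<int, ArenaAllocator<int>>` keeps the familiar STL interface while all its growth happens inside the arena.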

    > until they were replaced with something more fitting for the tasks.

    This comment is key. Too often I have seen C++ developers critical of classes, STL, and other C++ features because engine Y or Z does not use them, when they are actually the best option for a particular project.

    I appreciate that the comment is present.

    > Often you hear people say that writing your own containers would introduce bugs, and that you should stick with stl. I find this problematic, since this is coming from engineers that should know quite well how to write an array/hashtable container.

    But this is quite a stupid comment. E.g.: https://forum.defold.com/t/scaling-collision-object-when-gam...

    That child objects need to scale with their parent is something that all engines do, but I have had to fix this kind of scaling bug in the past in a custom version of the Irrlicht engine.

    Should engineers know better? Maybe, but good engineers understand that errors can happen and put the necessary measures in place to avoid or mitigate them. A useful one is to use already-proven code.

    Was an error in their custom-made containers the cause of the scaling bug when looping over an object's children? We will never know.

    • beetwenty 1424 days ago
      That's a data synchronization error across multiple related pieces of data, which isn't the same as a POD container like a hashtable corrupting itself.

      The standard hammer you would apply to enforce the synchronization in all cases is relational integrity, which is too expensive for a game's runtime environment. You don't always want to synchronize everything all of the time if you want to hit a high framerate target, and a lot of performance features boil down to relaxations on when synchronization occurs. Much of the detailed design in writing a game main loop is in dealing with the many consequences of supporting that.
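
      The relaxation can be as simple as a dirty flag: writes only mark derived data stale, and one pass per frame re-derives it instead of enforcing consistency on every mutation. A minimal 1-D sketch (the names and layout are mine, purely illustrative):

      ```cpp
      #include <vector>

      // Dirty-flag sketch: instead of the "relational integrity" hammer
      // (keeping world transforms consistent on every write), mutations
      // just mark the scene dirty and one pass per frame re-derives them.
      struct Node {
          float local = 0.0f;  // position relative to parent (1-D for brevity)
          float world = 0.0f;  // derived; only valid after update_world()
          int parent = -1;     // index into the node array, -1 = root
      };

      struct Scene {
          std::vector<Node> nodes; // invariant: parents stored before children
          bool dirty = false;

          void set_local(int i, float v) {
              nodes[i].local = v;
              dirty = true;        // defer the expensive part
          }

          void update_world() {    // run once per frame, not once per write
              if (!dirty) return;
              for (auto& n : nodes)
                  n.world = n.local + (n.parent >= 0 ? nodes[n.parent].world : 0.0f);
              dirty = false;
          }
      };
      ```

      Between `set_local` and `update_world` the data is deliberately inconsistent; that window is exactly the performance relaxation described above.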

      That's why their recommendations on errors also refer to the earlier build process and Lua integration; by the time the data hits the inner loops of the engine, there shouldn't be a case where it's invalid, because if it is, then you can't have the optimized version either.

    • polityagent 1424 days ago
      Am I mistaken in thinking that the issue was scaling not supported by the box2d physics library and that they needed a custom workaround in their engine?
      • SvenAndersson 1423 days ago
        You are not mistaken (if I remember correctly; full disclosure: I used to be an engine dev on Defold).

        Not sure why Hokusai tried to point to that forum thread in this case; it shouldn't have anything to do with his comment at all. I guess we will never know. :)

  • jdashg 1424 days ago
    Many of these feel like throwing the baby out with the bathwater, where, out of fear of misuse, tools are thrown away.

    For example, while `std::string` does have issues, the number of allocations is more indicative of poor usage than of an issue with the class itself.
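
    To illustrate: the allocation count is often decided at the call site, not inside the class. A hypothetical join helper, naive versus reserved (these functions are mine, not from any cited codebase):

    ```cpp
    #include <string>
    #include <vector>

    // Naive: each += may trigger a reallocation as the string grows.
    std::string join_naive(const std::vector<std::string>& parts) {
        std::string out;
        for (const auto& p : parts) out += p;
        return out;
    }

    // Reserved: measure first, allocate once, then append in place.
    std::string join_reserved(const std::vector<std::string>& parts) {
        std::size_t total = 0;
        for (const auto& p : parts) total += p.size();
        std::string out;
        out.reserve(total); // a single allocation for the final size
        for (const auto& p : parts) out += p;
        return out;
    }
    ```

    Both produce identical results; only the allocation pattern differs, which is the caller's choice rather than the class's fault.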

    Other design choices are fine per se, but the reasons given are strange:

    > And, since we don’t use classes or inheritance, we don’t have any need for RTTI.

    > It also has another benefit; since all data is already verified to be ok by the data pipeline, you don’t need C++ exceptions in the engine.

    Tons of projects (Chrome and Firefox to name two) build without RTTI or exceptions, but with extensive classes, extensive inheritance, and extraordinarily extensive validation code.

    Many of these issues may be due to the age of the project, as much of the thinking seems, indeed, to be stuck about 10 years in the past. It does not match my experience with modern C++ in similar fields.

    • beetwenty 1424 days ago
      std::string: half of all allocations in the Chrome browser process (2014) [0]

      The key difference is that web browsers support a highly arbitrary and mutable dataset. A game engine's assets exist in a mostly-static space. The things that are allocated at runtime are things that should have a known maximum, because going over that maximum will start to overrun latency targets. Many assets are streamed, but still hit a certain size and bandwidth budget, and so are still "basically static" - some degree of compilation and configuration at runtime always takes place for rendering features. The genuinely mutable part of game state while playing is in a comparatively confined space, and that allows a lot to be pushed to build time, where it's easier to validate and to maintain.

      The features that change this picture are editors and arbitrary data imports. Web browsers are all about these two things. When you click a link the document may load thousands of gigantic images, and I might try to copy-paste the entire contents of Wikipedia into a text box. The engineering requirements are much broader as a consequence, and there are more rationales to need genuine "black box" interfaces supporting a complex protocol, as opposed to a static "calling function switches on a specified enum" approach, which is sufficient for almost every dynamic behavior encountered in game engines.
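
      The “calling function switches on a specified enum” style can be sketched like this (hypothetical component names, not Defold's actual API), in contrast to a virtual black-box interface:

      ```cpp
      // Enum-switch dispatch: the full set of behaviors is a closed set
      // known at build time, so no virtual calls or RTTI are needed.
      enum class ComponentType { Sprite, Sound };

      struct Component {
          ComponentType type;
          int payload; // stand-in for component-specific data
      };

      int update(const Component& c) {
          switch (c.type) {
              case ComponentType::Sprite:
                  return c.payload * 2; // placeholder "sprite" behavior
              case ComponentType::Sound:
                  return c.payload + 1; // placeholder "sound" behavior
          }
          return 0;
      }
      ```

      The trade-off is exactly the one described above: this is sufficient when the set of dynamic behaviors is fixed, and breaks down when arbitrary new implementations must plug in behind an interface.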

      [0] https://news.ycombinator.com/item?id=8704318