I have complicated feelings about TDD

(buttondown.email)

323 points | by jwdunne 587 days ago

100 comments

  • PheonixPharts 587 days ago
    The trouble with TDD is that quite often we don't really know how our programs are going to work when we start writing them, and often make design choices iteratively as we start to realize how our software should behave.

    This ultimately means what most programmers intuitively know: it's impossible to write adequate test coverage up front (since we don't even really know how we want the program to behave), or worse, test coverage gets in the way of the iterative design process. In theory TDD should work as part of that iterative design, but in practice it means a growing collection of broken tests and tests for parts of the program that end up being completely irrelevant.

    The obvious exception to this, where I still use TDD, is when implementing a well-defined spec. Anytime you need to build a library to match an existing protocol, a well-documented API, or even a non-trivial mathematical function, TDD is a tremendous boon. But this is only because the program behavior is well defined.

    The times where I've used TDD and it made sense, it's been a tremendous productivity increase. If you're implementing some standard, you can basically write the tests to confirm you understand how the protocol/api/function works.

    Unfortunately most software is just not well defined up front.
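
    When the behavior is well defined, though, the tests really can come straight from the spec. A minimal sketch of what that looks like (Python with pytest; the stdlib base64.b64encode stands in for the encoder you'd actually grow red-green against):

      import base64

      import pytest

      # RFC 4648 section 10 test vectors: the spec doubles as the test
      # suite, so all of this can exist before a line of the encoder does.
      VECTORS = [
          (b"", b""),
          (b"f", b"Zg=="),
          (b"fo", b"Zm8="),
          (b"foo", b"Zm9v"),
          (b"foob", b"Zm9vYg=="),
          (b"fooba", b"Zm9vYmE="),
          (b"foobar", b"Zm9vYmFy"),
      ]

      @pytest.mark.parametrize("raw,encoded", VECTORS)
      def test_encode_matches_rfc4648(raw, encoded):
          # stand-in for the implementation under development; in real use
          # this would call your own encoder, written after these tests fail
          assert base64.b64encode(raw) == encoded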

    • happytoexplain 587 days ago
      >Unfortunately most software is just not well defined up front.

      This is exactly how I feel about TDD, but it always feels like you're not supposed to say it. Even in environments where features are described, planned, designed, refined, written as ACs, and then developed, there are still almost always pivots made or holes filled in mid-implementation. I feel like TDD is not for the vast majority of software in practice - it seems more like something useful for highly specialist contexts with extremely well defined objective requirements that are made by, for, and under engineers, not business partners or consumers.

      • ethbr0 587 days ago
        I forget which famous Unix personality the quote / story comes from, but it amounts to "The perfect program is the one you write after you finish the first version, throw it in the garbage, and then handle in the rewrite all the things you didn't know that you didn't know."

        That rings true to my experience, and TDD doesn't add much to that process.

        • michaelchisari 587 days ago
          Screenwriters encourage a "vomit draft" -- A draft that is not supposed to be good but just needs to exist to get all the necessary parts on the page. Then a writer can choose to either fix or rewrite, but having a complete presentation of the story is an important first step.

          I've advocated the same for early projects or new features. Dump something bad and flawed and inefficient, but which still accomplishes what you want to accomplish. Do it as fast as possible. This is your vomit draft.

          I strongly believe that the amount a team could learn from this would be invaluable and would speed up the development process, even if every single line of code had to be scrapped and rebuilt from scratch.

          • BearfootCoder 587 days ago
            The idea of the vomit draft works for narrative text because it's aimed at human consumers, and humans are very adaptable when it comes to accepting input. We can absorb a whole bunch of incoherent, inconsistent content and still find the parts in it which make sense and do useful and interesting things.

            An executing program is a lot less forgiving, for obvious and unavoidable reasons.

            What TDD brings to the table when you are building a throwaway version is that it helps to identify and deal with the things which are pure implementation errors (failure to handle missing inputs, format problems, regression failures and incompatibilities between different functional requirements). In some cases it can speed up the delivery of a working prototype, or at least reduce the chance that the first action that the first non-developer user of your system does causes the whole application to crash.

            Genuine usability failures, performance issues and failure to actually do what the user wanted will not get caught by TDD, but putting automated tests in early means that the use of the prototype as a way of revealing unavoidable bugs is not undermined by the existence of perfectly avoidable ones. It may also make it easier to iterate on the functionality in the prototype by catching regressions during the change cycle, although I'll admit that the existence of lots of tests here may well be a double-edged sword. It very much depends on how much the prototype changes during the iterative phase, before the whole thing gets thrown away and rebuilt.

            And, when you come to building your non-throwaway version, the suite of tests you wrote for your prototype gives you a checklist of things you want to be testing in the next iteration, even if you can't use the code directly. And it seems likely enough that at least some of your old test code can be repurposed more easily than writing it all from scratch.

          • lodovic 587 days ago
            This is how I write documents - start with the table of contents and fill in the blanks until the document is done.

            Isn't there some law as well that states that successful software projects start with a working prototype, and software designed from the ground up is destined to fail?

          • spitfire 587 days ago
            I'm using "vomit draft" from now on.
        • ppseafield 587 days ago
          Reminds me of the chapter "Plan to Throw One Away" from The Mythical Man Month.

          > In most projects, the first system built is barely usable. It may be too slow, too big, awkward to use, or all three. There is no alternative but to start again, smarting but smarter, and build a redesigned version in which these problems are solved. The discard and redesign may be done in one lump, or it may be done piece-by-piece. But all large-system experience shows that it will be done. Where a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time.

        • sdevonoes 587 days ago
          Exactly. I write my programs/systems a few times. Each time I discard the previous version and start from scratch. I end up writing code that's easy to test and easy to swap parts if needed. I also know what TDD brings to the table. On top of that I have over 15 years of professional experience... so I usually know how to write software that complies with what we usually call "good code", so TDD offers me zero help.

          For more junior engineers, I think TDD can help, but once you "master" TDD, you can throw it out of the window: "clean code" will come out naturally without having to write tests first.

          • dfinninger 587 days ago
            I was taken by TDD for about six months, years ago. I always felt that I was never good at it because edge cases I hadn’t thought of came up or the interface wasn’t actually the best/cleanest/most maintainable way to write that code.

            But it got me thinking about testing while writing the class instead of shoehorning a class into a test just before the PR went up. That’s what I think my takeaway was. To this day I think about not just how clean/maintainable the code is, but also how testable the code is while I am writing it. It really helps keep monkeypatching and mocking down.

        • jonstewart 587 days ago
          Ah, but it's the _third_ version, because of Second System Effect. So, really, plan to throw two away. https://en.wikipedia.org/wiki/Second-system_effect
          • kastagg 587 days ago
            If you know you're throwing two away, you won't even really start in earnest until the third one.
            • rileymat2 587 days ago
              This is one of the better points I have ever seen made, bravo. So true.
        • pez_dev 587 days ago
          I’ve heard something similar once, but I don’t remember where:

          “Write it three times. The first time to understand the problem, the second time to understand the solution and the third time to ship it.”

        • lkrubner 587 days ago
          Whoever said that specific quote, it is a paraphrase of a point that Alan Kay has been making since the late 1970s. His speeches, in which he argued in favor of Smalltalk and dynamic programming, make the point over and over again. I believe he said almost exactly the words you are quoting.
          • ethbr0 587 days ago
            That was my paraphrase. From fuzzy memory, don't think it was Alan Kay, but I'm sure the approach is common with that crowd.

            The only other clue I have is that it was apparently someone who was super productive and wrote a ton of the early common Unix tools.

        • RexM 587 days ago
          I'm almost positive Joe Armstrong has some version of this quote. I couldn't find it, though.
        • intelVISA 587 days ago
          I wish more places followed this.

          Doesn't matter how good you are: the v1 of a program in an unknown area is always complete crap, but it lets you write an amazing v2 if you paid attention while making v1.

          • andy_ppp 587 days ago
            I think this is what the best programmers do, they are always rewriting things…
        • dgb23 587 days ago
          Well except for the second step where you aim precisely.
        • roeles 586 days ago
          I've heard Sape Mullender from Bell Labs (and Plan 9) say it during classes he taught.
        • FridgeSeal 587 days ago
          Normalise early-stage re-writes.

          (Only slightly joking)

      • gofreddygo 585 days ago
        Yes, it's not well defined, neither before nor after implementing. I've made peace with accepting it never will be.

        An implementation without definition, and a whole host of assumptions gets delivered as v1.

        Product expectations get lowered, bugs and defects raised, implementation is monkey patched as v2.

        devs quit, managers get promoted, new managers hire new devs, they ask for the definition and they're asked to follow some flavor of the year process (TDD, Agile, whatever).... rinse and repeat v3.

        Sad. True. Helpless. Hopeless.

      • 8n4vidtmkvmk 587 days ago
        It doesn't matter how well your app is designed; your UX designer is not going to tell you you need a function that does X. You just build something that looks like the thing they want, and then write some tests to make sure someone doesn't break that thing. If you have to write a dozen functions to create it and they're testable, then you test them, but you don't say "oh no, I can't write a 13th function now because that wasn't part of the preordained plan."
    • theptip 587 days ago
      I think part of what you are getting at here also points to differences in what people mean by “unit test”.

      It’s always possible to write a test case that covers a new high-level functional requirement as you understand it; part of the skill of test-first (disclaimer - I use this approach sometimes but not religiously and don’t consider myself a master at this) is identifying the best next test to write.

      But a lot of people cast “unit test” as “test for each method on a class” which is too low-level and coupled to the implementation; if you are writing those sort of UTs then in some sense you are doing gradient descent with a too-small step size. There is no appreciable gradient to move down; adding a new test for a small method doesn’t always get you closer to adding the next meaningful bit of functionality.

      Where I have done best with TDD is when I start with what most would call "functional tests" and test the behaviors, which is isomorphic to the design process of working with stakeholders to think through all the ways the product should react to inputs.

      I think the early TDD guys like Kent Beck probably assumed you are sitting next to a stakeholder so that you can rapidly iterate on those business/product/domain questions as you proceed. There is no “upfront spec” in agile, the process of growing an implementation leads you to the next product question to ask.
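
      To make the contrast concrete, here's a minimal sketch (Python; Cart, SAVE10 and the method names are all invented for illustration). The behavior-level test reads like the stakeholder requirement ("a discount code reduces the checkout price") rather than pinning any one method, so it survives refactoring of the internals:

        from dataclasses import dataclass, field

        @dataclass
        class Cart:
            items: list = field(default_factory=list)
            discount: float = 0.0

            def add_item(self, name, price, quantity=1):
                self.items.append((name, price, quantity))

            def apply_discount_code(self, code):
                # invented rule table; a real lookup would live elsewhere
                self.discount = {"SAVE10": 0.10}.get(code, 0.0)

            def checkout_total(self):
                subtotal = sum(price * qty for _, price, qty in self.items)
                return round(subtotal * (1 - self.discount))

        # Behavior-level test: reads like the product requirement itself,
        # and keeps passing however the internals get refactored.
        def test_discount_code_reduces_checkout_price():
            cart = Cart()
            cart.add_item("book", price=20_00, quantity=2)
            cart.apply_discount_code("SAVE10")
            assert cart.checkout_total() == 36_00

      A "test per method" suite would also pin down add_item and apply_discount_code individually, coupling the tests to the current decomposition for no extra confidence.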

      • merlincorey 587 days ago
        > But a lot of people cast “unit test” as “test for each method on a class” which is too low-level and coupled to the implementation; if you are writing those sort of UTs then in some sense you are doing gradient descent with a too-small step size. There is no appreciable gradient to move down; adding a new test for a small method doesn’t always get you closer to adding the next meaningful bit of functionality.

        In my experience, the best time to do "test for each method on a class" or "test for each function in a module" is when the component in question is a low level component in the system that must be relied upon for correctness by higher level parts of the system.

        Similarly, in my experience, it is often a waste of effort and time to do such thorough low level unit testing on higher level components composed of multiple lower level components. In those cases, I find it's much better to write unit tests at the highest level possible (i.e. checking `module.top_level_super_function()` inputs produce expected outputs or side effects)

        • remexre 587 days ago
          > is when the component in question is a low level component in the system that must be relied upon for correctness by higher level parts of the system.

          And then, property tests are more likely to be possible, and IMO should be preferred!
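
          For example, a round-trip property (a sketch assuming Python with the Hypothesis library):

            import base64

            from hypothesis import given
            from hypothesis import strategies as st

            # One property subsumes a pile of hand-picked examples: for any
            # byte string, decoding the encoding must return the original.
            @given(st.binary())
            def test_base64_roundtrip(data):
                assert base64.b64decode(base64.b64encode(data)) == data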

          • ENGNR 587 days ago
            TTRD

            test driven re-design?

            • knicholes 587 days ago
              TDRD
              • dwattttt 587 days ago
                The first acronym cleverly demonstrates the redesign
      • commandlinefan 587 days ago
        > a lot of people cast “unit test” as “test for each method on a class” which is too low-level

        Definitely agree with you here - I've seen people dogmatically write unit tests for getter and setter methods at which point I have a hard time believing they're not just fucking with me. However, there's a "sweet spot" in between writing unit tests on every single function and writing "unit tests" that don't run without a live database and a few configuration files in specific locations, which (in my experience) is more common when you ask a mediocre programmer to try to write some tests.

        • icedchai 587 days ago
          I'm having flashbacks to a previous workplace. I was literally asked to write unit tests for getters and setters. I complained they were used elsewhere in the code, and therefore tested indirectly anyway. Nope, my PR would not be "approved" until I tested every getter and setter. I think I lasted there about 6 months.
      • mkl95 587 days ago
        > But a lot of people cast “unit test” as “test for each method on a class” which is too low-level and coupled to the implementation;

        Those tests suit a project that applies the open-closed principle strictly, such as libraries / packages that will rarely be modified directly and will mostly be used by "clients" as their building blocks.

        They don't suit a spaghetti monolith with dozens of leaky APIs that change on every sprint.

        The harsh truth is that in the industry you are more likely to work with spaghetti code than with stable packages. "TDD done right" is a pipe dream for the average engineer.

      • slevcom 587 days ago
        This is well said.

        I always suspect that many people who have a hard time relating to TDD already have experience writing these class & method oriented tests. So they understandably struggle with trying to figure out how to write them before writing the code.

        Thinking about tests in terms of product features is how it clicked for me.

        That being said, as another poster above mentioned, using TDD for unstable or exploratory features is often unproductive. But that’s because tests for uncertain features are often unproductive, regardless if you wrote them before or after.

        I once spent months trying to invent a new product using TDD. I was constantly deleting tests because I was constantly changing features. Even worse, I found myself resisting changing features that needed changing because I was attached to the work I had done to test them. I eventually gave up.

        I still use TDD all the time, but not when I’m exploring new ideas.

        • majikandy 587 days ago
          I do the same. But almost always wish I had done TDD. The times I can bring myself to git reset --hard after I have finished exploring and then TDD it in, the code benefits. Often though I can't bring myself to do it and I retrofit a few tests and reorder the commit history :)
      • P5fRxh5kUvp2th 587 days ago
        The above poster used 'TDD', not 'unit test'; they are not the same thing.

        You can (and often should!) have a suite of unit tests, but you can choose to write them after the fact, and after the fact means after most of the exploration is done.

        I think if most people stopped thinking of unit tests as a correctness mechanism and instead thought of them as a regression mechanism unit tests as a whole would be a lot better off.

        • adhesive_wombat 587 days ago
          Also, as a dependency canary: when your low-level object tests start demanding access to databases and config files and networking, it's time for a think.

          Also a passing unit test always provides up-to-date implicit documentation on how to use the tested code.

        • geophile 587 days ago
          None of this either/or reasoning is correct, in my experience. In practice, I write tests both before and after implementation, for different reasons. In practice, my tests both test correctness, and of course they also work as regression tests.

          Writing before the fact allows you to test your mental model of the interface, unspoiled by having the implementation fresh in your mind. (Not entirely, since you probably have some implementation ideas very early on.)

          Writing tests after the fact is what you must do to explore 1) weak points that occur to you as you implement, and 2) bugs. After-the-fact testing also allows you to home in on vagueness in the spec, which may show up as (1) or (2).

          • P5fRxh5kUvp2th 587 days ago
            There's nothing binary about the observation that unit test and TDD are different sets.
        • savolai 587 days ago
          Hi. I got curious. Do you mean "regression detection mechanism"? Could you elaborate? Thanks.
    • generalk 587 days ago
      +1 on "well defined spec" -- a lot of Healthcare integrations are specified as "here's the requests, ensure your system responds like this" and being able to put those in a test suite and know where you're at is invaluable!

      But TDD is fantastic for growing software as well! I managed to save an otherwise doomed project by rigorously sticking to TDD (and its close cousin Behavior Driven Development.)

      It sounds like you're expecting that the entire test suite ought to be written up front? The way I've had success is to write a single test, watch it fail, fix the failure as quickly as possible, repeat, and then once the test passes fix up whatever junk I wrote so I don't hate it in a month. Red, Green, Refactor.

      If you combine that with frequent stakeholder review, you're golden. This way you're never sitting on a huge pile of unimplemented tests; nor are you writing tests for parts of the software you don't need. For example from that project: week one was the core business logic setup. Normally I'd have dove into users/permissions, soft deletes, auditing, all that as part of basic setup. But this way, I started with basic tests: "If I go to this page I should see these details;" "If I click this button the status should update to Complete." Nowhere do those tests ask about users, so we don't have them. Focus remains on what we told people we'd have done.

      I know not everyone works that way, but damn if the results didn't make me a firm believer.
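
      Concretely, the first cycle of that loop might look like this (a Python sketch; Task and mark_complete are invented names, not from the actual project):

        # Red: written first; fails because Task doesn't exist yet.
        def test_clicking_done_sets_status_to_complete():
            task = Task(status="In Progress")
            task.mark_complete()               # the "button click" handler
            assert task.status == "Complete"

        # Green: the smallest implementation that passes. No users, no
        # permissions, no audit log -- nothing a test hasn't asked for.
        class Task:
            def __init__(self, status):
                self.status = status

            def mark_complete(self):
                self.status = "Complete"

        # Refactor: with the test green, clean up before the next test.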

      • wenc 587 days ago
        The problem I’ve run into is that when you’re iterating fast, writing code takes double the time when you also have to write the tests.

        Unit tests are still easy to write, but most complex software has many parts that combine combinatorially, and writing integration tests requires lots of mocking. This investment pays off when the design is stable, but when business requirements are not that stable this becomes very expensive.

        Some tests are actually very hard to write — I once led a project where the code had both cloud and on-prem API calls (and called Twilio). Some of those environments were outside our control, but we still had to make sure we handled their failure modes. The testing code was very difficult to write, and I wished we'd waited until we stabilized the code before attempting to test. There were too many rabbit holes that we naturally got rid of as we iterated, and testing was like a ball and chain that made everything super laborious.

        TDD also represents a kind of first order thinking that assumes that if the individual parts are correct, the whole will likely be correct. It’s not wrong but it’s also very expensive to achieve. Software does have higher order effects.

        It’s like the old car analogy. American car makers used to believe that if you QC every part and make unit tolerances tight, you’ll get a good car on final assembly (unit tests). This is true if you can get it right all the time but it made US car manufacturing very expensive because it required perfection at every step.

        Ironically Japanese carmakers eschewed this and allowed loose unit tolerances, but made sure the final build tolerance worked even when the individual unit tolerances had variation. They found this made manufacturing less expensive and still produced very high quality (arguably higher quality since the assembly was rigid where it had to be, and flexible where it had to be). This is craftsman thinking vs strict precision thinking.

        This method is called “functional build” and Ford was the first US carmaker to adopt it. It eventually came to be adopted by all car makers.

        https://www.gardnerweb.com/articles/building-better-vehicles...

        • bostik 587 days ago
          > Some tests are actually very hard to write — I once led a project where the code had both cloud and on-prem API calls

          I believe that this is a fundamental problem of testing in all distributed systems: you are trying to test and validate for emergent behaviour. The other term we have for such systems is: chaotic. Good luck with that.

          In fact, I have begun to suspect that the way we even think about software testing is backwards. Instead of test scenarios we should be thinking in failure scenarios - and try to subject our software to as many of those as possible. Define the bounding box of the failure universe, and allow the computer to generate the testing scenarios within. EXPECT that all software within will eventually fail, but as long as it survives beyond set thresholds, it gets a green light.

          In a way... we'd need something like a bastard hybrid of fuzzing, chaos testing, soak testing, SRE principles and probabilistic outcomes.
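
          A toy version of that idea (a Python sketch; fetch_with_retry is a made-up unit under test): seed a random "failure universe", inject faults, and go green on a survival threshold rather than on perfection.

            import random

            def fetch_with_retry(do_fetch, attempts=5):
                # hypothetical unit under test: retry a flaky call a few times
                for _ in range(attempts):
                    try:
                        return do_fetch()
                    except ConnectionError:
                        pass
                raise ConnectionError("all retries failed")

            def test_survives_the_failure_universe():
                rng = random.Random(42)  # deterministic "chaos"

                def flaky_fetch():
                    if rng.random() < 0.5:  # bounding box: 50% transient faults
                        raise ConnectionError
                    return "ok"

                def attempt():
                    try:
                        return fetch_with_retry(flaky_fetch) == "ok"
                    except ConnectionError:
                        return False

                survived = sum(attempt() for _ in range(1000))
                assert survived >= 950  # survival threshold, not perfection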

          • steve_gh 587 days ago
            >I believe that this is a fundamental problem of testing in all distributed systems: you are trying to test and validate for emergent behaviour. The other term we have for such systems is: chaotic. Good luck with that

            Emergent behaviour is complex, not chaotic. Chaos comes from sensitive dependence on initial conditions. Complexity is associated with non-ergodic statistics (i.e. sampling across time gives different results to sampling across space).

            • bostik 587 days ago
              Thank you for the correction. And indeed, "complex" would have been the right term. My bad.
          • throwawaymaths 587 days ago
            I work in the Erlang virtual machine (Elixir) and I regularly write tests against common distributed-systems failures. You don't need property tests (or Jepsen/Maelstrom-style fuzzing) for your 95% scenarios. Distributed systems are not magically failure-prone.
        • somewhereoutth 587 days ago
          > TDD also represents a kind of first order thinking that assumes that if the individual parts are correct, the whole will likely be correct. It’s not wrong

          In fact it is not just wrong, but very wrong, as your auto example shows. Unfortunately engineers are not trained/socialised to think as holistically as perhaps they should be.

          • kazinator 587 days ago
            The non-strawman interpretation of TDD is the converse: if the individual parts are not right, then the whole will probably be garbage.

            It's worth it to apply TDD to the pieces to which TDD is applicable. If not strict TDD, then at least "test first" weak TDD.

            The best candidates for TDD are libraries that implement pure data transformations with minimal integration with anything else.

            (I suspect that the rabid TDD advocates mostly work in areas where the majority of the code is like that. CRUD work with predictable control and data flows.)

            • wenc 587 days ago
              Yes. Agree about TDD being more suited to low dependency software like CRUD apps or self contained libraries.

              Also sometimes even if the individual parts aren’t right, the whole can still work.

              Consider a function that handles all cases except for one that is rare, and testing for that case is expensive.

              The overall system, however, can be written to provide mitigations when composed: e.g. each individual function does a sanity check on its inputs. The individual function itself might be wrong (incomplete), but in the larger system it is inconsequential.

              Test effort is not 1:1. Sometimes the test can be many times as complicated to write and maintain as the function being tested, because it has to generate all the corner cases (and has to regenerate them if anything changes upstream). If you're testing a function in the middle of a very complex data pipeline, you have to regenerate all the artifacts upstream.

              Whereas sometimes an untested function can be written in such a way that it is inherently correct from first principles. An extreme analogy would be the Collatz conjecture. If you start by first writing the tests, you'd be writing an almost infinite corpus of tests; on the flip side, writing the Collatz function is extremely simple and correct up to a large finite number.
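
              To spell that extreme out (a Python sketch): the function is a few lines and convincing by inspection, while an example-based test suite for it has no natural end.

                def collatz_steps(n: int) -> int:
                    # Steps for n to reach 1 under the 3n+1 map. Writing this
                    # is trivial; writing the tests "first" would mean an
                    # unbounded corpus of input/output examples.
                    steps = 0
                    while n > 1:
                        n = 3 * n + 1 if n % 2 else n // 2
                        steps += 1
                    return steps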

              • crazygringo 587 days ago
                This is completely counter to all my experience.

                Computer code is an inherently brittle thing, and the smallest errors tend to cascade into system crashes. Showstopper bugs are generated from off-by-one errors, incorrect operation around minimum and maximum values, a missing semicolon or comma, etc.

                And doing sanity checks on function inputs addresses only a small proportion of bugs.

                I don't know what kind of programming you do, but the idea that a wrong function becomes inconsequential in a larger system... I feel like that just never happens unless the function was redundant and unnecessary in the first place. A wrong function bringing down the larger system is the only kind of programming I've ever seen.

                Physical unit tolerances don't seem like a useful analogy in programming at all. At best, maybe in sysops regarding provisioning, caches, API limits, etc. But not for code.

                • wenc 587 days ago
                  > I don't know what kind of programming you do, but the idea that a wrong function becomes inconsequential in a larger system... I feel like that just never happens unless the function was redundant and unnecessary in the first place. A wrong function brings down the larger system feels like the only kind of programming I've ever seen.

                  I think we’re talking extremes here. An egregiously wrong function can bring down a system if it’s wrong in just the right ways and it’s a critical dependency.

                  But if you look at most code bases, many have untested corner cases (which they’re likely not handling) but the code base keeps chugging along.

                  Many codebases are probably doing something wrong today (hence GitHub issues). But to catastrophize that seems hyperbolic to me. Most software with mistakes still works. Many GitHub issues aren't resolved, but the program still runs. Good designs have redundancy and resilience.

                • YZF 587 days ago
                  A counter to that could be all the little issues found by fuzz testing legacy systems and by static analysis, often in widely used software where those issues indeed did not manifest. Unit tests also don't prove correctness; they're only as good as the test writer's ability to predict failure.

                  I can tell you that most (customer) issues in the software I work on are systemic issues: the database (widely used OSS) can corrupt under certain scenarios. They can be races, behaviour under failure modes, lack of correctness at some higher order (e.g. having half-failed operations), the system not implementing the intent of the user. I would say those are very rarely issues that would have been caught by unit testing. Now, integration testing and stress testing will uncover a lot of those. This is a large-scale distributed system.

                  Now sometimes, after the fact, a unit test can somehow be created to reproduce the specific failure, possibly at great effort. That's not really that useful at this point. You wouldn't write that in advance for every possible failure scenario (infinite).

                  All that said, sometimes there are attacks on systems that relate to some corner-case errors, which is a problem. Static analysis and fuzzers are IMO more useful tools in this realm as well. Also, I think I'm hearing "dynamic/interpreted" language there (missing semicolons???). Those might need more unit testing to make up for the lack of compiler checks/warnings/type safety, for sure.

                  The other point that's often missed is the drag that "bad" tests add to a project. Since it's so hard to write good tests, when you mandate testing you end up with a pile of garbage that makes it harder to make progress. Another factor is the additional hit you take maintaining your tests.

                  Basically choosing the right kind of tests, at the right level, is judgement. You use the right tool for the right job. I rarely use TDD but I have used it in cases where the problem can relatively easily be stated in terms of tests and it helps me get quick feedback on my code.

                  EDIT: Also, as another extreme thought ;) some software out there could be working because some function isn't behaving as expected. There's lots of C code out there that uses things that are technically UB but that do actually have some guarantee under some precise circumstances (bad idea, but what can you do). In this case the unit test would pass despite the code being incorrect.

                • OrderlyTiamat 586 days ago
                  I work in software testing, and I've seen this many times actually. Small bugs that I notice because I'm actually reading the code, which became inconsequential because that code path is never used anymore or the result is now discarded, or any of a number of things that change the execution environment of that piece of code.

                  If anything I'm wondering the same question about you. If you find it so inconceivable that a bug could be hiding in working code, held up by the calling environment around it, then you must not have worked with big or even moderately sized codebases at all.

              • P5fRxh5kUvp2th 587 days ago
                > sometimes even if the individual parts aren’t right, the whole can still work.

                And in fact, fault tolerance built on the assumption that all of its parts are unreliable and will fail quickly makes for more fault-tolerant systems.

                The _processes and attitude_ that cause many individual parts to be incorrect will also cause the overall system to be crap. There's a definite correlation, but that correlation isn't about any specific part.

              • kazinator 587 days ago
                > Also sometimes even if the individual parts aren’t right, the whole can still work.

                Yes it can, but the foundation is shaky, and having to make changes to it will tend to be scary.

                • wenc 587 days ago
                  Yes. Though my point is not that we should aim for a shaky foundation, but that if one is a craftsman one ought to know where to make trade offs to allow some parts of the code to be shaky with no consequences. This ability to understand how to trade off perfection for time — when appropriate — is what distinguishes senior from junior developers. The idea of ~100% correct code base is an ideal — it’s achieved only rarely on very mature code bases (eg TeX, SQLite).

                  Code is ultimately organic, and experienced developers know where the code needs to be 100% and where the code can flex if needed. People have this idea that code is like mathematics, where if one part fails, every part fails. To me, if that is so, the design is too tight and brittle and will not ship on time. But well-designed code is more like an organism that has resilience to variation.

          • hbn 587 days ago
            If individual parts being correct meant the whole thing will be correct, that means if you have a good sturdy propeller and you put it on top of your working car, then you have a working helicopter.
        • pmarreck 587 days ago
          > writing code takes double the time when you also have to write the tests

          this time is more than made up for by the subsequent reduction in debugging, refactoring and maintenance time, in my experience, at least for anything actively being used and updated

          • tsimionescu 587 days ago
            Yes, if you were right about the requirements, even if they weren't well specified. But if it turns out you implemented the wrong thing (either because the requirements simply changed for external reasons, or because you missed some fundamental aspect), then you wouldn't have had to debug, refactor or maintain that initial code, and the initial tests will probably be completely useless even if you end up salvaging some of the initial implementation.
            • twic 587 days ago
              No, that's a separate issue, that eschewing TDD doesn't help you with.

              With TDD, the inner programming loop is:

              1. form a belief about requirements

              2. write a test to express that belief

              3. write code to make that test pass

              Without TDD, the loop is:

              1. form a belief about requirements

              2. write code to express that belief

              3. futz around with manual testing, REPLs, and after-the-fact testing until you're sufficiently happy that the code actually does express that belief

              And in my experience, the former loop is faster at producing working code.

              • ipaddr 587 days ago
                It usually works out like..

                  form a belief about a requirement
                  write a test
                  test fails
                  write code
                  test fails
                  add debug info to code
                  test fails no debug showing
                  call code directly and see debug code
                  change assert
                  test fails
                  rewrite test
                  test succeed
                  output test class data.. false positive checking null equals null
                  rewrite test
                  test passes
                  forget original purpose and stare at green passing tests with pride.
                • xxs 587 days ago
                  > add debug info to code

                  On a more serious note: just learn to use a debugger, and add asserts if need be. To me TDD only helps by giving you something that runs your code - but that's pretty much it. If you have other test harness options, I fail to see the benefits outside conference talks and book authoring.

                  • pmarreck 579 days ago
                    my professional opinion is that having to resort to a debugger is a bad-design, bad-testing code smell
              • laserlight 587 days ago
                Yes, so much this. I don't really understand how people could object to TDD. It's just about systematizing what one does manually otherwise. As a bonus, it's not subject to the biases of after-the-fact testing.
                • pjmlp 587 days ago
                  Test the belief of recovery from a network split in distributed commit.
                  • laserlight 587 days ago
                    I don't get the point. Is it something not testable? If it's testable, it's TDD-able.
                    • pjmlp 587 days ago
                      TDD sales pitch is not to write any code without an existing test.
              • minimeme 587 days ago
                That's my experience also! It's all about the faster feedback and the confidence the tests provide.
          • thrwyoilarticle 587 days ago
            >at least for anything actively being used and updated

            This implies that the strength of the tests appears when the code is modified?

            Like the article says, TDD doesn't own the concept of testing. You can write good tests without submitting yourself to a dogma of red/green, minimum-passing (local-maximum-seeking) code. Debating TDD is tough because it gets bogged down with having to explain how you're not a troglodyte who writes buggy untested code.

            And - on a snarkier note - this is a better argument against dynamic typing than for TDD.

          • wenc 587 days ago
            In theory, I agree. In practice, at least for my projects, the results are mixed.
        • dathanb82 587 days ago
          I can't remember the last time the speed at which I could physically produce code was the bottleneck in a project. It's all about design and thinking through and documenting the edge cases, and coming up with new edge cases and going back to the design. By the time we know what we're going to write, writing the code isn't the bottleneck, and even if it takes twice as long, that's fine, especially since I generally end up designing a more usable interface as a result of using it (in my tests) as it's being built.
        • 1123581321 587 days ago
          The automaker analogy is a better fit for the “practice” of not handling errors on the assumption a function can’t return an unexpected value.

          TDD is actually quite good at manufacturing methods to reasonable tolerance, which the Japanese did require.

          Higher level tests ensure the functional output is correct and typically don’t have built in any reliance on unit tests.

        • majikandy 587 days ago
          > The problem I’ve run into is that when you’re iterating fast, writing code takes double the time when you also have to write the tests.

          The times I have believed this myself often turned out to be wrong once the full cost of development was taken into account, and I came back to the code later wishing I had tests around it. So you end up TDDing only the bug fix, exercising that part of the code with the failing test and then the code correction.

        • ParetoOptimal 587 days ago
          > The problem I’ve run into is that when you’re iterating fast, writing code takes double the time when you also have to write the tests.

          That was the time it took to actually write working code for that feature.

          The version of "working code" that took 50% as long was just a con to fool people into thinking you'd finished until they move onto other things and a "perfectly acceptable" regression is discovered.

          • discreteevent 587 days ago
            The reason someone is iterating fast is usually because they are trying to discover the best solution to a problem by building things. Once they have found this then they can write "working code". But they don't want to have to write tests for all the approaches that didn't work and will be thrown away after the prototyping phase.
      • tsimionescu 587 days ago
        There are two problems I've seen with this approach. One is that sometimes the feature you implemented and tested turns out to be wrong.

        Say, initially you were told "if I click this button the status should update to complete", you write the test, you implement the code, rinse and repeat until a demo. During the demo, you discover that actually they'd rather the button become a slider, and it shouldn't say Complete when it's pressed, it should show a percent as you pull it more and more. Now, all the extra care you did to make sure the initial implementation was correct turns out to be useless. It would have been better to have spent half the time on a buggy version of the initial feature, and found out sooner that you need to fundamentally change the code by showing your clients what it looks like.

        Of course, if the feature doesn't turn out to be wrong, then TDD was great - not only is your code working, you probably even finished faster than if you had started with a first pass + bug fixing later.

        But I agree with the GP: unclear and changing requirements + TDD is a recipe for wasted time polishing throw-away code.

        Edit: the second problem is well addressed by a sibling comment, related to complex interactions.

        • generalk 587 days ago

          > Say, initially you were told "if I click this button the status should update to complete", you write the test, you implement the code, rinse and repeat until a demo. During the demo, you discover that actually they'd rather the button become a slider, and it shouldn't say Complete when it's pressed, it should show a percent as you pull it more and more. Now, all the extra care you did to make sure the initial implementation was correct turns out to be useless.

          Sure, this happens. You work on a thing, put it in front of the folks who asked for it, and they realize they wanted something slightly different. Or they just plain don't want the thing at all.

          This is an issue that's solved by something like Agile (frequent and regular stakeholder review, short cycle time) and has little to do with whether or not you've written tests first and let them guide your implementation; wrote the tests after the implementation was finished; or just simply chucked automated testing in the trash.

          Either way, you've gotta make some unexpected changes. For me, I've really liked having the tests guide my implementation. Using your example, I may need to have a "percent complete" concept, which I'll only implement when a test fails because I don't have it, and I'll implement it by doing the simplest thing to get it to pass. If I approach it directly and hack something together I run the risk of overcomplicating the implementation based on what I imagine I'll need.

          I don't have an opinion on how anyone else approaches writing complex systems, but I know what's worked for me and what hasn't.

      • andix 587 days ago
        TDD usually means that you write the tests before writing the code.

        Writing tests as you write the code is just regular and proper software development.

        • patcon 587 days ago
          Respectfully, I think the distinction they're making is that "writing ONE failing test, then the code to pass it" is very different from "write a whole test suite, and then write the code to pass it".

          The former is more likely to adapt to the learning inherent in the writing of code, which someone above mentioned was easy to lose in TDD :)

        • Spivak 587 days ago
          Odd, I was taught TDD as

          1. Write test, see that it fails the way you expect.

          2. Write code that makes the test pass.

          3. Write test...

          and be secure that you can fearlessly refactor and not backslide while you play with different ideas so long as all your tests stay green.

          I would get overwhelmed so fast if I just had 50 failing tests and no implementation.

          • imran-iq 587 days ago
            That's the right way to do TDD, see this talk: https://www.youtube.com/watch?v=EZ05e7EMOLM

            One of the above comments mentions BDD as a close cousin of TDD, but that is wrong: TDD is actually BDD, as you should only be testing behaviours, which is what allows you to "fearlessly refactor"

          • thrwyoilarticle 587 days ago
            I don't think TDD gets to own the concept of having a test for what you're refactoring. That's just good practice & doesn't require that you make it fail first.
          • pjmlp 587 days ago
            Now do that for rendering a rotating cube in Vulkan with pbr shading.
            • Spivak 587 days ago
              This falls under the category of problems where verifying, hell describing, the result is harder than the code to produce it.

              Here’s how I would do it. The challenge is that the result can’t be precisely defined because it’s essentially art. But with TDD the assertions don’t actually have to live in code. All we have to do is make incremental, verifiable progress that lets us fearlessly make changes.

              So I would set up my viewport as a grid where in each square there will eventually live a rendered image or animation. The first one blank, the second one a dot, the third a square, the fourth with color, the fifth a rhombus, the sixth with two disjoint rhombuses …

              When you’re satisfied with each box you copy/paste the code into the next one and work on the next test always rendering the previous frames. So you can always reference all the previous working states and just start over if needed.

              So the TDD flow becomes

              1. Write down what you want the result of the next box to look like.

              2. Start with the previous iteration and make changes until it looks like what you wanted.

              3. Write down what you want…

              • kqr 587 days ago
                Using wetware test oracles is underappreciated. You can't do it in a dumb way, of course, but with a basic grasp of statistics and hypothesis testing you can get very far with sprinkles of manual verification of test results.

                (Note: manual verification is not the same as manual execution!)

                • pjmlp 587 days ago
                  That isn't what TDD preaches.
                  • kqr 586 days ago
                    TDD also underappreciates the ROI of that.
              • pjmlp 587 days ago
                The TDD sales pitch is to not write any code without a test for it that fails first.
                • Spivak 586 days ago
                  And that’s happening. The next test is “the 8th box should contain a rhombus slowly rotating clockwise” and it’s failing because the box is currently empty. So now you write code.
    • twic 587 days ago
      No, this is nonsense. You don't write the test coverage up front!

      You think of a small chunk of functionality you are confident about, write the tests for that (some people say just one test, I am happy with up to three or so), then write the implementation that makes those tests pass. Then you refactor. Then you pick off another chunk and 20 GOTO 10.

      If at some point it turns out your belief about the functionality was wrong, fine. Delete the tests for that bit, delete the code for it, make sure no other tests are broken, refactor, and 20 GOTO 10 again.

      The process of TDD is precisely about writing code when you don't know how the program is going to work upfront!

      On the other hand, implementing a well-defined spec is when TDD is much less useful, because you have a rigid structure to work to in both implementation and testing.

      I think the biggest problem with TDD is that completely mistaken ideas about it are so widespread that comments like this get upvoted to the top even on HN.

      • switchbak 587 days ago
        I feel like I'm in crazy town in this thread. Most of the replies seem to be misunderstanding the intent of TDD, and yours is one of the few that gets it right.

        Is general understanding of TDD really that far off the mark? I had no idea, and I've been doing this for essentially 2 decades now.

        • pjmlp 587 days ago
          No, we really understand that the whole religion of only writing code after a failing test only applies to niche cases of libraries for headless applications in monoliths.

          Let's say my designer comes to me and wants bump-mapped textures in the engine; well, I cannot touch that compute shader without writing a test first, so any suggestions for a TDD framework for GPU shaders?

          • pydry 587 days ago
            I've done a form of TDD where I have a test scenario set up that generates a picture, and I tweak the code until the picture looks right and then "freeze" it.

            Once frozen, the test compares the picture (or other artefact) against the generated picture (or artefact).

            I'm not sure if it'd be useful in your situation though.
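
            Roughly, the frozen test can end up looking like this (a sketch assuming Python with Pillow; render_scene is a stand-in for whatever produces the picture):

              import shutil
              from pathlib import Path

              from PIL import Image, ImageChops

              GOLDEN = Path("golden/render.png")

              def test_render_matches_frozen_picture(tmp_path):
                  out = tmp_path / "render.png"
                  render_scene(out)             # stand-in: writes the PNG under test
                  if not GOLDEN.exists():       # first run: eyeball it, then freeze it
                      GOLDEN.parent.mkdir(parents=True, exist_ok=True)
                      shutil.copy(out, GOLDEN)
                  diff = ImageChops.difference(Image.open(out), Image.open(GOLDEN))
                  assert diff.getbbox() is None # any differing pixel fails the test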

            • pjmlp 587 days ago
              Now apply that to a full native GUI application, while keeping in synch with UI/UX requirements and the rule of no code without failing tests.
              • pydry 587 days ago
                Yeah, I've kind of done this.

                The code changes that changed the UI - even changing some CSS - would cause a screenshot comparison failure on certain steps in the test. If it was what we expected, then we overwrote the old screenshot with a new one.

                It isn't exactly the same as the TDD process, because sometimes you write the code first (e.g. CSS), eyeball it and then rebuild the screenshot if it looks correct.

                I'd say it's close enough though.

                I won't pretend it worked perfectly. The screenshot comparison algorithms were sometimes flaky, UIs can be surprisingly non-deterministic, and you need to have a process to pull the database and update screenshots accordingly. However, it's the approach I'd prefer to take in the future (I haven't done UIs for about 3 years now).

                I also wasn't religious about covering every single scenario with tests, but I could have been. The company moved fast and sometimes they just wanted quick, not reliable.

                • pjmlp 587 days ago
                  That is the whole deal: it was the Web, so kind of easier than native given its source-based form, and even then you had to bend the rules of the TDD religion.
                  • switchbak 583 days ago
                    I think you're being a little uncharitable to the TDD folks here. Sure, the early writing was very dogmatic, but real-world TDD doesn't seem to me to be as rigid as you describe.

                    Or perhaps you've worked with some real TDD zealots, that doesn't sound like fun.

                    The folks I've worked with use these as guiding recommendations, not binding dictates.

                    For some of the UI stuff you mentioned elsewhere, I've seen a stronger focus on testing not just business logic, but view logic as well (where possible), but generally not to the degree of testing the rendered output of the UI. Maybe that's a thing somewhere, but I haven't personally seen it.

                  • pydry 587 days ago
                    The same sort of thing should be possible for various kinds of native too. You'll need a selenium-esque library that can interact with UIs and take screenshots in your environment.

                    But yeah, if you don't have one of those tools, or it's super unreliable, or it's only available in a language you can't use, then you can't do this.

                    I don't really consider this to be bending the rules of TDD. It's more like next-gen TDD, IMO.

    • shados 587 days ago
      The big issue I see when people have trouble with TDD is really a cultural one and one around the definition of tests, especially unit tests.

      If you're thinking of unit tests as the thing that catches bugs before going to production and proves your code is correct, and want to write a suite of tests before writing code, that is far beyond the capabilities of most software engineers in most orgs, including my own. Some folks can do it, good for them.

      But if you think of unit tests as a way to make sure individual little bits of your code work as you're writing them (that is, you're testing "the screws" and "the legs" of the table, not the whole table), then it's quite simple and really does save time, and you certainly do not need full specs or even to know what you're doing.

      Write 2-3 simple tests, write a function, write a few more tests, write another function, realize the first function was wrong, replace the tests, write the next function.

      You need to test your code anyway and type systems only catch so much, so even if you're the most agile place ever and have no idea how the code will work, that approach will work fine.

      If you do it right, the tests are trivial to write and are very short and disposable (so you don't feel bad when you have to delete them in the next refactor).

      Do you have a useful test suite to do regression testing at the end? Absolutely not! In the analogy, if you have tests for a screw attaching the leg of a table, and you change the type of legs and the screws to hook them up, of course the tests won't work anymore. What you have is a set of disposable but useful specs for every piece of the code though.

      You'll still need to write tests to handle regressions and integration, but that's okay.
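
      For a concrete flavor of the "screws" idea, a Python sketch (parse_price is an invented internal helper): the tests pin the helper down while it's being written, and get deleted without guilt when the legs change.

        def parse_price(text: str) -> int:
            # invented internal helper: "$12.34" -> 1234 cents
            dollars, cents = text.lstrip("$").split(".")
            return int(dollars) * 100 + int(cents)

        # Disposable "screw" tests: trivial to write, trivial to delete
        # when this helper is refactored away in the next redesign.
        def test_parse_price_with_symbol():
            assert parse_price("$12.34") == 1234

        def test_parse_price_without_symbol():
            assert parse_price("0.99") == 99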

      • Scarblac 587 days ago
        And I think most people who don't write tests in code work that way anyway, just manually -- they F5 the page, or run the code some other way.

        But the end result of writing tests is often that you create a lot of testing tied to what should be implementation details of the code.

        E.g. to write "more testable" code, some people advocate making very small functions. But the public API doesn't change. So if you test only the internal functions, you're just making it harder to refractor.

        • cogman10 587 days ago
          > But the end result of writing tests is often that you create a lot of testing tied to what should be implementation details of the code.

          This is the major issue I have with blind obedience to TDD.

          It often feels like the question of "What SHOULD this be doing" isn't asked and instead what you end up with is a test suite that answers the question "What is this currently doing?"

          If refactoring code causes you to refactor tests, then your tests are too tightly coupled to implementation.

          Perhaps the missing step in TDD is deleting or refactoring the test at the end of the process so you better capture intent rather than stream of consciousness.

          Example: I've seen code that had different code paths to send in a test "logger" to ensure the logger was called at the right locations and said the right messages. That made it difficult to add new information to the logger or add new logger messages. And for what?

          • viceroyalbean 587 days ago
            If your goal is to avoid coupling tests to implementation then TDD seems like the most obvious strategy. You write the test before the implementation, so it is much harder to end up with the coupling than other strategies.
        • sanderjd 586 days ago
          I often TDD little chunks of code, end up deciding they make more sense inside larger methods, and delete the tests. But that's ok, the test was still useful to help me develop the chunk of code.
      • 0x457 587 days ago
        Many people have a wrong perception of TDD. The main idea is to break a large, complicated thing into many small ones until there is nothing left, like you said.

        You're not supposed to write every single test upfront, you write a tiny test first. Then you add more and refactor your code, repeat until there is nothing left of that large complicated thing you were working on.

        There are also people who test stupid things and 3rd party code in their tests, and they either get fatigued by it and/or think their tests are well written.

        • pjmlp 587 days ago
          How do you break things down into tests for a ray tracing algorithm on the GPU?
          • richbradshaw 587 days ago
            Probably start with “when code is run make sure GPU is utilised”.
            • pjmlp 587 days ago
              Don't write any GPU code without a test....
      • thrwyoilarticle 587 days ago
        >If you do it right, the tests are trivial to write and are very short and disposable (so you don't feel bad when you have to delete them in the next refactor).

        The raison d'etre of TDD is that developers can't be trusted to write tests that pass for the right reason - that they can't be trusted to write code that isn't buggy. Yet it depends on them being able to write tests with enough velocity that they're cheap enough to dispose?

      • sanderjd 586 days ago
        Yep, TDD for little chunks of code is really nice, I think of it like just a more structured way to trying things out in a repl as you go (and it works for languages without repls). Even if you decide to never check the test in because the chunk of code ended up being too simple for a regression test to be useful, if it was helpful in testing assumptions while developing the code, that's great.

        But yeah, trying to write all the tests for a whole big component up front, unless it's for something with a stable spec (eg. I once implemented some portions of the websockets spec in servo, and it was awesome to have an executable spec as the tests), is usually an exercise in frustration.

    • larschdk 587 days ago
      I think we should try and separate exploration from implementation. Some of the ugliest, most untestable code bases I have worked with have been the result of someone using exploratory research code for production. It's OK to use code to figure out what you need to build, but you should discard it and create the testable implementation that you need. If you do this, you won't be writing tests up front when exploring the solution space, but you will be when doing the final implementation.
      • codereviewed 587 days ago
        Have you ever had to convince a non-technical boss or client that the exploratory MVP you wrote and showed to them working must be completely rewritten before going into production? I tried that once when I attempted to take us down the TDD route and let me tell you, that did not go over well.

        People blame engineers for not writing tests or doing TDD when, if they did, they would likely be replaced with someone who can churn out code faster. It is rare, IME, to have culture where the measured and slow progress of TDD is an acceptable trade off.

        • lanstin 587 days ago
          Places where software is carrying a great deal of value tend to be more like that. That is, if mistakes can cost $20,000/hour or so, then even the business will back down in the "push now vs. be sure it works" debate.

          As always, the job of a paid software person is to merge what the product people want with what good software quality requires (and what power a future version will unleash). Implement valuable things in software in a way that makes the future of that software better and more powerful.

      • is0tope 587 days ago
        I've always favored exploration before implementation [1]. For me, TDD has immense benefit when adding something well defined, or when fixing bugs. When it comes to building something from scratch, I found it gets in the way of the iterative design process.

        I would however be more amenable to, e.g., prototyping first, and then using that as a guide for TDD. Not sure if there is a name for that approach though. A "spike", maybe?

        [1] https://www.machow.ski/posts/galls-law-and-prototype-driven-...

        • tra3 587 days ago
          I find that past a certain size, even an exploratory code base benefits from having tests. Otherwise, as I'm hacking, I end up breaking existing functionality. Then I spend more time debugging, trying to figure out what changed... What's your experience when it comes to more than a few hundred lines of code?
          • is0tope 587 days ago
            Indeed, but once you start getting to that point I'd argue you are starting to get beyond a prototype. But you raise a good point; I'd say if the intention is to throw the code away (which you probably should), then add only as few tests as will allow you to make progress.
      • andix 587 days ago
        Most projects don’t have the budget to rewrite the code, once it is working.
        • TheCoelacanth 587 days ago
          Most projects don't have the budget not to rewrite the code.
      • gabereiser 587 days ago
        I think this is the reasonable approach I take. It's ok to explore and figure out the what. Once you know (or the business knows) then it's time to write a final spec and test coverage. In the end, the mantra should be "it's just code".
      • happytoexplain 587 days ago
        This makes sense, but I think many (most?) pipelines don't allow for much playtime because they are too rigid and top-down. At best you will convince somebody that a "research task" is needed, but even that is just another thing you have to get done in the same given time frame. Of course this is the fault of management, not of TDD.
    • silversmith 587 days ago
      > The trouble with TDD is that quite often we don't really know how our programs are going to work

      Interesting - for me, that's the only time I truly practice TDD, when I don't know how the code is going to work. It allows me to start with describing the ideal use case - call the API / function I would like to have, describe the response I would expect. Then work on making those expectations a reality. Add more examples. When I run into a non-trivial function deeper down, repeat - write the ideal interface to call, describe the expected response, make it happen.

      For me, TDD is the software definition process itself. And if you start with the ideal interface, chances are you will end up with something above average, instead of whatever happened to fall in place while arranging code blocks.

    • BurningFrog 587 days ago
      Agile, as the name hints, was developed precisely to deal with ever changing requirements. In opposition to various versions of "first define the problem precisely, then implement that in code, and then you're done forever".

      So the TDD OP describes here is not an Agile TDD.

      The normal TDD process is:

          1. add one test
          2. make it (and all others) pass
          3. maybe refactor so code is sane
          4. back to 1, unless you're done.
      
      When requirements change, you go to 1 and start adding or changing tests, iterate until you're done.
      • tra3 587 days ago
        Exactly. Nobody's on board with paying at least twice as much for software though. But that's what you get when things change and you have to refactor BOTH your code AND your tests.
        • mikkergp 587 days ago
          But what is your process for determining the code is correct, and is it really faster and more reliable than writing tests? Sheer force of will? Running it through your brain a few times? Getting peer review? I often find that, all things being equal, tests are just the fastest way to review my own work, even if I hate writing them sometimes.
          • karmelapple 587 days ago
            Tests are literally where our requirements live. To not have automated tests would be to not have well-defined requirements.
            • cogman10 587 days ago
              To have automated tests does not mean you have well-defined requirements.

              I 100% agree with capturing requirements in tests. However, I argue that TDD does not cause that to happen.

              I'd even make a stronger statement. Automated tests that don't capture a requirement should be deleted. Those sorts of tests only serve to hinder future refactoring.

              A good test for a sort method is one that verifies data is sorted at the end of it. A bad test for a sort method is one that checks to see what order elements are visited in the sorting process. I have seen a lot of the "element order visit" style tests but not a whole lot of "did this method sort the data" style tests.
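
              For what it's worth, a sketch of the good kind (JUnit-style; `Arrays.sort` stands in for whatever sort implementation is under test):

                  import static org.junit.jupiter.api.Assertions.assertArrayEquals;

                  import java.util.Arrays;
                  import org.junit.jupiter.api.Test;

                  class SortContractTest {
                      // Tests the requirement ("the data ends up sorted"), not the
                      // algorithm, so it survives a swap from heap sort to tim sort.
                      @Test
                      void sortsData() {
                          int[] data = {3, 1, 2};
                          Arrays.sort(data); // stand-in for the sort under test
                          assertArrayEquals(new int[]{1, 2, 3}, data);
                      }
                  }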

              • BurningFrog 587 days ago
                As a semi-avid TDD:er I agree about what's a good test.

                I don't see the connection between TDD and your bad tests examples though.

                I would test a sort method just the way you describe, using TDD.

                • cogman10 586 days ago
                  Imagine that in the process of implementing the sort method, you decide "I'm going to use a heap sort".

                  So, you say "Ok, I'll need a `heapify` method and a `siftDown` method, so I'm going to write tests to make sure both of those are working properly". But remember, we started this discussion saying "we need a sort method". So now, if you decide "You know what, heap sort is garbage, let's do tim sort instead!", all of a sudden you've got a bunch of useless tests. In the best case, you can simply delete those tests, but devs often get intimidated about deleting such tests: "What if something else needs the `heapify` method?"

                  And that's exactly the problem I was pointing out with the example. We started the convo saying "tests couple implementation", and that's what's happened here. Our tests are making it seem like heap sort is the implementation we should use, when all we needed at the start of this convo was a sorting method.

                  But now imagine we are talking about something way more complicated and/or less well known than a sorting algorithm. Now it becomes a lot harder to sift out which tests are for implementation things and which are for requirements things. Without deleting the "these tests make sure I did a good implementation" tests, future maintainers of the code are left to guess at what's a requirement and what's an implementation detail.

                  • BurningFrog 583 days ago
                    This is very strange and/or confused.

                    All sort methods can and should be tested with the same tests, that assert that unsorted input is converted to sorted output.

                    Choosing heapsort for performance reasons should normally not affect your test suite at all.

                  • MockObject 583 days ago
                    You don't write unit tests for heap sort, you write them for sort. Then you get them to pass using heap sort. Later, you replace heap sort with tim sort, and you can write it quickly and with confidence, because the test suite shows you when you've succeeded.
          • thrwyoilarticle 587 days ago
            You don't need to be doing TDD to be writing tests!
        • 0x457 587 days ago
          To be fair, you have to refactor your code and tests when things change anyway, regardless of the order they were written.
        • randomdata 587 days ago
          Public interfaces should change only under extreme circumstances, so needing to refactor legacy tests should be a rare event. Those legacy tests will help ensure that your public interface hasn't changed as it is extended to support changing requirements. You should not be testing private functions, leaving you free to refactor endlessly behind the public interface. What goes on behind the public interface is to be considered a black box. The code will ultimately be tested by virtue of the public interface being tested.
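
          A minimal sketch of that idea (JUnit-style; `PriceCalculator` is invented, and only its public surface is exercised):

              import static org.junit.jupiter.api.Assertions.assertEquals;

              import org.junit.jupiter.api.Test;

              class PriceCalculator {
                  // Everything behind total() is a black box and can be
                  // refactored freely without breaking the test below.
                  public double total(double amount) {
                      return amount >= 100.0 ? amount * 0.9 : amount;
                  }
              }

              class PriceCalculatorTest {
                  @Test
                  void appliesTenPercentDiscountFromOneHundred() {
                      assertEquals(90.0, new PriceCalculator().total(100.0), 0.001);
                  }
              }
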
          • kcb 587 days ago
            Assuming any piece of code won't or shouldn't be changed feels wrong. If you're a library developer you have to put processes in place to account for possible change. If not, those public interfaces are just as refactorable as any other code, imo. Nothing would be worse than not being able to implement a solution in the best manner because someone decided on an interface a year ago and enshrined it in unit tests.
            • randomdata 587 days ago
              Much, much worse is users having to deal with things randomly breaking after an update because someone decided they could make it better.

              That's not to say you can't seek improvement. The public interface can be expanded without impacting existing uses. If, for example, an existing function doesn't reflect your current view of the world add a new one rather than try to jerry-rig the old one. If the new solutions are radically different such that you are essentially rewriting your code from scratch, a clean break is probably the better route to go.

              If you are confident that existing users are no longer touching your legacy code, remove it rather than refactor it.

          • __ryan__ 587 days ago
            Oh, I didn't consider this. Problem solved then.
        • AnimalMuppet 587 days ago
          But how much do you want to pay for bugs?

          Things change. You change the code in response. What broke? Without the tests, you don't know.

          "Things change" include "you fixed a bug". Bug fixes can create new bugs (the only study I am familiar with says 20-50% probability). Did your bug fix break anything else? How do you know? With good test coverage, you just run the tests. (Yes, the tests are never complete enough. They can be complete enough that they give fairly high confidence, and they can be complete enough to point out a surprising number of bugs.)

          Does that make you pay "at least twice"? No. It makes you pay, yes, but you get a large amount of value back in terms of actually working code.

        • ivan_gammel 587 days ago
          That can actually be an acceptable risk, and quite often it is. There are two conceptually different phases in the SDLC: verification, which proves the implementation is working according to spec, and validation, which proves that the spec matches business expectations. Automated tests work in the first phase, minimizing the risk that when reaching the next phase we will be validating code that wasn't implemented according to spec. If that risk is big enough, accepting the refactoring costs after validation may make a lot of sense.
        • no_wizard 587 days ago
          Is it twice as much? I think unsound architectural practices in software is the root cause of this issue, not red green refactor.

          You aren't doing "double the work" even though it seems that way on paper, unless the problem was solved with brittle architectural foundations and tightly coupled tests.

          At the heart of this problem is most developers don't quite grasp boundary separation intuitively I think.

        • Gibbon1 587 days ago
          Friend of mine says when he has code he doesn't want over-eager jr devs 'refactoring', he writes a couple of Byzantine unit tests to guard it.
        • BurningFrog 587 days ago
          The only way to avoid that is to not have tests.
      • pjmlp 587 days ago
        Add one test for GUI code.....
    • Alex3917 587 days ago
      > The trouble with TDD is that quite often we don't really know how our programs are going to work when we start writing them

      Even if you know exactly how the software is going to work, how would you know if your test cases are written correctly without having the software to run them against? For that reason alone, the whole idea of TDD doesn't even make sense to me.

      • rcxdude 587 days ago
        One reason why TDD can be a good idea is that the cycle involves actually testing the test cases: if you write the test, run it and see that it fails, then write the code, then run the test again and see that it succeeds, you can have some confidence the test is actually testing something (not necessarily the right thing, but at least something). Whereas if you're writing the test after writing the code and expect that it will succeed the first time you run it, it's quite possible to write a test which doesn't actually test anything and will always succeed. (There are other techniques like mutation testing which may get you a more robust indication that your tests actually depend on the state of your software, but I've rarely seen them used in practice.)
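
        As a made-up illustration of that failure mode (names invented), a test written after the code and never seen red can easily be vacuous:

            import static org.junit.jupiter.api.Assertions.assertNotNull;

            import org.junit.jupiter.api.Test;

            class VacuousTest {
                // Looks like a test, but the assertion is guarded by the very
                // condition it checks, so it passes no matter what parse() does.
                @Test
                void parsedResultIsNotNull() {
                    Object result = parse("input"); // hypothetical code under test
                    if (result != null) {
                        assertNotNull(result);
                    }
                }

                private Object parse(String s) { return null; } // stub for the sketch
            }
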
        • dgunay 587 days ago
          Good point. Sometimes I cheat and implement before I test, but often when I do that I'll comment & uncomment the code that actually does the thing so I can see the tests go from red to green.

          Have been meaning to try mutation testing as a way of sussing out tests that cover lines but don't actually test behavior.

          • majikandy 587 days ago
            Snap, me too. It's amazing how many times it catches a simple mistake, like a < instead of a > that you were certain you got the right way round as you wrote it.
        • pmarreck 587 days ago
          This is a great point. You're literally testing the test validity as you go.
      • Byamarro 587 days ago
        Most tests shouldn't be hard to read and reason about, so it shouldn't be a problem. In the case of more complex tests, you can do it like you would during iterative development: debug the tests and the code to figure out what's wrong. Nothing changes here.
      • majikandy 587 days ago
        It’s funny that in your paragraph there I thought you were about to write… “for that reason alone, TDD is the only way that makes sense to me.”

        The reason is, the tests and the code are symbiotic, your tests prove the code works and your code proves the tests are correct. TDD guarantees you always have both of those parts. But granted it is not the only way to get those 2 parts.

        You can still throw into the mix times when a bug is present, and it is this symbiotic relationship that helps you find the bug fast, change the test to exercise the newly discovered desired behaviour, see the test go red for the correct reason and then make the code tweak to pass the test (and see all the other tests still pass).

      • regularfry 587 days ago
        Because the test you've just written (and only that test) fails in the way you expect when you run the suite.
        • twic 587 days ago
          And then passes when you write the code that you think should make it pass. You do catch bugs in the tests, as well as bugs in the implementation!
    • f1shy 587 days ago
      This is exactly my problem with TDD. Note this problem is not only in SW. For any development you do, you could start with designing tests; you can do it for some HW for sure. If you try to apply TDD to any other kind of development, you see pretty fast what the problem is: you are going to design lots of tests that, in the end, will not be used. A total waste. Also, with TDD the focus is often on the quantity of tests and not so much the quality.

      What I find is a much, much better approach, which I call "detached test development" (DTD). The idea is: 2 separate teams get the requirements; one team writes code, the other writes tests. They do not talk to each other! Only when a test does not pass do they have to discuss: is the requirement not clear enough? What is the part that A thought about, but not B? Assignment of tests and code can be mixed, so a team makes code for requirements 1 through 100, and tests for 101 to 200, or something like that. I have had very, very good results with such an approach.

      • switchbak 587 days ago
        Who starts with designing just the tests? I have no idea how this became associated with TDD.

        TDD is a feedback cycle: you write small increments of tests before writing a small bit of code. You don't write a bunch of tests upfront; that'd be silly. The whole point is to integrate small amounts of learning as you go, which helps guide the follow-on tests, as well as the actual implementation, not to mention the questions you need to ask the broader business.

        Your DTD idea has been tried a lot in prior decades. In fact, as a student I was on one of those testing teams. It's a terrible idea: throwing code over a wall like that is a great way to radically increase the latency of communication, and to have a raft of things get missed.

        I have no idea why there are such common misconceptions about what TDD is. Maybe folks are being taught some really bad ideas here?

      • EddySchauHai 587 days ago
        > Also, with TDD the focus is often on the quantity of tests and not so much the quality.

        100%. Metrics of quality are really, really hard to define in a way that is both productive and not gamed by engineers.

        > What I find is a much, much better approach, which I call "detached test development" (DTD)

        I'm a test engineer, and some companies do 'embed' an SDET within a team like you mention - it's not quite that clear-cut (they can discuss), but it's still one person implementing and another testing.

        I'm always happy to see people with thoughts on testing as a core part of good engineering rather than an afterthought/annoyance :)

      • ivan_gammel 587 days ago
        What you described is quite a common role for a QA automation team, but it does not really replace TDD. A separate team working on a test can do it only by relying on a remote contract (e.g. API, UI or database schema); they cannot test local contracts like the public interface of a class, because that would require that code to already be written. In TDD you often write the code AND the test at the same time, integrating the test and the code at compile time.
      • thrwyoilarticle 587 days ago
        >2 separate teams get the requirements; one team writes code, the other writes tests.

        This feels a bit like when you write a layer of encapsulation to try to make a problem easier only to discover that all of the complexity is now in the interface. Isn't converting the PO's requirements into good, testable requirements the hard technical bit?

    • no_wizard 587 days ago
      That's kind of TDD's core point. You don't really know upfront, so you write tests to validate what you can define up front, and through that, you discover other things that were not accounted for, and the cycle continues until you have a working system that satisfies the requirements. Then all those tests serve as a basic form of documentation & reasonable validation of the software, so when further modifications are desired, you don't break what you already know to be reasonably valid.

      Therefore, TDD's secret sauce is in concretely forcing developers to think through requirements, mental models etc. and quantify them in some way. When you hit a block, you need to ask yourself what's missing, then figure it out, and continue onward, making adjustments along the way.

      This is quite malleable to unknown unknowns etc.

      I think the problem is most people just aren't chunking down the steps of creating a solution enough. I'd argue that the core way of approaching TDD fights most human behavioral traits. It forces a sort of abstract level of reasoning about something that lets you break things down into reasonable chunks.

      • pjmlp 587 days ago
        I doubt ZFS authors would have succeeded designing it with TDD.
        • no_wizard 587 days ago
          What's inherent about this problem that wouldn't benefit from chunking things into digestible, iterative parts that lend themselves nicely to the TDD approach as I described?
          • pjmlp 587 days ago
            Don't write any code without tests doesn't provide a path to data structure design, or device drivers infrastructure.
            • no_wizard 585 days ago
              What about behaviors and expectations? At the end of the day you're verifying the behaviors and expectations of software. I mean, when designing a solution, you need to think these things through. TDD complements this just the same.
    • mcv 587 days ago
      Exactly. I use TDD in situations where it fits. And when it does, it's absolutely great. But there are many situations where it doesn't fit.

      TDD is not a silver bullet, it's one tool among many.

      • majikandy 587 days ago
        I find it as close as I have ever found to a silver bullet.
    • yoden 587 days ago
      > test coverage gets in the way of the iterative design process. In theory TDD should work as part of that iterative design, but in practice it means a growing collection of broken tests and tests for parts of the program that end up being completely irrelevant.

      So much of this is because TDD has become synonymous with unit testing, and specifically solitary unit testing of minimally sized units, even though that was often not the original intent of the originators of unit testing. These tests are tightly coupled to your unit decomposition. Not the unit implementation (unless they're just bad UTs), but the decomposition of the software into particular units/interfaces. Then the decomposition becomes very hard to change because the tests are exactly coupled to it.

      If you take a higher view of unit testing, such as what is suggested by Martin Fowler, a lot of these problems go away. Tests can be medium level and that's fine. You don't waste a bunch of time building mocks for abstractions you ultimately don't need. Decompositions are easier to change. Tests may be more flaky, but you can always improve that later once you've understood your requirements better. Tests are quicker to write, and they're more easily aligned with actual user requirements rather than made up unit boundaries. When those requirements change, it's obvious which tests are now useless. Since tests are decoupled from the lowest level implementation details, it's cheap to evolve those details to optimize implementation details when your performance needs change.

    • eyelidlessness 587 days ago
      > The trouble with TDD is that quite often we don't really know how our programs are going to work when we start writing them, and often make design choices iteratively as we start to realize how our software should behave.

      This is a trouble I often see expressed about static types. And it’s an intuition I shared before embracing both. Thing is, embracing both helped me overcome the trouble in most cases.

      - If I have a type interface, there I have the shape of the definition up front. It’s already beginning to help verify the approach that’ll form within that shape.

      - Each time I write a failing test, there I have begun to define the expected behavior. Combined with types, this also helps verify that the interface is appropriate, as the article discusses, though not in terms of types. My point is that it’s also verifying the initial definition.

      Combined, types and tests are (at least a substantial part of) the definition. Writing them up front is an act of defining the software up front.

      I’m not saying this works for everyone or for every use case. I find it works well for me in the majority of cases, and that the exception tends to be when integrating with systems I don’t fully understand and which subset of their APIs are appropriate for my solution. Even so writing tests (and even sometimes types for those systems, though this is mostly a thing in gradually typed languages) often helps lead me to that clarity. Again, it helps me define up front.

      All of this, for what it’s worth, is why I also find the semantics of BDD helpful: they’re explicit about tests being a spec.

    • grepLeigh 587 days ago
      > Unfortunately most software is just not well defined up front.

      This is true, and I think that's why TDD is a valuable exercise to disambiguate requirements.

      You don't need to take an all/nothing approach. Even if you clarify 15-20% of the requirements enough to write tests before code, that's a great place to begin iterating on the murky 80%.

    • ParetoOptimal 587 days ago
      >Unfortunately most software is just not well defined up front.

      Because for years people have had practice with defining software iteratively, whether by choice or forced by deadlines and agile.

      That doesn't inherently make one or the other harder, it's just another familiarity problem.

      TDD goes nicely with top-down design, using something like Haskell's undefined to stub out functionality that typechecks, along with its where clauses.

          myFunction = haveAParty . worldPeace . fixPoverty $ world
              where worldPeace = undefined
                    haveAParty = undefined
                    fixPoverty = undefined
      
      Iterative designs usually suck to maintain and use because they reflect the organizational structure of your company. That'll happen anyway to an extent, but better abstractions that make future you and future co-workers' lives easier are totally worth it.
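
      For readers not fluent in Haskell, a rough Java analogue of the same top-down stubbing (hypothetical names; throwing stubs stand in for `undefined`):

          // The pipeline compiles and its shape is fixed before any stage works.
          class World {}

          class TopDown {
              World myFunction(World world) {
                  return haveAParty(worldPeace(fixPoverty(world)));
              }

              World fixPoverty(World w) { throw new UnsupportedOperationException("TODO"); }
              World worldPeace(World w) { throw new UnsupportedOperationException("TODO"); }
              World haveAParty(World w) { throw new UnsupportedOperationException("TODO"); }
          }
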
    • julianlam 587 days ago
      I often come up with test cases (just the cases, not the actual logic) while writing the feature. However I am never in the mood to context switch to write the test, so I'll do the bare minimum. I'll flip over to the test file and write the `it()` boilerplate with the one-line test title and flip back to writing the feature.

      By the time I've reached a point where the feature can actually be tested, I end up with a pretty good skeleton of what tests should be written.

      There's a hidden benefit to doing this, actually. It frees up your brain from keeping that running tally of "the feature should do X" and "the feature should guard against Y", etc. (the very items that go poof when you get distracted, mind you)

      • majikandy 587 days ago
        I seem to remember this being mentioned in the original TDD book: brain-dump the next test scenario title you think of, so as to get it out of your head and get back to the current scenario you are trying to make pass. Same idea as above: don't context switch away from the part of the feature you are trying to get to work.
    • waynesonfire 587 days ago
      jeez, well defined spec? what a weird concept. Instead, we took a complete 180 and all we get are weekly sprints. just start coding, don't spend time understanding your problem. what a terrible concept.
    • vrotaru 587 days ago
      Even for something which is well defined up front, this can be of dubious value. Converting a positive integer less than 3000 to Roman numerals is a well-defined task. Now, if you try to write such a program using TDD, what do you think you will end up with?

      Try it. Write a test for 1, and an implementation which passes that test, then for 2, and so on.

      Below is something written without any TDD (in Java):

          // Converts one decimal digit to Roman numerals, given the symbols
          // for one, five, and ten times the current place value.
          private static String convert(int digit, String one, String half, String ten) {
              switch (digit) {
                  case 0: return "";
                  case 1: return one;
                  case 2: return one + one;
                  case 3: return one + one + one;
                  case 4: return one + half;
                  case 5: return half;
                  case 6: return half + one;
                  case 7: return half + one + one;
                  case 8: return half + one + one + one;
                  case 9: return one + ten;
                  default:
                      throw new IllegalArgumentException("Digit out of range 0-9: " + digit);
              }
          }

          public static String convert(int n) {
              if (n < 0 || n > 3000) { // reject negative input as well as numbers above 3000
                  throw new IllegalArgumentException("Number out of range 0-3000: " + n);
              }

              return convert(n / 1000, "M", "", "")
                   + convert((n / 100) % 10, "C", "D", "M")
                   + convert((n / 10) % 10, "X", "L", "C")
                   + convert(n % 10, "I", "V", "X");
          }
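
      For comparison, the TDD version would grow out of tests like these, added one at a time (JUnit-style; assume the methods above live in a class named `Roman`, a name invented here):

          import static org.junit.jupiter.api.Assertions.assertEquals;

          import org.junit.jupiter.api.Test;

          class RomanTest {
              // Each test would be added one at a time, red-green-refactor.
              @Test void one()      { assertEquals("I", Roman.convert(1)); }
              @Test void two()      { assertEquals("II", Roman.convert(2)); }
              @Test void four()     { assertEquals("IV", Roman.convert(4)); }
              @Test void nineteen() { assertEquals("XIX", Roman.convert(19)); }
              @Test void mmxiv()    { assertEquals("MMXIV", Roman.convert(2014)); }
          }
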
    • AnimalMuppet 587 days ago
      > In theory TDD should work as part of that iterative design, but in practice it means a growing collection of broken tests and tests for parts of the program that end up being completely irrelevant.

      If you have "a growing collection of broken tests", that's not TDD. That's "they told us we have to have tests, so we wrote some, but we don't actually want them enough to maintain them, so instead we ignore them".

      Tests help massively with iterating a design on a partly-implemented code base. I start with the existing tests running. I iterate by changing some parts. Did that break anything else? How do I know? Well, I run the tests. Oh, those four tests broke. That one is no longer relevant; I delete it. That other one is testing behavior that changed; I fix it for the new reality. Those other two... why are they breaking? Those are showing me unintended consequences of my change. I think very carefully about what they're showing me, and decide if I want the code to do that. If yes, I fix the test; if not, I fix the code. At the end, I've got working tests again, and I've got a solid basis for believing that the code does what I think it does.

    • mirzap 587 days ago
      > The trouble with TDD is that quite often we don't really know how our programs are going to work

      > The obvious exception to this, where I still use TDD, is when implementing a well defined spec.

      From my understanding (and experience), TDD is quite the opposite. It's most useful when you don't have the spec and don't have a clue how the software will work in the end. TDD creates the spec, iteratively.

      • pjmlp 587 days ago
        Unless we are talking about any kind of GUI.
    • Buttons840 587 days ago
      When I've been serious about testing I'll usually:

          1. Hack in what I want in some exploratory way
          2. Write good tests
          3. Delete my hacks from step 1, and ensure all my new tests now fail
          4. Re-implement what I hacked together in step 1
          5. Ensure all tests pass
      
      This allows you to explore while still retaining the benefits of TDD.
      • gleenn 587 days ago
        There's a name for it, it's called a "spike". You write a bunch of exploratory stuff, get the idea right, throw it all away (without even writing tests) and then come back doing TDD.
        • Double_a_92 587 days ago
          Also not sure why you are getting downvoted. We call it a "breakthrough", as in piercing through all the layers to connect one simple use case from front to end.

          Once we've established that it works properly, we think of a way to do it cleanly, with tests, for all the other use cases.

          (In the book "The Pragmatic Programmer" it's called "tracer bullets / code".)

        • majikandy 587 days ago
          Not sure why you got downvotes for this. It is a very effective technique.

          Since typing speed was never the bottleneck in software development, throwing away the little bit of code you wrote is not expensive. And writing it back in with TDD is incredibly efficient.

    • karmelapple 587 days ago
      For the software you're thinking about, do you have specific use cases or users in mind? Or are you building, say, an app for the first time, perhaps for a very early stage startup that is nowhere close to market fit yet?

      We typically write acceptance tests, and they have been helpful either early on or later in our product development lifecycle.

      Even if software isn't defined upfront, the end goal is likely defined upfront, isn't it? "User X should be able to get data about a car," or "User Y should be able to add a star rating to this review," etc.

      If you're building a product where you're regularly throwing out large parts of the UI / functionality, though, I suppose it could be bad. But as a small startup, we have almost never been in that situation over the many years we've been in business.

    • jonstewart 587 days ago
      It's funny, because I feel like TDD -- not just unit-testing, but TDD -- is most helpful when things aren't well-defined. I think back to "what's the simplest test that could fail?" and it helps me focus on getting some small piece done. From there, it snowballs and the code emerges. Obviously it's not always perfect, and something learned along the way spurs refactoring/redesign. That always strikes me as a natural process.

      In many ways I guess I lean maximalist in my practices, and find it helpful, but I'd readily concede that the maximalist advocates are annoying and off-putting. I once had the opportunity to program with Ward Cunningham for a weekend, and it was a completely easygoing and pragmatic experience.

    • bitwize 587 days ago
      And this is why you use spike solutions, to explore the problem space without the constraints of TDD.

      But spikes are written to be thrown away. You never put them into production. Production code is always written against some preexisting test, otherwise it is by definition broken.

    • gregmac 587 days ago
      > it's impossible to write adequate test coverage up front

      I'm not sure what you mean by this. Why are the tests you're writing not "adequate" for the code you're testing?

      If I read into this that you're using code coverage as a metric -- and perhaps even striving for as close to 100% as possible -- I'd argue that's not useful. Code coverage, as a goal, is perhaps even harmful. You can have 100% code coverage and still miss important scenarios -- this means the software can still be wrong, despite the huge effort put into getting 100% coverage and having all tests both correct and passing.

    • jwarden 587 days ago
      I wish I could remember who wrote the essay with the idea of tests as an investment in protecting functionality. When, after a bit of experimentation or iteration, you think you have figured out more or less one part of how your software should behave, then you want to protect that result. It is worth investing in writing and maintaining a test to make sure you don't accidentally break this functionality.

      Functionality based on a set of initial specs and a hazy understanding of the actual problem you are trying to solve might, on the other hand, not be worth investing in protecting.

    • majikandy 587 days ago
      It sounds a little like you are trying to write all the tests to the spec up front? With TDD you are still allowed to change design choices as you go and as you realise how you want it to behave. That's why the tests are written one by one. In my experience, TDD carries the most value when you really don't know where you are going: you write the first test and you start rolling, and somehow you end up at your destination, and people think you were good at writing code, but actually the code was writing itself, in a way, as it evolved its way to completeness.
      • Double_a_92 587 days ago
        But often you don't know at all how to best solve a problem, since the solution will probably need to touch many existing code units somehow.

        It might work if you are starting on some new, relatively self-contained feature...

        • majikandy 586 days ago
          Since you don’t know how to solve it, you first write a test that goes vaguely in the direction.

          If you can't do that yet, you spike: mess around with it, get your bearings and some insight into where you want to go, try a few things out. Then trash that and write the first test as above.

          It almost always works, and excels somewhat in legacy code vs just changing things.

          It isn't about 'might work'. The TDD paradigm isn't about finding shiny places where you can use it. It is just a nice way of writing clean, maintainable software in almost all areas.

          Places where TDD won’t work are usually by exception rather than the norm.

          Often those exceptions are where something already exists and wasn’t put in by TDD because by the definition of TDD you wouldn’t have been able to write that in the first place.

    • quickthrower2 587 days ago
      You can do TDD if you do something managers hate!

      And that is, write code, chuck it away, start again.

      Prototype your feature without TDD. Then chuck it away and build it again with TDD.

      My guess is that by doing so, the gains in code quality and reduced technical debt pay for more than what is lost in time.

      Very few companies work like this I imagine: None that I have worked for.

      Since keyboard typing is a small part of software development, it is probably a great use of time, and could catch more bugs and design quirks early on, when they cost $200/h instead of $2000/h.

    • SomeCallMeTim 587 days ago
      That's one issue with TDD. I agree 100% in that respect.

      Another partly orthogonal issue is that design is important for some problems, and you don't usually reach a good design by chipping away at a problem in tiny pieces.

      TDD fanatics insist that it works for everything. Do I believe them that it improved the quality of their code? Absolutely; I've seen tons of crap code that would have benefited from any improvement to the design, and forcing it to be testable is one way to coerce better design decisions.

      But it really only forces the first-order design at the lowest level to be decent. It doesn't help at all, or at least not much, with the data architecture or the overall data flow through the application.

      And sometimes the only sane way to achieve a solid result is to sit down and design a clean architecture for the problem you're trying to solve.

      I'm thinking of one solution I came up with for a problem that really wasn't amenable to the "write one test and get a positive result" approach of TDD. I built up a full tree data structure that was linked horizontally to "past" trees in the same hierarchy (each node was linked to its historical equivalent node). This data structure was really, really needed to handle the complex data constraints the client was requesting. And yes, we pushed the client to try to simplify those constraints, but they insisted.

      The absolute spaghetti mess that would have resulted from TDD wouldn't have been possible to refactor into what I came up with. There's just no evolutionary path between points A and B. And after it was implemented and it functioned correctly--they changed the constraints. About a hundred times. I'm not even exaggerating.

      Each new constraint required about 15 minutes of tweaking to the structure I'd created. And yes, I piled on tests to ensure it was working correctly--but the tests were all after the fact, and they weren't micro-unit tests but more of a broad system test that covered far more functionality than you'd normally put in a unit test. Some of the tests even needed to be serialized so that earlier tests could set up complex data and states for the later tests to exercise, which I understand is also a huge No No in TDD, but short of creating 10x as much testing code, much of it being completely redundant, I didn't really have a choice.

      So your point about the design changing as you go is important, but sometimes even the initial design is complex enough that you don't want to just sit down and start coding without thinking about how the whole design should work. And no methodology will magically grant good design sense; that's just something that needs to be learned. There Is No Silver Bullet, after all.

      • ivan_gammel 587 days ago
        > Another partly orthogonal issue is that design is important for some problems, and you don't usually reach a good design by chipping away at a problem in tiny pieces.

        True, but… you can still design the architecture, outlining the solution for the entire problem, and then apply TDD. In this case your architectural solution will be an input for low level design created in TDD.

        • SomeCallMeTim 586 days ago
          You can't always, though.

          I described a situation where TDD really, really, really wouldn't have worked. The whole structure needed to be developed, or at least 80% of it, before it would have made sense to write any tests--and the actual TDD philosophy would be to write "one small test" and only write exactly as much code as required to satisfy the test.

          The sane approach was to create the entire structure based on the design, and then test it after it was complete as an entire system. Some of the micro-functionality that TDD would have had you test would have become technical debt as a change-detector later when the client changed their specific requirements.

          As I said above, there is no evolutionary path from tiny pieces to the full structure, and TDD requires that you follow such an evolutionary path. If you're writing a bunch of tests and then creating a nontrivial amount of code, then you're following test-first, but not really following TDD. And I question even how valuable that is when you don't necessarily understand what would need to be tested before you've finished implementing the system.

          • ivan_gammel 586 days ago
            I disagree with you here. TDD does require an evolutionary path for the entire system, but the minimum unit is a feature that is expected to be fully specified and implemented to pass the first test. You cannot evolve a data structure or an algorithm with TDD, because TDD by the original definition allows only refactoring, not re-engineering (i.e. if your feature is addition, writing "return 4" to pass test2plus2 isn't meaningful TDD). So in your case properly applied TDD would require a fully implemented structure, or at least an atomic part of it within the known design, to pass the first test (e.g. testing the Feistel round function in an encryption algorithm is OK, but you will know the design of the entire algorithm from the start).
    • agumonkey 587 days ago
      I remember early UML courses (based on pre-Java / OO languages). They were all about modules and coupling between them: trying to keep coupling low, and the modules not too rigidly defined. It seems that the spirit behind this (at least the only one that makes sense to me) is that you don't know yet, so you want to avoid coupling hard early, leaving room for low-cost adaptation while you discover how things will be.
      • ThalesX 587 days ago
        Whenever I start a greenfield frontend for someone they think I’m horrible in the first iteration. I tend to use style attributes and just shove CSS in there, and once I have enough things of a certain type I extract a class. They all love the result but distrust the first step.
    • marcosdumay 587 days ago
      At this point I doubt the existence of well defined specs.

      Regulations are always ambiguous, standards are never followed, and widely implemented standards are never implemented the way the document says.

      You will probably still gain productivity by following TDD for those, but your process must not penalize changes in spec too much, because even if it's written in law, what you read is not exactly what you will create.

    • mrjin 587 days ago
      TDD is not really about getting designs right, but about preventing known-good logic from being broken unexpectedly and repeatedly.
    • archibaldJ 587 days ago
      Thus spake the Master Programmer: "When a program is being tested, it is too late to make design changes."

      - The Tao of Programming (1987)

    • jiggawatts 587 days ago
      This is precisely my experience also. I loved TDD when developing a parser for XLSX files to be used in a PowerShell pipeline.

      I created dozens of “edge case” sample spreadsheets with horrible things in them like Bad Strings in every property and field. Think control characters in the tab names, RTL Unicode in the file description, etc…

      I found several bugs… in Excel.

    • randomdata 587 days ago
      TDD isn't concerned with how your program works. In fact, implementation details leaking into your tests can become quite problematic, including introducing the problems you speak of. TDD is concerned with describing what your program should accomplish. If you don't know what you want to accomplish, what are you writing code for?
      • giantrobot 587 days ago
        The issue is that what you want to accomplish is often tightly coupled with how it is accomplished. In order to test the "what", a test needs to contain the context of the "how".

        As a made-up example: the "what" of the program is to take in a bunch of transactions and emit daily summaries. That's a straightforward "what". It however leaves tons of questions unanswered. Where does the data come from, and in what format? Is it ASCII or Unicode? Do we control the source, or is it from a third party? How do we want to emit the summaries? Printed to a text console? Saved to an Excel spreadsheet? What version of Excel? Serialized to XML or JSON? Do we have a spec for that serialized form? What precision do we need to calculate vs what we emit?

        So the real "what" is: take in transaction data encoded as UTF-8 from a third party provider which lives in log files on the file system without inline metadata then translate the weird date format with only minute precision and lacking an explicit time zone and summarize daily stats to four decimal places but round to two decimal places for reporting and emit the summaries as JSON with dates as ISO ordinal dates and values at two decimal places saved to an FTP server we don't control.

        While waiting for all that necessary but often elided detail, you can either start writing some code with unit tests, or wait and do no work until you get a fully fleshed out spec that can serve as the basis for writing tests. Most organizations want to start work even while the final specs of the work are being worked on.

        • randomdata 587 days ago
          > Most organizations want to start work even while the final specs of the work are being worked on.

          Is that significant? Your tests can start to answer these unanswered questions before you ever get around to writing implementation. Suppose you thought you wanted to write data in ASCII format. But then you write some test cases and realize that you actually need Unicode symbols. Now you know what your implementation needs to do.

          Testing is the spec. The exact purpose of testing, which in fairness doesn't have the greatest name, is to provide documentation around what the program does. That it is self-verifying is merely a nice side effect. There is no need for all the questions to be answered while writing the spec (a.k.a. tests). You learn about the answers as you write the documentation. The implementation then naturally follows.
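
          As a made-up example, a spec-style test that surfaces the ASCII-vs-Unicode question before any implementation exists:

              import static org.junit.jupiter.api.Assertions.assertNotEquals;

              import java.nio.charset.StandardCharsets;
              import org.junit.jupiter.api.Test;

              class EncodingSpecTest {
                  // Round-tripping through ASCII mangles "café" into "caf?",
                  // documenting why the program must use a Unicode encoding.
                  @Test
                  void asciiCannotRepresentRequiredSymbols() {
                      String input = "café";
                      String roundTripped = new String(
                              input.getBytes(StandardCharsets.US_ASCII),
                              StandardCharsets.US_ASCII);
                      assertNotEquals(input, roundTripped);
                  }
              }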

          • giantrobot 586 days ago
            > But then you write some test cases and realize that you actually need Unicode symbols. Now you know what your implementation needs to do.

            If your customer decides they want/need ASCII your test is immaterial. The same is true for writing any tests before you've got a meaningful specification. You're just writing code to write code. At that stage it makes more sense to write a scaffold in the general shape of the task than a bunch of tests defining precise but made up requirements.

            Aspirational tests are great if you know exactly what you need to do. They tell you when you get there. If you don't know exactly what you need to do they're just wasted effort that get thrown away with nothing having been gained.

            • randomdata 586 days ago
              > If your customer decides they want/need ASCII your test is immaterial.

              Not at all. First, if your customer doesn't know upfront that ASCII is necessary, showing the customer how the program will work will help them realize that. The lesson gained from your test was useful in getting to that point.

              Second, your failed attempt at an interface that provides UTF-8 (or whatever) during exploration is convertible to a negative case that will fail if ASCII isn't used, documenting the customer's requirement for future developers.

              If the customer originally required UTF-8 and years later a changing landscape forced them to require ASCII instead, again your spec can be negated to help you ensure that you made the correct modifications to meet the new requirements.

              > Aspirational tests are great if you know exactly what you need to do.

              Whereas I would argue that TDD isn't all that useful if you know exactly what you need to do. In that case you can simply write the code along with unit/integration/acceptance tests that validate behaviour.

              TDD is about exploring the unknowns and answering questions – showing the customer how the program will function on the outside – before you waste time implementing all the internal details of a full program around UTF-8 only to be told when you demo it that the customer actually requires ASCII, potentially requiring massive rework. The latter is where you waste effort.

      • wvenable 587 days ago
        > If you don't know what you want to accomplish, what are you writing code for?

        Oftentimes you write code to find out what you want to accomplish. It sounds backwards, and perhaps it is backwards, but it's also very human. Without something to show the user, they often have no idea what they want. In fact, people are far better at telling you what's wrong with what's presented to them than at enumerating everything they want ahead of time.

        TDD is great but also completely useless for sussing requirements out of users.

        • randomdata 586 days ago
          It does not sound backwards at all. That is what TDD is for: To start writing the visible portions of your program to see how it works and adjust accordingly as user needs dictate. Once your program does what the user wants, all you have to do is backfill the "black box" implementation, using test cases created during the discovery phase to ensure that the "black box" provides what the user came to expect. The scenario you present is exactly what TDD was envisioned for.

          If you know the exact requirements upfront, you don't really need TDD. You can simply write the code to those requirements and, perhaps, add in some automated acceptance testing to help catch mistakes. TDD shines when you are unsure and exploring options.

    • gjadi 587 days ago
      Isn't the issue because we are reluctant to remove stuff? In the same vein as other said we should throw away one or two version of a program before shipping it.

      Maybe we need to learn how to delete stuff that doesn't make sense.

      Get rid of broken test. Get rid of incorrect documentation.

      Don't be afraid to delete stuff to improve the overall program.

    • eitally 587 days ago
      I still remember a project (I was the eng director and one of my team leads did this) where my team lead for a new dev project was given a group of near-shore SWEs + offshore SQA who were new to both the language & RDBMS of choice, and also didn't have any business domain experience. He decided that was exactly the time to implement TDD, and he took it upon himself to write 100% test coverage based on the approved specs, and literally just instructed the team to write code to pass the tests. They used daily stand-ups to answer questions, and weekly reviews to assess themes & progress. It was slow going, but it was a luxurious experience for the developers, many of whom were using pair programming at the time and now found themselves on a project where they had a committed & dedicated senior staffer to actively review their work and coach them through the project (and new tools learnings). I had never allowed a project to be run like that before, but it was one where we had a fairly flexible timeline as long as periodic deliverables were achieved, so I used it as a kind of science project to see how something that extreme would fare.

      The result was that 1) the devs were exceptionally happy, 2) the TL was mostly happy, except with some of the extra forced work he created for himself as the bottleneck, 3) the project took longer than expected, and 4) the code was SOOOOO readable but also very inefficient. We realized during the project that forcing unit tests for literally everything was also forcing a breaking up of methods & functions into much smaller discrete pieces than would have been optimal from both performance & extensibility perspectives.

      It wasn't the last TDD project we ran, but we were far more flexible after that.

      I had one other "science project" while managing that team, too. It was one where we decided to create an architect role (it was the hotness at that time) and let them design everything from the beginning, after which the dev team would run with it using their typical agile/sprint methodology. We ended up with spaghetti code of abstraction upon abstraction: factories for all sorts of things, and a codebase that was almost unsupportable from the time it was launched, necessitating that v2.0 be a near-complete rewrite of the business logic and a lot of the data interfaces.

      The lessons I learned from those projects were that it's important to have experienced folks on every dev team, and that creating a general standard that allows for flexibility in specific architectural/technical decisions will result in higher quality software, faster, than being too prescriptive (either in process or in architecture/design patterns). I also learned that there's no such thing as too much SQA, but that's a different story.

    • hgomersall 587 days ago
      Since I've moved to full time rust I'm finding it much harder to precede the code with tests (ignoring for a moment the maximalist/minimalist discussion). I think it's because the abstractions can be so powerful that the development process is iterating over high-level abstractions. The bit I worry about testing is the business logic, but in my experience that is not something you can test with a trivial unit test, and that test tends to iterate with the design to some extent. Essentially I end up with a series of behavioural tests and an implementation that, as far as possible, can't take inputs that can be mishandled (through e.g. the newtype pattern, static constraints, etc. - see the sketch below).

      I'm not quite sure what is right or wrong about my approach, but I do find the code tends to work and work reliably once it compiles and the tests pass.
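
      For illustration, a minimal sketch of the newtype idea in Rust (the types and checks here are invented, not from my actual code):

        // A newtype that can only be constructed with a valid value,
        // so downstream code never has to handle an out-of-range input.
        pub struct Percentage(f64);

        impl Percentage {
            pub fn new(value: f64) -> Result<Self, String> {
                if (0.0..=100.0).contains(&value) {
                    Ok(Percentage(value))
                } else {
                    Err(format!("{value} is not a valid percentage"))
                }
            }
        }

        // The business logic takes a Percentage, not a raw f64, so a
        // whole class of "mishandled input" tests becomes unnecessary.
        pub fn apply_discount(price: f64, discount: &Percentage) -> f64 {
            price * (1.0 - discount.0 / 100.0)
        }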

    • lupire 587 days ago
      It's Test Driven Development, not Test Driven Research.

      Very few critics notice this.

      • anonymoushn 587 days ago
        Maybe you disagree with GP about whether one should do all their research up front, without actually learning about the problem by running code?
    • wodenokoto 587 days ago
      This rings very true for me.

      I write TDD when doing Advent of Code. And it’s not that I set out to do it or to practice it or anything. It just comes very naturally to small, well defined problems.

    • smrtinsert 587 days ago
      I don't see how you can develop anything without at least technical clarity on what the components of your system should do.
    • AtlasBarfed 587 days ago
      Yeah, TDD has way too much "blame the dev" for the usual cavalcade of organizational software process failures.
    • fsdghrth3 587 days ago
      > This ultimately means, what most programmers intuitively know, that it's impossible to write adequate test coverage up front

      Nobody out there is writing all their tests up front.

      TDD is an iterative process, RED GREEN REFACTOR.

      - You write one test.

      - Write JUST enough code to make it pass.

      - Refactor while maintaining green.

      - Write a new test.

      - Repeat.
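
      To make that concrete, a minimal sketch of one pass through the loop in Rust (the function and the test case are invented):

        // RED: written first; it fails because trim_start_ws doesn't exist yet.
        #[test]
        fn leading_whitespace_is_stripped() {
            assert_eq!(trim_start_ws("  hello"), "hello");
        }

        // GREEN: just enough code to make the test pass.
        fn trim_start_ws(s: &str) -> &str {
            s.trim_start()
        }

        // REFACTOR: tidy up while staying green, then write the next
        // failing test (tabs? all-whitespace input?) and repeat.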

      I don't want this to come off the wrong way but what you're describing shows you are severely misinformed about what TDD actually is or you're just making assumptions about something based on its name and nothing else.

      • Supermancho 587 days ago
        Writing N or 1 tests N times, depending on how many times I have to rewrite the "unit" for some soft idea of completeness. After the red/green for the 1 case, it necessarily has to expand to N cases as the unit is rewritten to handle the additional cases imagined (boundary conditions, incorrect inputs, exceptions, etc). Then I see that I could have made optimizations in the method, so I rewrite it again and leverage the existing red/green.

        Everyone understands the idea, it's just a massive time sink for no more benefit than a test-after methodology provides.

        • fsdghrth3 587 days ago
          See my other comment below. I don't recommend doing it all the time, specifically because with experience you can often skip a lot of the RGR loop.

          > Everyone understands the idea, it's just a massive time sink for no more benefit than a test-after methodology provides.

          This is not something I agree with. In my experience, when TDD is used you come up with solutions to problems that are better than what you'd come up with otherwise and it generally takes much less time overall.

          Writing tests after ensures your code is testable. Writing your tests first ensures you only have to write your code once to get it under test.

          Again, you don't always need TDD and applying it when you don't need it will likely be a net time sink with little benefit.

      • gjulianm 587 days ago
        > - You write one test.

        > - Write JUST enough code to make it pass.

        Those two steps aren't really trivial. Even just writing the single test might require making a lot of design decisions that you can't really make up-front without the code.

        • User23 587 days ago
          This acts as a forcing function for the software design. That TDD requires you to think about properly separating concerns via decomposition is a feature, not a bug. In my experience the architectural consequences are of greater value than the test coverage.

          Sadly TDD is right up there with REST in being almost universally misunderstood.

          • bluefirebrand 587 days ago
            > Sadly TDD is right up there with REST in being almost universally misunderstood.

            That's a flaw in TDD and REST, not in the universe.

        • 0x457 587 days ago
          The first test could be as simple as a method signature check. Yes, you still have to make a design decision here, but you have to make it either way.
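
          For instance, a first test that does little more than pin down the signature might look like this in Rust (everything here is hypothetical):

            // A stub is enough to make the test compile; the red phase is
            // the moment before even this exists.
            fn parse_count(s: &str) -> Result<u32, String> {
                s.parse::<u32>().map_err(|e| e.to_string())
            }

            #[test]
            fn parse_count_takes_a_str_and_returns_a_result() {
                // The assertion is almost beside the point; the test exists
                // to force a decision about the shape of the API.
                let out: Result<u32, String> = parse_count("42");
                assert!(out.is_ok());
            }
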
          • giantrobot 587 days ago
            Then you need to keep the test and signature in lock step. Your method signature is likely to change as the code evolves. I'm not arguing against tests but requiring them too early generates a lot of extra work.
          • lucumo 587 days ago
            Interesting. The method signature is usually the last thing I create.
      • yibg 587 days ago
        The first test is never the problem. The problem, as OP pointed out, is that after iterating a few times you realize you went down the wrong track or the requirements have changed / been clarified. Now a lot of the tests you iterated through aren't relevant anymore.
        • majikandy 587 days ago
          Is it possible that it was those tests and code in the TDD cycle that helped you realise you’d gone down the wrong path?

          And if not, perhaps there was a preconceived idea of the code block and what it was going to do, rather than specifying the wanted behaviour via the RGR cycle. With a preconceived idea, with or without the tests, if that idea is wrong you’ll hit the dead end and have to backtrack. Fortunately, even though I do sometimes find myself in this situation, quite often those tests can be repurposed fairly quickly rather than being chucked away; after all, the tests are still software, not hardware.

      • happytoexplain 587 days ago
        In my admittedly not-vast experience, when a pattern goes bad because the implementer doesn't understand it, it is actually the implementer's fault only a minority of the time, and the fault of the pattern the majority of the time. This is because a pattern making sense to an implementer requires work from both sides, and which side is slacking can vary. The people who get it and like it sometimes purposefully overlook this pragmatic issue, because "you're doing it wrong" seems like a silver bullet against critiques.
      • _gabe_ 587 days ago
        Reiterating the same point in screaming case doesn't bolster your argument. It feels like the internet equivalent of a real-life debate where a debater thinks saying the same thing LOUDER makes a better argument.

        > - You write one test

        Easier said than done. Say your task is to create a low level audio mixer which is something you've never done before. Where do you even begin? That's the hard part.

        Some other commenters here have pointed out that exploratory code is different from TDD code, which is a much better argument than what you made here imo.

        > I don't want this to come off the wrong way but what you're describing shows you are severely misinformed about what TDD actually is or you're just making assumptions about something based on its name and nothing else.

        Instead of questioning the OP's qualifications, perhaps you should hold a slightly less dogmatic opinion. Perhaps OP is familiar with this style of development and has run into these problems firsthand when trying to write tests for an unknown problem domain.

        • rileymat2 587 days ago
          > Some other commenters here have pointed out that exploratory code is different from TDD code, which is a much better argument then what you made here imo.

          I find that iterating on tests in exploratory code makes for an excellent driver to exercise the exploration. I don’t see the conflict between the two, except that I am not writing test cases to show correctness; I am writing them to learn, to play with the inputs and outputs quickly.

        • nfhshy68 587 days ago
          I don't think GP was questioning their qualifications. It's exceedingly clear from OP's remarks that they don't know what TDD is and haven't even read the article, because it covers all this. In detail.
      • DoubleGlazing 587 days ago
        In my experience the write a new test bit is where it all falls down. It's too easy to skimp out on that when there are deadlines to hit or you are short staffed.

        I've seen loads of examples where the tests haven't been updated in years to take account of new functionality. When that happens you aren't really doing TDD anymore.

        • yaccz 587 days ago
          That's an issue of bad engineering culture, not TDD.
        • majikandy 587 days ago
          That also means they weren’t being run. So you aren’t even doing tests, let alone TDD.
      • unrealhoang 587 days ago
        How do you write that one test without the iterative design process? That's something always missing from the TDD guides.
        • apalumbi 587 days ago
          TDD is not a testing process. It is a design process. The tests are a secondary and beneficial artifact of the well designed software that comes from writing a test first.
          • fsdghrth3 587 days ago
            > TDD is not a testing process. It is a design process.

            The article actually discusses whether this is accurate or not. TDD started out as a testing process but got adopted for its design consequences which is why there is a lot of confusion.

              Naming it test-driven design would have gone a long way to help things and would also have resulted in less cargo culting ("have to TDD all day or you don't do TDD").

  • sedachv 587 days ago
    TDD use would be a lot different if people actually bothered to read the entirety of Kent Beck's _Test Driven Development: By Example_. It's a lot to ask, because it is such a terribly written book, but there is one particular sentence where Beck gives it away:

    > This has happened to me several times while writing this book. I would get the code a bit twisted. “But I have to finish the book. The children are starving, and the bill collectors are pounding on the door.”

    Instead of realizing that Kent Beck stretched out an article-sized idea into an entire book, because he makes his money writing vague books on vague "methodology" that are really advertising brochures for his corporate training seminars, people actually took the thing seriously and legitimately believed that you (yes, you) should write all code that way.

    So a technique that is sometimes useful for refactoring and sometimes useful for writing new code got cargo-culted into a no-exceptions-this-is-how-you-must-do-all-your-work Law by people that don't really understand what they are doing anymore or why. Don't let the TDD zealots ruin TDD.

    • evouga 587 days ago
      This seems to be the case with a lot of "methodologies" like TDD, Agile, XP, etc. as well as "XXX considered harmful"-style proscriptions.

      A simple idea ("hey, I was facing a tricky problem and this new way of approaching it worked for me. Maybe it will help you too?") mutates into a blanket law ("this is the only way to solve all the problems") and then pointy-haired folks notice the trend and enshrine it into corporate policy.

      But Fred Brooks was right: there are no silver bullets. Do what works best for you/your team.

      • bitwize 587 days ago
        The 2000s design-patterns mania is another case. Design patterns should be thought of less as things you have to memorize and apply in a textbook fashion, and more like tropes: things you'll see over and over in code, and once you know their names you can start talking about them and their interactions in meaningful ways. And just as writers like tropes because they make the job of writing easier, yet overusing them is a sign of laziness, so it is with design patterns.
      • cpill 587 days ago
        yeah, I find software engineers like to seek absolute answers to fuzzy problems. I guess it's the nature of the job
    • joshka 587 days ago
      The fun thing about this book (which I haven't read in its entirety) is that it really shuts down a lot of the maximalist ideas in a few places (here's one particular section).

        There are really two questions lurking here: 
        How much ground should each test cover?
        How many intermediate stages should you go through as you refactor?
        You could write the tests so they each encouraged the addition of a single line of logic and a handful of refactorings. You could write the tests so they each encouraged the addition of hundreds of lines of logic and hours of refactoring. Which should you do?
        Part of the answer is that you should be able to do either. The tendency of Test-Driven Developers over time is clear, though - smaller steps. However, folks are experimenting with driving development from application-level tests, either alone or in conjunction with the programmer-level tests we've been writing.
      • viceroyalbean 587 days ago
        Indeed. I read the book in hopes of getting a good intro to TDD after only picking it up by osmosis (which, as proven by the discussions here, is not a good way to learn TDD), and it definitely goes against the maximalist interpretation as described in TFA. While there are examples showing the minimal-code approach, he is very explicit about the fact that you don't have to write your code that way.

        One thing I liked specifically was his emphasis on the idea that you can use TDD to adjust the size of your steps to match the complexity of the code. Very complex? Small steps with many tests, maybe using the minimal-code approach to get things going. Simple/trivial? A single test and the solution immediately, with no awkward step in between.

    • loevborg 587 days ago
      You have got to be kidding. Beck's books - both TDD: By Example and Extreme Programming Explained - are very well written and have about the highest signal/noise ratio of any programming book.
      • sedachv 587 days ago
        _Test Driven Development: By Example_ certainly had the highest ratio of dumb unnecessary jokes to contrived unconvincing examples of any programming book I have read. My copy of TAOCP volume 3 doesn't even begin to compare. Clearly Knuth was doing something wrong.
    • yomkippur 587 days ago
      > > This has happened to me several times while writing this book. I would get the code a bit twisted. “But I have to finish the book. The children are starving, and the bill collectors are pounding on the door.”

      I wonder how many methodologies and books are written with the same banal driver. Writing is somebody's livelihood, and nobody pays writers to stop in the middle of a book because they realize the idea is flawed.

      I once found a book on triangular currency arbitrage or something like that at my library. It was 4000 pages long and the book was heavy. It rambled on in language that made it difficult to follow and was filled to the brim with mathematical notation, which really offered no value because the book was written in the 70s and no longer offered any executable knowledge. But finance schools swear by it, and speaking out would trigger a lot of people.

      TDD is a cult. Science is also a cult in that manner: it rejects the existence of what it cannot measure, and it gangs up on those that go against it.

  • tippytippytango 587 days ago
    The main reason TDD hasn't caught on is that there's no evidence it makes a big difference in the grand scheme of things. You can't operationalize it at scale, either. There is no metric or objective test you can run code through that gives you a number in [0, 1] telling you the TDDness of the code. So if you decide to use TDD in your business, you can't tell the degree of compliance with the initiative, or its correlation with any business metrics you care about. The customers can't tell if the product was developed with TDD.

    Short of looking over every developer's shoulder, how do you actually know the extent to which TDD is being practiced as prescribed (red, green, refactor)? Code review? How do you validate your code reviewers' ability to identify TDD code? What if someone submits working, tested code, but you smell that it's not TDD - what then? Tell them to pretend they didn't write it and start over with the correct process? At what part of the development process do you start to practice it? Do you make the R&D people do it? Do you make the prototypers do it? What if the prototype got shipped into production?

    Because of all this, even if the programmers really do write good TDD code, the business people still can't trust you; they still have to QA test all your stuff. Because they can't measure TDD, they have no idea when you are doing it. Maybe you did TDD for the last release but are starting to slip? Who knows; just QA the product anyway.

    I like his characterization of TDD as a technique. That's exactly what it is, a tool you use when the situation calls for it. It's a fantastic technique when you need it.

    • mehagar 587 days ago
      You make a good point about not being able to enforce that TDD is actually followed. The best we could do is check that unit tests exist at all.

      In theory, if TDD really reduces the number of bugs and speeds up development, you would see it reflected in those higher level metrics that impact the customer.

      • agloeregrets 587 days ago
        > In theory, if TDD really reduces the number of bugs and speeds up development, you would see it reflected in those higher level metrics that impact the customer.

        The issue is that many TDD diehards believe that bugs and delays are made by coders who did not properly qualify their code before they wrote it.

        In reality, bugs and delays are a product of an organization. Bad coders can write bad tests that pass bad code just fine. Overly short deadlines will cause poor tests. Furthermore, many coders report that they have trouble with the task-switching nature of TDD. To write a complex function, I will probably break it out into a bunch of smaller pure functions. In TDD that may require you to either: 1. Write a larger function that passes the test and break it down. 2. Write a test that validates that the larger function calls other functions, and then write tests that define each smaller function.

        The problem with these flows is that 1 causes rework, and 2 ends up being like reading a book out of order: you may get to function 3 and realize that function 2 needed additional data, and now you have to rewrite your test for 2. Once again, rework. I'm sure there are some gains in some spaces, but overall it seems that the rework burns those gains off.

        • UK-Al05 587 days ago
          You shouldn't test those smaller functions. They're internal details. They should be private.
          • iratewizard 587 days ago
            You also shouldn't test business logic. Your test code is more likely to be a liability than an asset when it isn't testing your codebase's core infrastructure.
            • klysm 587 days ago
              I totally agree with this. In practice, I see much more value in tests that fully utilize your dependencies. The hard part is tying all the shit together and not getting weird stuff on the boundaries between systems. We have the tools to make such testing reproducible, but they’re underutilized.

              I want my tests to give me confidence. Unit tests don’t do nearly as good of a job as something that fully utilizes infra.

      • tippytippytango 584 days ago
        Exactly. If it made a big difference to profitability, it would be evident in the marketplace: TDD shops would out-compete the ones that don’t use it. This doesn’t seem to happen. What that means is that if TDD is a benefit, it is such a small one that other factors in the business eclipse its impact.
    • samatman 587 days ago
      One can enforce the use of TDD through pair programming with rotation, as Pivotal does.

      I don't know that Pivotal (in particular) does pair programming so that TDD is followed, but I do know that they (did) follow TDD and do everything via pair programming. I'm agnostic as to whether it's a good idea generally; it's not how I want to live, but I've had a few associates who really liked it.

      • klysm 587 days ago
        Wow that sounds absolutely awful. A lot of the work I do is thinking long and hard about what I want my API to look like. It’s an iterative process and I want to be able to throw shit out a lot.
    • cpill 587 days ago
      isn't that what txt coverage is about?
      • tippytippytango 584 days ago
        Did you mean test coverage? Test coverage tells you the code was tested, but it doesn’t tell you if the programmer used TDD to write the tests.
  • reggieband 587 days ago
    I could write an entire blog post on my opinions on this topic. I continue to be extremely skeptical of TDD. It is sort of infamous by now: there is the incident where a TDD proponent tries and fails to develop a sudoku solver and keeps failing at it [1].

    This kind of situation matches my experience. It was cemented when I worked with a guy who was a zealot about TDD and the whole Clean Code cabal around Uncle Bob. He was also one of the worst programmers I have worked with.

    I don't mean to say that whole mindset is necessarily bad. I just found that becoming obsessed with it isn't sufficient. I've worked with guys who have never written a single test yet ship code that does the job, meets performance specs, and runs in production environments with no issues. And I've worked with guys who get on their high horse about TDD but can't ship code on time, or it is too slow, and it has constant issues in production.

    No amount of rationalizing about the theoretical benefits can match my experience. I do not believe you can take a bad programmer and make them good by forcing them to adhere to TDD.

    1. https://news.ycombinator.com/item?id=3033446

    • commandlinefan 587 days ago
      > tries and fails to develop a sudoku solver and keeps failing at it

      But that's because he deliberately does it in a stupid way to make TDD look bad, just like the linked article does with its "quicksort test". That's beside the point, though - of course a stupid person would write a stupid test, but that same stupid person would write a stupid implementation, too... and at least there would be a test for it.

    • laserlight 587 days ago
      Top-most comment to the link you provided pretty much explains the situation. TDD is a software development method, not a generic problem solving method. If one doesn’t know how a Sudoku solver works, applying TDD or any other software development method won’t help.
      • sidlls 587 days ago
        One of the theses of TDD is that the tests guide the design and implementation of an under specified (e.g. unknown) problem, given the requirements regarding the outcomes and a complete enough set of test cases. “Theoretically” one should be able to develop a correct solver without knowing how it works by iterative improvements using TDD. It might not be of good quality, but it should work.

        Note: I am quite skeptical of TDD in general.

        • nightski 586 days ago
          I don't really use TDD, but I've never heard that TDD would help guide the implementation. I always understood it was about designing a clean interface to the code under test, this being a result of the fact that you design the interface based on actual use cases first, since the test needs to call into the code under test. It helps avoid theoretical what-ifs and focus on concrete, simple design.

          Personally I think that one can learn this design methodology without TDD. I find learning functional programming (say, Haskell/OCaml/SML/etc.) far more beneficial to better design here than TDD.

          • sidlls 585 days ago
            It’s both.

            In theory TDD drives the interface by ensuring the units under test do what they’re intended (implementation), and that each and every unit is “testable” (interface).

            TDD doesn’t really care about “clean” interfaces, only that units of work (functions, methods) are “testable”.

            I’d argue this actually creates friction for designing clean interfaces, because in order to satisfy the “testability” requirement one is often forced to make poor (in terms of readability, maintainability, and efficiency) design choices.

    • mikkergp 587 days ago
      >I've worked with guys who have never written a single test yet ship code that does the job, meets performance specs, and runs in production environments with no issues.

      I'd like to unpack this a bit. I'm curious what tools people use other than programmatic testing, which seems to be the most efficient, especially for a programmer. I'm also maybe a bit stuck on the binary nature of your statement. You know developers who've never let a bug or performance issue enter production (with or without testing)?

      • reggieband 587 days ago
        When I started out in the gaming industry in the early 2000s, there were close to zero code tests written by developers at the studios I worked for. However, there were large departments of QA, probably in the ratio of 3 testers per developer. There was also an experimental Test Engineer group at one of the companies that did automated testing, but it was closer to automating QA (e.g. test rigs to simulate user input for fuzzing).

        The most careful programmers I worked with were obsessive about running their code step by step. One guy I recall put a breakpoint after every single curly brace (C++ code) and ensured he tested every single path in his debugger line by line for a range of expected inputs. At each step he examined the relevant contents of memory and often the generated assembly. It is a slow and methodical approach that I could never keep the patience for. When I asked him about automating this (unit testing I suppose) he told me that understanding the code by manually inspecting it was the benefit to him. Rather than assuming what the code would (or should) do, he manually verified all of his assumptions.

        One apocryphal story was from the PS1 days, before technical documentation for the device was available. Legend had it that an intrepid young man brought in an oscilloscope to debug and fix an issue.

        I did not say that I know any developers who've never let a bug or performance issue enter production. I'm contrasting two extremes among the developers I have worked with for effect. Well written programs and well unit tested programs are orthogonal concepts. You can have one, the other, both or neither. Some people, often in my experience TDD zealots, confuse well unit tested programs with well written programs. If I could have both, I would, but if I could only have one then I'll take the well-written one.

        Also, since it probably isn't clear, I am not against unit testing. I am a huge proponent for them, advocating for their introduction alongside code coverage metrics and appropriate PR checks to ensure compliance. I also strongly push for integration testing and load testing when appropriate. But I do not recommend strict TDD, the kind where you do not write a line of code until you first write a failing test. I do not recommend use of this process to drive technical design decisions.

      • Chris_Newton 587 days ago
        > You know developers who've never let a bug or performance issue enter production (with or without testing)?

        One of the first jobs I ever had was working in the engineering department of a mobile radio company. They made the kind of equipment you’d install in delivery trucks and taxis, so fleet drivers could stay in touch with their base in the days before modern mobile phone technology existed.

        Before being deployed on the production network, every new software release for each level in the hierarchy of Big Equipment was tested in a lab environment with its own very expensive installation of Big Equipment exactly like the stations deployed across the country. Members of the engineering team would make literally every type of call possible using literally every combination of sending and receiving radio authorised for use on the network and if necessary manually examine all kinds of diagnostics and logs at each stage in the hardware chain to verify that the call was proceeding as expected.

        It took months to approve a single software release. If any critical faults were found during testing, game over, and round we go again after those faults were fixed.

        Failures in that software were, as you can imagine, rather rare. Nothing endears you to a whole engineering team like telling them they need to repeat the last three weeks of tedious manual testing because you screwed up and let a bug through. Nothing endears you to customers like deploying a software update to their local base station that renders every radio within an N mile radius useless. And nothing endears you to an operations team like paging many of them at 2am to come into the office, collect the new software, and go drive halfway across the country in a 1990s era 4x4 in the middle of the night to install that software by hand on every base station in a county.

        Automated software testing of the kind we often use today was unheard of in those days, but even if it had been widely used, it still wouldn’t have been an acceptable substitute for the comprehensive manual testing prior to going into production. As for how the developers managed to have so few bugs that even reached the comprehensive testing phase, the answer I was given at the time was very simple: the code was extremely systematic in design, extremely heavily instrumented, and subject to frequent peer reviews and walkthroughs/simulations throughout development so that any deviations were caught quickly. Development was of course much slower than it would be with today’s methods, but it was so much more reliable in my experience that the two alternatives are barely on the same scale.

    • wglb 587 days ago
      I think this whole failed puzzle indicates that there are some problems that cannot be solved incrementally.

      Peter Norvig's solution has one central precept that is not something that you would arrive at by an incremental approach.

      But I wonder if this incrementalism is essential for TDD.

  • danpalmer 587 days ago
    As with almost every spectrum of opinions, the strongest opinions are typically the least practical, and useful only in a theoretical sense and for evolving the conversation in new directions.

    I think TDD has a lot to offer, but don't go in for the purist approach. I like Free Software but don't agree with Stallman. It's the same thing.

    The author takes a well reasoned, mature, productive, engineering focused approach, like the majority of people should be doing. We shouldn't be applying the pure views directly, we should be informed by them and figure out what we can learn for our own work.

    • discreteevent 587 days ago
      This was the funny thing about extreme programming. I remember reading the book when it came out. In it Kent Beck more or less said that he came up with the idea because waterfall was so entrenched that he thought the only way to move the dial back to something more incremental was to go to the other extreme.

      This took off like wildfire probably for the same reason that we see extreme social movements/politics take off. People love purity because it's so clean and tidy. Nice easy answers. If I write a test for everything something good will emerge. No need for judgement and hand wringing.

      But the thing is that I think Kent Beck got caught up in this himself and forgot the original intention. I could be wrong but it seems like that.

      • ad404b8a372f2b9 587 days ago
        Increasingly I've been wondering whether these agile approaches might be a detriment to most open source projects.

        There is a massive pool of talented and motivated programmers that could contribute to open source projects, much more massive than any company's engineering dept, yet most projects follow a power law where a few contributors write all the code.

        I think eschewing processes and documentation in favour of pure programming centered development, where tests & code serve as documentation and design tools, means the barrier to entry is much higher, and onboarding new members is bottlenecked by their ability to talk with the few main contributors.

        The most successful open source projects have a clear established process for contributing and a lot of documentation. But the majority don't have anything like that, and that's only exacerbated by git hosting platforms that put all their emphasis on code over process. I wonder whether setting up new tools around git allowing for all projects to follow the waterfall or a V-cycle might improve the contribution inequality.

    • totetsu 587 days ago
      But we need to use FDD to use the full spectrum of options.
  • Joker_vD 587 days ago
    The fact that some people really argue that TDD produces better designs... sigh. Here, look at this [0] implementation of Dijkstra's algorithm, written by Uncle Bob himself. If you think that is well designed (have you ever seen weighted graphs represented like this?) then, well, I guess nothing will ever sway your opinion on TDD. And mind you, this is a task that does have what a top comment in this very thread calls a "well defined spec".

    [0] https://blog.cleancoder.com/uncle-bob/2016/10/26/DijkstrasAl...

    • codeflo 587 days ago
      What the actual fuck… I only got two pages down and already found several red flags that I would never accept in any code review. Not the least of which is that when querying an edgeless graph for the shortest path from node A to node Z, “the empty path of length 0” is the exact opposite of a correct answer.

      So thanks for the link, I guess. I’ll keep this as ammunition for the next time someone quotes Uncle Bob.

      • sushisource 587 days ago
        Damn, indeed. The Uncle Bob people (or, really, any "this book/blog post/whatever says to do technique x" people) are my absolute least favorite. This is a good riposte. Or, alternatively, if they don't understand why it's bad then you know they're a shit coder.
    • jonstewart 587 days ago
      In my personal experience, TDD helps me produce better designs. But thinking also helps me produce better designs. There's a lot of documentation that Creepy Uncle Bob isn't the most thoughtful person, and I think this blog post says much more about him than about TDD.

      The code is definitely a horror show.

    • rmetzler 587 days ago
      Can you link to an implementation you would consider great?

      I would just like to compare them. I too find Uncle Bob's “clean code” book very much overrated.

      My understanding of the “design” aspect of TDD is that you start from client code and create the code that conforms to your tests. Too often I have worked on a team where I wanted to use what another developer wrote; they had coded what was part of the spec, but it was unusable from my code. Only because I was able to change their code (most often the public API) was I able to use it.

      • whimsicalism 587 days ago
        It stores the graph as a collection of edges? Why not use an adjacency-list representation?

        You iterate through all of the edges every time to find a node's neighbors?

        idk, this code just looks terrible to me.

        • sdevonoes 587 days ago
          But TDD (the main topic being discussed here) has nothing to do with that, right? I mean, how on earth is TDD going to help you decide between a) using a simple data structure like a collection and b) a more sophisticated data structure like the adjacency list, if you have no idea what an adjacency list is?
          • whimsicalism 587 days ago
            Yeah I was only commenting on what was being discussed in this particular subthread about whether this was good code/design.
        • rmetzler 587 days ago
          But now you have the tests to be able to refactor the implementation and improve it.

          I’ve been in too many projects where devs almost never write tests. They cut corners by writing neither tests nor documentation because of time pressure. Then the code breaks in production on simple edge cases like a NullPointerException and they need to fix it, so they don’t have time to write unit tests for the next feature. And it’s definitely harder to write tests after you’ve implemented something.

  • JonChesterfield 587 days ago
    There's some absolute nonsense in the TDD style. Exposing internal details for testing is recommended, and that's bad for non-test users of the interface. Only testing through the interface (kind of the same as above) means tests contort to hit the edge cases or miss them entirely.

    The whole interface hazard evaporates if you write the tests in the same scope as the implementation, so the tests can access internals directly without changing the interface. E.g. put them in the same translation unit for C++. Have separate source files only containing API tests as well if you like. Weird that's so unpopular.

    There's also a strong synergy with design by contract, especially for data structures. Put (expensive) pre/post and invariants on the methods, then hit the edge cases from unit tests, and fuzz the thing for good measure. You get exactly the public API you want plus great assurance that the structure works, provided you don't change semantics when disabling the contract checks.
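
    In Rust the same-scope trick is close to idiomatic; a rough sketch of the combination (the structure and the checks are invented):

      pub struct Stack { items: Vec<i32> }

      impl Stack {
          pub fn new() -> Self { Stack { items: Vec::new() } }

          pub fn push(&mut self, v: i32) {
              let old_len = self.items.len();
              self.items.push(v);
              // Postcondition check; compiled out of release builds.
              debug_assert_eq!(self.items.len(), old_len + 1);
          }

          pub fn pop(&mut self) -> Option<i32> {
              self.items.pop()
          }
      }

      // Tests live in the same scope as the implementation, so they can
      // reach internals ("items") without widening the public API.
      #[cfg(test)]
      mod tests {
          use super::*;

          #[test]
          fn pop_on_empty_returns_none() {
              assert_eq!(Stack::new().pop(), None);
          }

          #[test]
          fn push_then_pop_round_trips() {
              let mut s = Stack::new();
              s.push(7);
              assert_eq!(s.pop(), Some(7));
              assert!(s.items.is_empty()); // direct access to internals
          }
      }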

    • rmetzler 587 days ago
      It’s similar in Java, where people often only know about public and private, and forget about package-scoped functions. You can use those to test utility functions etc.

      The post is weird: I agreed with almost everything in the first half and disagreed with most of the second part.

      What makes TDD hard for integration testing is that there are no simple ready-made tools similar to xUnit frameworks; people need to build their own tools and make them fast.

  • marginalia_nu 587 days ago
    I've always sort of thought of TDD as a bit of a software development methodology cryptid. At best you get shaky camcorder footage (although on closer investigation it sure looks like Uncle Bob in a gorilla suit).

    Lots of shops claim to do TDD, but in practice what they mean is that they sometimes write unit tests. I've literally never encountered it outside of toy examples and small academic exercises.

    Where is the software successfully developed according to TDD principles? Surely a superior method of software development should produce abundant examples of superior software? TDD has been around for a pretty long time.

    • gnulinux 587 days ago
      In my current company, I'm practicing TDD (not religiously, in a reasonable way). What this means for us (for me, my coworkers and my manager):

      1. No bug is ever fixed before we have at least one failing test. The test needs to fail, and then turn green after the bugfix (see the sketch below). [1]

      2. No new code ever committed without a test specifically testing the behavior expected from the new code. Test needs to fail, and then turn green after the new code.

      3. If we're writing a brand new service/product/program etc., we first create a spec in human language, then turn the spec into tests. This doesn't mean, formally speaking, "write tests first, code later", because we write tests and code at the same time. It's just that everything in the spec has to have an accompanying test, and every behavior in the code needs to have a test. This is checked informally.

      As they say, unit tests are also code, and all code has bugs. In particular, tests have bugs too. So this framework is not bullet-proof either, but I've personally been enjoying working in this flow.

      [1] The only exception is if there is a serious prod incident. Then we fix the bug first. When this happens, I, personally, remove the fix, make sure a test fails, then add the fix back.
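
      As an (invented) illustration of rule 1, the commit that fixes a bug carries a test like this, confirmed to fail against the unfixed code first:

        // Hypothetical bug: page_count() used `items / per_page` and
        // silently dropped the final partial page.
        fn page_count(items: usize, per_page: usize) -> usize {
            items.div_ceil(per_page) // the fix
        }

        #[test]
        fn partial_last_page_is_counted() {
            // Written first; red against the old `items / per_page`
            // implementation, green after the fix lands.
            assert_eq!(page_count(11, 5), 3);
        }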

      • int_19h 587 days ago
        Of all your tests, what is the proportion of tests that test exceptional code paths vs regular flow?
    • fsdghrth3 587 days ago
      I use TDD as a tool. I find it quite heavy handed for maintenance of legacy code where I basically know the solution to the task up front. I can either just rely on having enough existing coverage or create one test for my change and fix it all in one step.

      The times I actually use TDD are basically limited to really tricky problems I don't know how to solve or break down or when I have a problem with some rough ideas for domain boundaries but I don't quite know where I should draw the lines around things. TDD pulls these out of thin air like magic and they consistently take less time to reach than if I just sit there and think about it for a week by trying different approaches out.

    • fiddlerwoaroof 587 days ago
      I’ve worked at a place where we did TDD quite a bit. What I discovered was that the important part was knowing what makes code easy to test, not the actual TDD methodology.
    • twic 587 days ago
      I've worked at three companies that did TDD rigorously. It absolutely does exist.
      • klysm 587 days ago
        Was it worth it? In what languages?
        • twic 587 days ago
          I thought it was great. I found that TDD forced me to think through the functionality I was about to add before writing the code: base cases, corner cases, what the API should look like, etc. It was then good at making sure I actually did it properly, and reasonably useful for making sure nobody broke it later, but not cast-iron.

          TDD sometimes doesn't feel as fast as just smashing out code, but I honestly think it produces good-quality code at a faster and more consistent rate.

          It was almost all in Java, with a bit of JavaScript. Some people at one company did Ruby, and did TDD in that, but I never did.

  • elboru 587 days ago
    One of the biggest issues with our industry is the ambiguity in our definitions. The author mentions “unit tests” as if it were a well defined term. But some people understand “unit” as a class, others understand it as a module, others as a behavior. Some TDDers write unit tests that would be considered “integration tests” by other developers.

    Then we have TDD itself, there are at least two different schools of TDD. What the author calls “maximal TDD” sounds like the mockist school to me. Would his criticism also apply to the classical school? I’m sincerely curious.

    If we don’t have a common ground, communication becomes really difficult. Discussion and criticism becomes unfruitful.

  • ImPleadThe5th 587 days ago
    My personal mentality about TDD is that it is an unreachable ideal. Striving for it puts you on a good path, but business logic is rarely so straightforward.

    If you are lucky enough to be writing code in a way where each unit is absolutely clear before you start working, awesome, you've got it. But in business-logic-land things rarely end up this clean.

    Personally, I program the happy path then write tests and use them to help uncover edge cases.

    • radus 587 days ago
      > I program the happy path then write tests and use them to help uncover edge cases.

      This approach resonates with me as well. I would add that writing tests when investigating bugs or deviations from expected behavior is also useful.

  • zwieback 587 days ago
    A lot of the software engineering approaches from that era (refactoring, TDD, patterns) make more sense in the world I grew up in: large pre-compiled code bases where everything other than the base OS layer is under the engineer's control. If you have to ship your SW as an installable that will end up on someone's machine far away, your mindset will be more defensive.

    In this day and age of vastly distributed systems, where distribution and re-distribution is relatively cheap, we can afford to be a little less obsessive. Many exceptions still exist, of course; I would think that the teams developing my car's control system might warm up to TDD a bit more than someone putting together a quickie web app.

    • buscoquadnary 587 days ago
      I think you make an important point. It used to be that I'd have to worry about the OS layer, and that was it. Now I have half a dozen layers running between my code and the actual die executing instructions, and as a consequence I've lost a considerable amount of control.

      The funny thing is I end up spending just as much time trying to debug or figure out the other layers (looking at you, AWS IAM) that I don't feel I am that much more productive; I've just taken what my code needed to do and scattered it to the four winds. Now instead of dealing with an OS and the code, I'm fighting with Docker, and a cloud service, and permissions, and networking, and a dozen other things.

      Honestly this feels like the OOP hype era of object databases and Java EE all over again, just this time substituting tooling for OOP.

  • 3pt14159 587 days ago
    This has been rehashed a million times.

    My view is that TDD is great for non-explorative coding. So data science -> way less TDD. Web APIs -> almost always TDD.

    That said, one of the things I think the vast majority of the leans-anti-TDD crowd misses is that someone else on the team is picking up the slack for you, and you never really appreciated it. I've joined too many teams, even great ones, where I needed to make a change to an endpoint and there were no functional or integration tests against it. So now I'm the one writing the tests you should have written. I'm the one that has to figure out how the code should work, and I'm the one that puts it all together in a new test for all of your existing functionality before I can even get started.

    Had you written them in the first place I would have had a nice integration test that documents the intended behaviour and guards against regressions.

    Basically I'm carrying water for you and the rest of the team that has little to do with my feature.

    Now there are some devs out there that don't need TDD to remember to write tests, but I don't know many of them and they're usually writing really weird stuff (high performance or video or whatever).

    But I have stopped concerning myself with changing other peoples minds on this. Some people have just naturally reactive minds and TDD isn't what they like so they don't do it.

    • randomdata 586 days ago
      I find the opposite. TDD is great when you don't know what your program should look like. It gives you an opportunity to simulate how the program will be used from the outside, able to be quickly iterated upon until satisfaction, without having to write all the laborious internal code (often over and over again when exploring without TDD). Once you are happy with the result, then you just have to go back and fill in the guts once.

      If you know exactly what you need upfront, you can simply start coding, adding a sprinkling of acceptance tests to help catch mistakes. No need for TDD in that case.

      • 3pt14159 586 days ago
        Really?

        When I get to a new database and don't even know what data is stored where, I don't write a test first. I write a bunch of SQL scripts and then maybe take it into a python toolkit for stats stuff. When training a classifier and having to choose things like dimensionality, I find that exploring what the dimensions actually express teaches me more about the dataset and the approach, faster, than starting with a test would. Sometimes I don't even know what opportunities are in the data that I'm going through, so how would I even express the test?

        That said, I'll try it your way next time and see how it goes. If it works for you maybe I'll learn how to make it work for me, since I love TDD.

        As for knowing exactly what you need upfront, why start coding? Why not do TDD? I find the interfaces are more naturally expressed as the consumer than the implementer.

        I rarely find myself writing unnatural interfaces when starting with the test, and starting with the test makes abstractions that must be faked / mocked easier to slide into the code that implements the feature without too much damage to the rest of the codebase. I avoid them whenever possible, but sometimes a network call must be mocked, and it's better to do so with minimal collateral damage.
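
        For what it's worth, the shape I aim for is a thin trait at the network boundary, so only one seam needs a fake. A sketch in Rust, with all names invented:

          // The one abstraction that exists purely so the network call
          // can be faked in tests.
          trait RateSource {
              fn usd_to_eur(&self) -> Result<f64, String>;
          }

          fn convert(amount_usd: f64, rates: &impl RateSource) -> Result<f64, String> {
              Ok(amount_usd * rates.usd_to_eur()?)
          }

          #[cfg(test)]
          mod tests {
              use super::*;

              // The fake stands in for the network; nothing else is mocked.
              struct FixedRate(f64);
              impl RateSource for FixedRate {
                  fn usd_to_eur(&self) -> Result<f64, String> { Ok(self.0) }
              }

              #[test]
              fn converts_using_the_current_rate() {
                  assert_eq!(convert(10.0, &FixedRate(0.5)), Ok(5.0));
              }
          }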

        • randomdata 586 days ago
          > I write a bunch of SQL scripts and then maybe take it into a python toolkit for stats stuff.

          That sounds a lot like a test (or set of tests). It would appear that you are already doing something akin to TDD, although perhaps without the formality of turning your exploratory work into documentation for other developers to read about what you learned.

          > Why not do TDD? I find the interfaces are more naturally expressed as the consumer than the implementer.

          If exploration is necessary to find the right interface that may be worthwhile, but I find when the requirements are well defined the interface is obvious before writing either tests or implementation. It may be still worth writing unit tests up front, but it is not testing that is driving your development. The requirements are driving your development in that case.

  • moomoo11 587 days ago
    I like TDD. It’s just a tool on our tech belt. If done right (it takes practice and an open mind, tbh) the major benefit is you have code that is single-responsibility and easy to understand, isolate, or modify.

    We have so many things on our tech belt, like clean architecture or x pattern. This is just another tool, and I think it helps especially in building complex software.

    Just be practical and don’t try to be “the 100%er” who is super rigid about things. Go into everything with an 80/20 mindset. If this is something mission critical that needs to be as dependable as possible, then use the tools best suited for it. If you’re literally putting buttons on the screen which Product is going to scrap in two weeks, maybe use TDD only for the code responsible for dynamically switching those buttons based on Product’s mindset that week.

  • MattPalmer1086 587 days ago
    I once tried to write something using a pure TDD approach. It was enlightening.

    Pluses were that refactoring was easy and that I had confidence the system would work well at the end.

    Minuses were that it took a lot longer to write, and I had to throw away a lot of code and tests as my understanding increased. It slowed down exploration immensely. Also, factoring the code to be completely testable led to some dubious design decisions that I wouldn't have made if I hadn't been following a pure TDD approach.

    On balance I decided it wasn't a generally good way to write code, although I guess there may be some circumstances it works well for.

  • joshstrange 587 days ago
    I'm not anti-TDD necessarily, but I've yet to see tests yield useful results at almost every company I've worked at. It could be I've just never worked with someone who was actually good at tests.

    Tests in general aren't something I regularly use, and a lot of TDD feels somewhat insane to me. You can write all the tests you want ahead of time, but until the rubber meets the road it's a lot of wishful thinking, in my experience. Also it makes refactoring hell, since you often have to rewrite all the tests except the ones at the top level, and sometimes even those if you change enough.

    I believe tests can work, I've just never really seen them work well except for very well defined sets of functionality that are core to a product. For example, I worked at a company that had tests around their geofencing code. Due to backfilling data, zones being turned on/off by time, exception zones within zones, and locations not always being super accurate, the test suite was impressive: something like 16 different use cases it tested for (to determine if a person was in violation for a given set of locations, for a given time). However, at the same company, there was a huge push to get 80%+ code coverage. So many of our tests were brittle that we ended up regularly shipping code with broken tests, because we knew they couldn't be trusted. The tests that were less brittle often had complicated code to generate the test data and the test expectations (who tests the tests?). In my entire time at that company we very rarely (I want to say "never" but my memory could be wrong) had a test break that was actually pointing at a real issue; instead the test was just brittle, or the function changed and someone forgot to update the test. If you have to update the test every time you touch the code it's testing... well, I don't find that super useful, especially coupled with it never catching real bugs.

    In a lot of the TDD (and testing in general) tutorials I've seen, they make it seem all roses and sunshine, but their examples are simple and look nothing like code I've seen in the wild. I'd be interested in some real-world code and its tests as they evolved over time.

    All that said, I continue to be at least interested in tests/TDD in the hope that one day it will "click" for me and not seem like just a huge waste of time.

  • woeirua 587 days ago
    TDD is great for some types of code, where the code is mostly self-contained with few external dependencies and the expected inputs and outputs are well defined and known ahead of time.

    TDD is miserable for code that is dependent on data or external resources (especially stateful resources). In most cases, writing "integration" tests feels like it's not worth the effort, given all the code that goes into managing those external resources. Yes, I know about mocking. But mocking frameworks are: 1 - not trivial to use correctly, and 2 - often don't implement all the functionality you may need to mock.

    • evouga 587 days ago
      I completely agree. I'll use TDD when implementing a function "where the code is mostly self-contained with few external dependencies and the expected inputs and outputs are well defined and known ahead of time" and where the function is complex enough that I'm uncertain about its correctness. Though I usually do property testing, or comparison to a baseline on random inputs, similar to the quicksort example in the blog post (against a slow, naive implementation of the function, or an older version of the function if I'm refactoring), rather than straight TDD.
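
      A sketch of that baseline-comparison style in Rust, with a toy linear congruential generator standing in for a real property-testing library (the function under test is a stand-in, too):

        // Imagine a hand-rolled quicksort here; the stdlib call is a
        // placeholder so the sketch compiles.
        fn my_sort(mut v: Vec<i32>) -> Vec<i32> {
            v.sort_unstable();
            v
        }

        #[test]
        fn matches_naive_baseline_on_random_inputs() {
            let mut seed: u64 = 12345;
            let mut next = move || {
                // Tiny LCG so the test needs no dependencies.
                seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1);
                (seed >> 33) as i32
            };
            for _ in 0..100 {
                let input: Vec<i32> = (0..50).map(|_| next()).collect();
                // Baseline: the trusted (if slow) reference implementation.
                let mut expected = input.clone();
                expected.sort();
                assert_eq!(my_sort(input), expected);
            }
        }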

      When debugging, I'll also turn failure cases into unit tests and add them to the CI. The cost to write the test has already been paid in this case, so using them to catch regressions is all-upside.

      System tests are harder to do (since they require reasoning about the entire program rather than single functions) but in my experience are the most productive, in terms of catching the most bugs in the least time. Certainly every minute spent writing a framework for mocking inputs into unit tests should probably have been spent on system testing instead.

    • int_19h 587 days ago
      I would go so far as to say that if your code requires extensive mocking to be tested at a certain level of granularity, then unit tests are the wrong choice at that level. Every time I've seen a test suite with abundant mocking, it was mostly testing the mocks - numerous tests get broken by the slightest refactoring and require attention, while real-world integration bugs fly straight through.

      IMO functional and integration testing should be where effort is spent first and foremost. If there are resources beyond that, then do finer-grained unit testing, but even then, closely track the amount of time spent on all the scaffolding necessary for it vs. the benefits for what gets tested.

    • zoomablemind 587 days ago
      >...TDD is great for some types of code, where the code is mostly self-contained with few external dependencies and the expected inputs and outputs are well defined and known ahead of time.

      I find that TDD is a very good fit for pinning down the expectations you have of external dependencies.

      Of course, when such a dependency is extensive, like an API wrapper, writing equally extensive tests would be redundant. Even then, the core aspects of the external dependencies should be pinned down testably.

      Testing is a balance game, even with TDD. The goal is to increase certainty under dynamic changes and increasing complexity.

  • fleddr 587 days ago
    My feelings are far less complicated: TDD is a high-discipline approach to software development, and that's why it doesn't work or doesn't get done.

    High-discipline meaning it entirely depends on highly competent developers (able to produce clean code, with a deep understanding of programming), rigorously disciplined out of pure intrinsic motivation, and able to keep this up even under peak pressure.

    Which is not at all how most software is built today. Specs are shit, so you gradually find out what it needs to do. Most coders are bread programmers, and I don't mean that in any insulting way; they barely get by getting anything to work. Most projects are under very high time pressure: shit needs to get delivered, and as fast as possible. Code gets written in such a way that it's not really testable. We think in 2-week sprints, which means anything long-term is pretty much ignored.

    In such an environment, the shortest path is taken. And since updating your tests is also something you can skip, coverage will sink. Bugs escape the test suite and the belief in the point of TDD crumbles. Like a broken window effect.

    My point is not against TDD. It's against ivory tower thinking that does not take into account a typical messy real world situation.

    I've noticed a major shift in the last decade. We used to think like this, in TDD, in documenting things with UML, in reasoning about design patterns. It feels like we lost it all, as if it's all totally irrelevant now. The paradigm is now hyper speed. Deliver. Fast. In any way you can.

    This short-sighted approach leading to long term catastrophe? Not even that seems to matter anymore, as the thing you're working on has the shelf life of fish. It seems to be business as usual to replace everything in about 3-5 years.

    The world is really, really fast now.

  • stonemetal12 587 days ago
    I am not a TDD person, but when you write some code you want to see if it works. So you either write a unit test, or you plug in your code and do the whole song and dance to get execution to your new code.

    I see TDD as REPL-driven development for languages without a REPL. It allows you to play with your code in a tighter feedback loop than you generally have without it.

    • JonChesterfield 587 days ago
      It's closer to a repl with save state and replay. A repl will get the code working faster than tests but doesn't easily allow rechecking the same stuff later when things change (either your code or the users of it). I haven't seen a repl with save&replay but that might be a really efficient way to write the unit tests.
      • sedachv 587 days ago
        You just copy-and-paste the relevant input-output and there is your test. There isn't a need for any extra tools when using the REPL to come up with regression tests (obviously a REPL cannot be used to do TDD).
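
        In Python, for instance, the standard doctest module makes that copy-paste workflow literal (slugify here is a made-up example): paste the REPL session into the docstring and it becomes a regression test.

            import re

            def slugify(title):
                """Turn a title into a URL slug.

                Examples pasted straight from a REPL session:

                >>> slugify("Hello, World!")
                'hello-world'
                >>> slugify("  SO   much   Whitespace ")
                'so-much-whitespace'
                """
                return "-".join(re.findall(r"[a-z0-9]+", title.lower()))

            if __name__ == "__main__":
                import doctest
                doctest.testmod()  # replays the pasted session as a test
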
  • AtNightWeCode 587 days ago
    The productivity rate went through the roof when we ditched TDD. TDD has a bit of the same problem as strict DDD. You spend a lot of time making upfront decisions about things that don't really matter or that you don't know about yet.

    I see unit tests as a tool to be used where it makes sense and I use it a lot. It is true that testable code is better. Testability should be a factor when selecting tech.

    • righttoolforjob 587 days ago
      I agree with your first sentence, but TDD and unit tests are completely orthogonal concerns.

      Unit tests serve multiple purposes. Number 1 is to have a way for you to play around with your design. They can also document requirements. Lastly, they serve as a vehicle for you to prove something about your design, typically the fulfillment of a requirement or the handling of some edge case. This last part is what people unfortunately mostly refer to as a test.

      TDD says that you should write your tests before you even have a design, one-by-one typically, adding in more functionality and design as you go. You will end up with crap code. If you do not throw the first iteration away then you will commit crappy code.

      Most people naturally find that an iterative cycle of design and test code works the best and trying to sell TDD to them is a harmful activity, because it yields no benefits and might actually be a big step backwards.

      • AtNightWeCode 587 days ago
        "Unit test" has at least three different meanings, so I think the term should be scrapped. Here I basically meant automated tests.

        I worked with TDD, and you basically write twice as much code that is four times as complicated, and then you stick with poor design choices because you have to update all the tests as well.

  • dbrueck 587 days ago
    It's all about tradeoffs. I've done a few decades of non-TDD with a middle period of ~5 years of zealot-level commitment to TDD, and as a rule of thumb, the cost is usually not worth the benefit.

    Some hidden/unexpected side effects of TDD include the often extremely high cost of maintaining the tests once you get past the simple cases, the subtle incentive not to think too holistically about certain things, and the progression as a developer in which you naturally improve and stop writing the kinds of bugs that basic tests are good at catching, while continuing to write those tests anyway (a real benefit, sure, but one that further devalues the tests). The cost of creating a test that would have caught the really "interesting" bugs is often exorbitant, both up front and to maintain.

    The closest thing I've encountered to a reliable exception is that having e.g. a comprehensive suite of regression tests is really great when you are doing a total rewrite of a library or critical routine. But even that doesn't necessarily mean that the cost of creating and maintaining that test suite was worth it, and so far every time I've encountered this situation, it's always been relatively easy to amass a huge collection of real world test data, which not only exercises the code to be replaced but also provides you a high degree of confidence that the rewrite is correct.

  • sandreas 587 days ago
    In my opinion TDD is a good thing, but too demanding and too strict. In real life there are very different knowledge and experience levels in a development team, and if TDD is not applied professionally, it may not help. It just needs a lot of practice and experience.

    What it helps with a lot is improving your individual programming skills. So I recommend TDD to everyone who has never done it in practice (best case: on a legacy code base) - if not to improve the code itself, then just to LEARN how it could improve your code.

    It helped me to understand why IoC and Dependency Injection are a thing and when to use them. Writing "testable" code is important, while writing real tests may not be as important, as long as you do not plan to have a long-running project or do a major refactoring. If you ARE planning a major refactoring, you should first write the tests to ensure you don't break anything, though ;)

    What I would also recommend is having a CI/build environment supporting TDD, SonarQube and code coverage - not trying to establish that afterwards... Switching to TDD is also a very neat way to get a nice CI setup.

    My feeling is that my programming and deployment skills improved most when I did one of my personal pet projects strictly test-driven with automated CI and found out which things in TDD and CI I really need to care about.

  • Sohcahtoa82 587 days ago
    I got turned off from TDD in my senior year getting my CS degree.

    During class, the teacher taught us TDD, using the Test-Code-Refactor loop. Then he wanted us to write an implementation of Conway's Game of Life using TDD. As the students were doing it, he was doing it as well.

    After the lesson but before the exercise, I thought "This looks tedious and looks like it would make coding take far longer than necessary" and just wrote the Game first, then wrote a couple dozen tests. Took me probably about 45 minutes.

    At that point, I looked up on the projector and saw the teacher had barely done much more than having a window, a couple buttons, and some squares drawn on it, and a dozen tests making sure the window was created, buttons were created, clicking the button called the function, and that the calls to draw squares succeeded.

    What really bothers me about "true" TDD (and TFA points this out), is that if you're writing bare minimum code to make a unit test pass, then it will likely be incorrect. Imagine writing an abs() function, and your first test is "assert (abs(-1) == 1)". So you write in your function "if (i == -1) return 1". Congrats, you wrote the bare minimum code. Tadaa! TDD!
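
    Spelled out in Python (a deliberately silly sketch, not a recommendation):

        # The first test, and the "bare minimum" code that makes it green:
        def my_abs(i):
            if i == -1:
                return 1

        def test_abs_of_minus_one():
            assert my_abs(-1) == 1  # green, yet my_abs(-5) returns None

        # Only further tests force a general implementation:
        def test_abs_of_minus_five():
            assert my_abs(-5) == 5  # red until my_abs is generalized

        def my_abs_generalized(i):
            return -i if i < 0 else i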

    • gnulinux 587 days ago
      I'm really sorry, but this kind of "I mocked up a simple program in 45 mins when my TDD-practicing counterpart took longer" comment means nothing. The code is never written once and done. If you were maintaining that code for the next 6 years and there was no rush to ship it, it absolutely doesn't matter how fast it was written in the first place. I would much rather take code that was written better in 6 hours than bad code written in 45 mins. I'm not saying you wrote bad code, but time to ship, in general, rarely matters in this context.
      • tester756 587 days ago
        >If you were maintaining that code for the next 6 years and there was no rush to ship it, it absolutely doesn't matter how fast it was written in the first place.

        Unfortunately that's not how software is written.

        >I would much rather take code that was written better in 6 hours, than bad code written in 45 mins. I'm not saying you wrote bad code, but time to ship, in general, rarely matters in this context.

        I disagree, time is not the best proxy for good code.

        A 10x engineer will probably write way better code in 45 mins than a -10x engineer in 6 hours.

  • dangarbri3 587 days ago
    The method that works best for me is from Code Complete. McConnell argues (correctly, IMO) that tests are one technique among many, and writing tests means more code to maintain and debug. Tests can be wrong, and if the test is wrong, the code will also be wrong: a bug is introduced. He advocates first making sure you have a good software design, i.e. the components are laid out and how they're going to interact with each other is well defined. So before writing any code at all, make sure you have a documented design that works conceptually.

    Define the systems that make up your software, the classes that make up the systems, and then the functions they'll use to talk to each other. Once it works on paper, start coding. If the design is good, the code should be so brain-dead simple to write that a monkey could write it.

    I find that doing this I end up with long call stacks, because each module does something, then passes the data on like an assembly line. In each step my functions are super short, and I don't find it worth writing a test for 3 lines of code that I can tell is correct at a glance. For the few meaty functions that do heavier logic, though, I will write tests.

  • ttctciyf 587 days ago
    IMO, a lot of sage advice about TDD, well informed by years of practice, is in two Ian Cooper NDC talks, his controversial-at-the-time "TDD, Where Did It All Go Wrong?"[1] and, seven years later, "TDD Revisited"[2].

    The blurb from the latter:

    > In this talk we will look at the key Fallacies of Test-Driven Development, such as 'Developers write Unit Tests', or 'Test After is as effective as Test First' and explore a set of Principles that let us write good unit tests instead. Attendees should be able to take away a clear set of guidelines as to how they should be approaching TDD to be successful. The session is intended to be pragmatic advice on how to follow the ideas outlined in my 2013 talk "TDD Where Did it All Go Wrong"

    The talks focus on reasons to avoid slavish TDD and advocate for the benefits of judiciously applying TDD's originating principles.

    1: https://www.youtube.com/watch?v=EZ05e7EMOLM

    2: https://www.youtube.com/watch?v=vOO3hulIcsY

  • pjmlp 587 days ago
    My feelings are quite clear: it just doesn't work outside of simple cases without any kind of GUI (including native ones) or distributed computing algorithms.

    It does for nice conference talks though.

    • gnulinux 587 days ago
      When you say TDD doesn't work do you mean it doesn't work if it's religiously practiced? I've worked for many companies who do TDD and I personally enjoy it very much and we ship code, we make money. So clearly something is not not working. I think the trick with TDD is making sure you don't use it religiously and understand in what cases it'll help you.
      • pjmlp 587 days ago
        Prove me wrong designing a good native Windows application according to customer's UI/UX guidelines by religiously following TDD.

        Replace Windows by favourite desktop, mobile or console OS.

        • gnulinux 587 days ago
          I don't do Windows work, I don't do UI/UX and I do not religiously follow TDD. TDD is a tool, just like other tools I know when it applies to a scenario and when it applies I know what it's helping me with.
          • pjmlp 587 days ago
            That isn't how it is sold.

            It only works for basic use cases and conference talks.

            • gnulinux 587 days ago
              I'm not trying to sell anything. I'm reporting you an anecdote that I'm a practicing software engineer, I've been professionally writing code for almost a decade and I do use TDD when I write code. I don't care if you do or do not.
              • pjmlp 587 days ago
                Great for you; for me it is snake oil that quickly shows its weakness when I ask someone to write a full-blown application end to end with TDD, every single aspect of it.

                Designing a GUI test-first, building a game engine test-first, handling distributed computing algorithms test-first, ...

                Including the best data structures for handling the set of application requirements.

                Yeah, not really.

    • int_19h 587 days ago
      It's an interesting point. I do recall that the TDD craze in the early '00s started in the RESTful web crowd; do you think this is because it didn't have to deal with the "complicated bits"?
      • pjmlp 587 days ago
        Yes, and basic stuff is where the mantra of not writing code without tests is actually possible; apply it to any complex scenario where that is no longer the case, and suddenly it is a case of "you're holding it wrong."
  • geodel 587 days ago
    The way I see it, all this cultish crap (Agile, TDD, Scrum, Kanban, XP, etc.) works when essentially the same thing is being done for the nth time. I have seen plenty of success with these when the same project is roughly repeated for many different clients.

    It is also no surprise that these terms mostly have to do with IT or related consulting, and not really with engineering endeavors. In my first-hand experience, when I worked at an engineering department, a whole lot of work got done with almost non-existent buzzword bullshit. Later on, with a merger etc., it became an IT department, so there is endless money for process training, resources, scrum masters and so on, but little money left for a half-decent computer setup.

    Outside work I have seen this in my cooking: the first time, a new dish is a hassle, but in future iterations I create little in-brain task-list tickets for my own processing. Doing this the jackasstic consulting-framework way would turn 1 hour's worth of butter chicken recipe into a month's worth of taste-feature implementation sprints.

  • bonestamp2 587 days ago
    We follow what we call TID (Test Informed Development).

    Basically, we know that we're going to have to write tests when we're done, so we are sure to develop it in a way that is going to be (relatively) easy to write accurate and comprehensive tests for.

  • bndr 587 days ago
    There are three things in my opinion that speak against going with TDD:

    1. Many companies are agile, and the requirements constantly change, which makes implementing TDD even harder.

    2. TDD does not bring enough value to justify the investment of time (for writing & maintaining the test suites); the benefits are negligible, and changes are frequent.

    3. Everything is subjective [1], and there's no reason to have such strongly held opinions about the "only right way to write code" when people write software in a way that is efficient for their companies.

    [1] https://vadimkravcenko.com/shorts/software-development-subje...

  • ajkjk 587 days ago
    I feel like TDD's usefulness depends very much on what type of code you're writing.

    If it's C library code that does lots of munging of variables, like positioning UI or fiddling with data structures... then yes, totally, it has well-defined requirements that you can assert in tests before you write it.

    If it's like React UI code, though, get out of here. You shouldn't even really be writing unit tests for most of that (IMO), much less blocking on writing them first. It'll probably change 20 times before it's done anyway; writing the tests up front is going to just be annoying.

    • mal-2 587 days ago
      Definitely agree. In the time it took you to mock the state management, the backend endpoints, and the browser localStorage to isolate your unit, you probably could have written it in Playwright end-to-end with nothing mocked. Then you'd actually know if your React code broke when the API changed, instead of pretending your out-of-date mock is still in sync.
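
      For comparison, an unmocked end-to-end check in Playwright's Python API can be roughly this short (the URL and selectors are hypothetical):

          from playwright.sync_api import sync_playwright, expect

          def test_login_flow():
              with sync_playwright() as p:
                  browser = p.chromium.launch()
                  page = browser.new_page()
                  # real backend, real state management, real localStorage
                  page.goto("http://localhost:3000/login")
                  page.fill("#email", "user@example.com")
                  page.fill("#password", "hunter2")
                  page.click("button[type=submit]")
                  expect(page.locator(".dashboard")).to_be_visible()
                  browser.close()
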
  • gherkinnn 587 days ago
    > Testable code is best code. Only TDD gets you there.

    I smell a circular argument but can’t quite put my finger on it.

    > If it doesn’t work for you, you’re doing it wrong

    Ah. Start with a nigh unattainable and self-justifying moral standard, sell services to get people there, and treat any deviation as heresy. How convenient. Reminds me of Scrum evangelists. Or a cult.

    TDD is a great tool for library-level code and in cases where the details are known upfront and stable.

    But I can’t get it to work for exploratory work or anything directly UI related. It traps me in a local optimum and draws my focus to the wrong places.

  • ahurmazda 587 days ago
    My beef with TDD is that nearly every resource merely parrots the steps (red, green, ...). No one teaches it well, from what I have found. Nor am I convinced it's easy to teach. I have picked up what I can by watching (what I believe are) good TDD practitioners.

    I have a feeling this is where TDD loses out the most

    • gnulinux 587 days ago
      I practice TDD for the most part, and I agree that it's not easy. E.g. there are a lot of unanswered questions: what if you write the test first, see red, write the code, but it's still red? Your code could be wrong, or your test could be wrong. If your test is wrong, do you go back and see the red? (I do.) Do you test your tests? (I don't.) Treating TDD like a formal system doesn't make any sense, since it's meant to be a tool an engineer can use as a heuristic to make judgements about the stage of development.
    • twic 587 days ago
      Absolutely. Actually doing TDD is nontrivial, and it has to be learned.

      Most of the early learning was by pairing with people who already knew how to do it, working on a codebase using it. People learn it easily and fast that way.

      But that doesn't scale, and at some point people started trying to do it having only read about it. It doesn't surprise me at all that that has often been unsuccessful.

    • Jtsummers 587 days ago
      I mean, there's the actual TDD book by Kent Beck. It's pretty good, and only 240 pages. It was an easy one-week read for me, spread out in the evenings.
      • pramodbiligiri 587 days ago
        There’s TDD by Example, and there’s also “Growing Object-Oriented Software, Guided by Tests”.
  • ChrisMarshallNY 587 days ago
    I find some of the techniques espoused by TDD proponents to be quite useful.

    In some of my projects.

    Like any technique, it's not dogma; just another tool.

    One of my biggest issues with "pure" TDD is the requirement to have a very well-developed upfront spec, which is actually a good thing.

    sometimes.

    I like to take an "evolutionary" approach to design and implementation[0], and "pure" TDD isn't particularly helpful, here.

    [0] https://littlegreenviper.com/miscellany/evolutionary-design-...

    Also, I do a lot of GUI and device interface stuff. Unit tests tend to be a problem, in these types of scenarios (no, "UI unit testing" is not a solution I like). That's why I often prefer test harnesses[1]. My testing code generally dwarfs my implementation code.

    [1] https://littlegreenviper.com/miscellany/testing-harness-vs-u...

    Here's a story on how I ran into an issue, early on[2].

    [2] https://littlegreenviper.com/miscellany/concrete-galoshes/#s...

    • twic 587 days ago
      > One of my biggest issues with "pure" TDD, is the requirement to have a very well-developed upfront spec; which is actually a good thing.

      Can you expand on what you mean by "a very well-developed upfront spec"? Because that doesn't sound at all like TDD as i know it.

      I work on software that takes in prices for financial instruments and does calculations with them. Initially there was one input price for everything. A while ago, a requirement came up to take in price quotes from multiple authorities, create a consensus, and use that. I had a chat with some expert colleagues about how we could do that, so I had a rough idea of what we needed. Nothing written down.

      I created an empty PriceQuoteCombiner class. Then an empty PriceQuoteCombinerTest class. Then I thought, "well, what is the first thing it needs to do?". And decided "if we get a price from one authority, we should just use that". So I wrote a test that expressed that. Then made it pass. Then thought "well, what is the next thing it should do?". And so on and so forth. And today, it has tests for one authority, multiple authorities, no authorities, multiple authorities but then one sends bad data, multiple authorities where one has a suspicious jump in its price which might be correct, might not, and many more cases.
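
      Rendered as a rough Python sketch (the class names come from the paragraph above; every detail of the code is invented), that first red/green step was no more than:

          class Quote:
              def __init__(self, authority, price):
                  self.authority = authority
                  self.price = price

          class PriceQuoteCombiner:
              def __init__(self):
                  self._quotes = []

              def accept(self, quote):
                  self._quotes.append(quote)

              def consensus(self):
                  # just enough to pass the first test; the later tests
                  # (bad data, suspicious jumps, ...) drive the real logic
                  return self._quotes[0].price

          def test_single_authority_price_is_used_directly():
              combiner = PriceQuoteCombiner()
              combiner.accept(Quote(authority="A", price=101.5))
              assert combiner.consensus() == 101.5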

      The only point at which I had anything resembling a well-developed upfront spec was when I had written a test, and that was only an upfront spec for the 1-100 lines of implementation code I was about to write.

      So your mention of "a very well-developed upfront spec" makes me wonder if you weren't actually doing TDD.

      No argument about testing user interfaces, though. There is no really good solution to that, as far as I know.

      • ChrisMarshallNY 587 days ago
        You still had an “upfront” spec. You just didn’t write it down, and applied it in a linear fashion. I assume that you are quite experienced and good, so it was a good way to do things.

        I tend to be highly iterative at the design, and even requirements level, in my work. Not for the faint of heart. I often toss out tons of code. I literally have no idea how something will work, until I’ve beat on implementation (not prototype) code, rewritten it a couple of times, and maybe refactored it in-place. Then, I refine it.

        I seldom write down a thing (concrete galoshes). If I do, it’s a napkin sketch that can be binned after a short time, because it no longer resembles reality.

        I’m a big believer in integration testing, as early as possible. The way I do things, makes that happen.

        It’s the way I work. But an important factor is that I tend to work alone. My scope is, by necessity, constrained, but I get more done on my own than many teams do, and a lot faster, with higher quality and better documentation. I also write the kind of software that benefits from, and affords, my development style. If I was writing engine code, my way would be considered quite reckless (but maybe not that reckless; I used my “evolutionary design” process on my BAOBAB server, which I outline in [0] above, and that server has been the engine for the app I’ve been refining for the last couple of years. The process works great. It’s just a lot more pedantic and disciplined).

        If I work on a team, then the rules are different. I believe in whitebox interface, and blackbox implementation. That’s a great place to have TDD.

        > makes me wonder if you weren't actually doing TDD.

        I get that kind of thing a lot. I fail most litmus tests. It’s a thing…

        Lots of folks here, would defecate masonry at the way I work.

        I don’t have to please anyone during the process. They just see the end result; which is generally stellar.

        Since one thing that geeks love to do, is sneer at other geeks, I am sort of doing a public service. No need to thank me. The work is its own reward.

  • choeger 587 days ago
    The observation about the focus on unit tests is well-made. I think it's a crucial problem that stems from very good tools for unit testing and developers that are very familiar with these tools. It's then very simple to discard anything that isn't covered by these great tools.

    But here's an anecdote that explains why you'd always want integration tests (other anecdotes for other test paradigms probably also exist): imagine a modern subway train. That train is highly automated but, for safety reasons, still requires a driver. The train has two important safety features:

    1. The train won't leave a station unless the driver gives their OK.

    2. The train won't leave the station unless all doors are closed.

    The following happened during testing: The driver gives the OK to leave the station. The train doesn't start because a door is still open. The driver leaves the train and finds one door blocked. After the driver removes the blockage the door closes and the train departs. Now driverless.

    I think it's crucial to view integration tests as unit tests on a different level: You need to test services, programs, and subsystems as well as your classes, methods, or modules.

  • lifeisstillgood 587 days ago
    "The code is the design" conflicts with "TDD".

    Write code first. If that code is the v0.1 of the protocol between two blog systems, great! You can do that on a whiteboard, and it looks like design when actually it's writing code on a whiteboard.

    Now you know what to test so write the test, after writing the code.

    Now write the next piece of code.

    Do not at any time let a project manager in the room

  • GnarfGnarf 587 days ago
    I keep wanting to be converted to TDD, but I can't shake the feeling that I'd be writing half the code in twice the time.
    • joshstrange 587 days ago
      Yep, I'm pretty anti-test because I've yet to see it ever pay off at any company I've worked at. That said, I keep hoping I'll catch the bug, be converted, have it click in my head. On the surface it seems quite nice, but it's always with trivial examples. When you are dealing with real code, I've had testing fall apart very quickly and/or make refactors extremely painful. And on top of it all, you are writing twice as much code in a world that doesn't care that you wrote tests, meaning it takes you longer to do the same work.
    • dbrueck 587 days ago
      That's accurate. Worse, as the complexity of the scenario you're trying to test goes up, not only does the cost of creating the test go up, the cost of maintaining it almost always goes up too.
    • twic 587 days ago
      Writing half the code sounds pretty good.
      • GnarfGnarf 587 days ago
        No, I didn't mean doing the job in half the code (which for sure is better). I meant writing half the code that needs to be written, and then writing the other half that needs to be written, in addition.
  • mehagar 587 days ago
    I think TDD is great in the ideal, but in reality I have only worked on legacy systems where TDD was not practiced from the start. Such systems are hard to fit TDD style tests into because modifying existing code often requires large refactoring to properly inject dependencies and create seams for testing. The catch-22 is that refactoring itself is prone to breaking things without sufficient testing.

    As a result, I often try to fit my tests into these existing systems rather than starting with the test and refactoring the code under test to fit that shape. The only resource I've seen for dealing with this issue is the advice in the book "Working Effectively with Legacy Code": write larger system tests first so you can safely refactor the code at a lower level. Still, that's a daunting amount of work when it's ultimately much easier for me to just make the change and move on.

  • shadowgovt 587 days ago
    Where I come from, unit-test-driven-development tends to be a waste of resources. The interfaces will change so much during development that anything you write initially is guaranteed to be torn up. The one exception is if you're writing an interface that crosses teams; for "ship your org chart" reasons, we not only can, but must assume that interface is stable enough to mock and test against (and a knife-fight is necessary if it isn't).

    However, getting the client to agree that their design feature is satisfied by a specific set of steps, then writing software that satisfies that request, is a form of test-driven-development and I support it.

  • madsbuch 587 days ago
    To me, it really depends:

    1. Writing frontend code -- I've left testing altogether. I'd never hope to keep up with the pace.

    2. Writing APIs -- rudimentary testing that at least catches when I introduce regressions.

    3. Writing smart contracts -- orders of magnitude more test code than actual code.

  • danieltanfh95 587 days ago
    Trouble with TDD is that it doesn't fit product development.

    1. TDD isn't faster than simply writing test cases down, and executing them manually, especially when UI is involved.

    2. TDD quadruples the amount of work needed for any given change.

    3. TDD is the opposite of agile where you try to ship some product to the user ASAP to get feedback before spending time to refactor and clear technical debt. Write tests only for features that are confirmed so you don't spend time and effort on stuff people don't want.

    4. Similar point as 1, but you need to evaluate whether making something easy to test is worth it compared to just running the test manually.

  • sirsinsalot 587 days ago
    There's a lot of conflating unit-testing/TDD and QA here.

    Yes, when you start writing code, it may not be well defined, or you (the coder) may not understand the requirement as intended. That's OK.

    Write your test. Make clear your assumptions and write the code against that. Now your code is easier to refactor and acts as living documentation of how you understood the requirement. It also acts to help other engineers not break your code when they "improve" it.

    If QA, the client or God himself decides the code needs to change later, for whatever reason, well that's OK too.

    • wvenable 587 days ago
      > Now your code is easier to refactor

      Unless you need to change the design of the interface in any way -- then it's harder. Tests lock a particular interface in place -- which is great if you have a well defined interface. But if you're trying to figure out that interface then you've prematurely locked yourself in.

      • sirsinsalot 587 days ago
        True, easier to refactor within the interface.

        I'd argue if your interface changes and it's a chore to refactor or rewrite the test then there's other tech debt issues going on.

        • wvenable 586 days ago
          In my opinion, a lot of tech debt comes from not changing interfaces when they need to be changed. Tests fix your design in place, so if your design is broken, you're going to live with that.

          But when I'm developing something new, I will frequently change the interface until I get something to be what I want to it be.

  • Tainnor 587 days ago
    I think a lot of distinct, but interrelated topics are being brought up here:

    * Using tests as a (or even the primary) design tool (strong TDD)

    * Test-first development (weak TDD)

    * Integration vs. unit testing

    * What is a unit test?

    * Should one use mocks and if so, when and how?

    I think each of these topics merits a separate discussion and you can be e.g. in favour of at least weak TDD while maintaining that unit tests have little value, or you can be in favour of unit tests but disagree that "unit test" means "unit = class/method/function". You can have differing opinions on the value of mocks even if you subscribe to strong TDD (that's essentially the classicist vs. mockist divide in the TDD scene - for example, Bob Martin is more skeptical of mocking than, say, the "Growing Object-Oriented Software, Guided by Tests" crowd is).

    IMHO, the biggest problem with testing is that most developers are not very good at it. I routinely see tests that are so complicated that they become very hard to understand, let alone debug. In my experience, a lot of people also skip tests when reviewing code.

    Well-written tests make a code base a joy to work with. Bad tests make everything painful. I don't know how to fix this, but we should pay more attention to it. If we had better tests, it would be easier to argue about the merits of TDD, unit testing, mocking etc. With badly written tests, everything devolves into a "why even test [this specific thing]?" kind of discussion.

  • pdimitar 587 days ago
    I don't have complicated feelings towards TDD at all.

    It has a good idea, but as many others have said, you need pretty well spec'ed software beforehand for it to work. When you code a certain piece you might change it, top to bottom, several times -- we aren't perfectly thinking machines and we need to iterate. Having to re-prototype tests every time hurts productivity not only in terms of hours -- an obstacle that can be overcome in a positive environment and is rarely a true problem. It hurts by demotivating you and draining the creative energy you wanted to devote to solving the problem.

    The "it depends" thing will always be true. When you gather enough experience you will intuitively know the right approach to prototyping + testing an idea.

    TDD is but one tool in a huge toolbox. Don't become religious over it.

    I liked part of Kent Beck's writings back in the day, but I am inclined to agree with other posters that he mostly wrote books to sell courses. I mean, the books had good content, don't get me wrong, but they also didn't teach you much except "don't do waterfall".

    Martin Fowler also wrote some gems, especially "Refactoring", but in the end he too just tried to enrich your toolbox -- for which I am grateful.

    Ultimately, do just that: enrich your toolbox. Don't over-fixate on one solution. There is not one universal solution, at least we don't know it yet. Probably one day a mix of mathematical notation + a programming language will converge into one and we won't ever need another notation again but sadly none of us will live to see it.

  • benreesman 587 days ago
    I had no idea that people were quite so religious about this sort of thing.

    It’s pretty clear at this point that testing is one of the most valuable tools in the box for getting sufficiently “correct” software in most domains.

    But it’s only one tool. Some people would call property checkers like Hypothesis or QuickCheck “testing”, some people wouldn’t. Either way they are awesome.

    Formal methods are also known to be critical in extreme low-defect settings, and seem to be gaining ground more generally, which is a good thing. Richer and richer type systems are going mainstream with Rust and other languages heavily influenced by Haskell and Idris et al.

    And then there’s good old: “shipping is a feature, and sometimes a more important feature than a low defect count”. This is also true in some settings. jwz talks very compellingly about this.

    I think it’s fine to be religious about certain kinds of correctness-preserving, defect-preventing processes in domains that call for an extreme posture on defects. Maybe you work on avionics software or something.

    But in general? This “Minimal test case! Red light! Green light! Cast out the unbelievers!” is woo-woo stuff. I had no idea people took this shit seriously.

  • peteradio 587 days ago
    I write tests in order to have something to run and hit breakpoints on while I develop code. Is that TDD? The tests don't even necessarily check anything at the earliest stages; obviously they are red if the code barfs, but that's about it. Once the code solidifies I may take some output and persist it to make sure it doesn't change, but "does not crash" is technically a testable endpoint!
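
    A minimal Python sketch of that progression (everything here is a made-up stand-in): first a smoke test whose only assertion is "does not crash", later a pinned known-good output.

        def process_report(rows):
            # stand-in for the code under development
            return {"total": sum(rows), "count": len(rows)}

        def test_smoke():
            # earliest stage: red only if the code barfs
            process_report([1, 2, 3])

        def test_pinned_output():
            # once the code solidifies: persist a known-good output and pin it
            assert process_report([1, 2, 3]) == {"total": 6, "count": 3}
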
  • gravytron 587 days ago
    TDD helps facilitate the process of building up confidence in the product. However, the value it adds is contextual, in the sense that if a product is not well defined and at a certain point of maturity, then it may not be helpful to shift gears and adopt TDD.

    But fundamentally no one should ever be trying to merge code that hasn’t been unit tested. If they are, that is a huge problem because it shows arrogance, ignorance, willingness to kick-the-can-down-the-road, etc.

    From an engineering perspective the problem is simple: if you’re not willing to test your solution then you have failed to demonstrate that you understand the problem.

    If you’re willing to subsidize poor engineering then you’re going to have to come to terms with adopting TDD eventually, at some stage of the project’s lifecycle, because you have created an environment where people have merged untested code and you have no way to guarantee to stakeholders that you’re not blowing smoke. More importantly, your users care. Because your users are trusting you. And you should care most of all about your users. They are the ones paying your bills. Be good to them.

    • srer 587 days ago
      > But fundamentally no one should ever be trying to merge code that hasn’t been unit tested. If they are, that is a huge problem because it shows arrogance, ignorance, willingness to kick-the-can-down-the-road, etc.

      Here you are asserting that unit testing is fundamental, and that not believing this is arrogance and ignorance.

      I'd suggest your view that your way is "the" way, is an ironic display of arrogance, and perhaps ignorance.

      And this perhaps I think is the core of much of the anti-TDD sentiment. It's not that we don't think TDD and unit tests are without their positives, it's that we don't like being told this is the one true way to write software, and if we don't do it your way we are engaging in poor engineering.

  • avl999 587 days ago
    What is frustrating is TDD evangelists insisting everyone do purist TDD in their regular development (like the guy being quoted in this article).

    You set your standards as a team of what you will consider acceptable tests and as long as the dev submitting the PR meets that standard why does it matter if they did TDD or not? TDD is a means to end, it's not a religion. As long as you write tests that meet the standard it doesn't matter when you write those tests.

    The level of micromanaging that TDD evangelists seem to want in people's workflows is infuriating. It's literally cultish.

    Edit: I realize this came across more negative and abrasive than I intended. I think TDD has some good parts (primarily around the gamification of writing tests and the serotonin hit whenever a test goes from red to green). I practice TDD around 50% of the time, when appropriate, but most people who have worked in the industry know that TDD as sold by purists is impractical and adds negative value.

  • fasteddie31003 587 days ago
    The TDD tradition comes from dynamically typed languages. If you write a Ruby or JavaScript function, it's got a good chance of not working the first time you run it. However, with statically typed languages, your function has a much better chance of running if it compiles. IMO TDD only makes sense for dynamically typed languages.
  • alfonsodev 587 days ago
    I think TDD shines in combination with a layered architecture; the combination of the two increases the “changeability” of the project. Layered architecture and DI make it easy (and possible) to test any layer. And the tests become living documentation of how to use the code you have written.
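
    As a minimal Python sketch of that idea (all names invented): the service layer takes its storage dependency through the constructor, so a test can substitute an in-memory fake for the real database adapter without any mocking framework.

        class InMemoryUserRepo:
            """Test double for the storage layer; no mocking framework needed."""
            def __init__(self):
                self._users = {}

            def save(self, user_id, name):
                self._users[user_id] = name

            def get(self, user_id):
                return self._users.get(user_id)

        class UserService:
            def __init__(self, repo):
                self.repo = repo  # injected: real adapter in prod, fake in tests

            def rename(self, user_id, new_name):
                if not new_name:
                    raise ValueError("name must not be empty")
                self.repo.save(user_id, new_name)

        def test_rename_persists():
            service = UserService(InMemoryUserRepo())
            service.rename(1, "Ada")
            assert service.repo.get(1) == "Ada"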

    Refactoring then becomes easy, although sometimes tedious, but it provides a degree of confidence that your change works with all possible ways to use the code.

    It’s also healthy to break the rules under time pressure, but keeping a registry of technical debt helps keep morale up. There is nothing more demoralizing than working in an environment where you can’t estimate because who knows what will be broken, where it’s hard to make improvements because everything is entangled, and where there is no time to invest in a rewrite. This usually ends with very talented people quitting if they are unable to fix it.

  • metanonsense 587 days ago
    I always liked the discussion "Is TDD dead" between David Heinemeier Hansson (of Ruby on Rails and Basecamp fame) and Kent Beck. DHH arguing against, Kent Beck obviously in favor of TDD. Martin Fowler is moderator and the discussion is very nuanced and slowly identifies areas where TDD has its benefits and where it should be rather avoided. https://martinfowler.com/articles/is-tdd-dead/
  • 0xbadcafebee 587 days ago
    Why does TDD exist?

    1. We want a useful target for our software. You could design a graphical mock-up of software and design your software to fit it. Or you could create a diagram (or several). Or you could create a piece of software (a test) which explains how the software is supposed to work and demonstrates it.

    2. When we modify software over time, the software eventually has regressions, bugs, design changes, etc. These problems are natural and unavoidable. If we write tests before merging code, we catch these problems quickly and early. Catching problems early reduces cost and time and increases quality. (This concept has been studied thoroughly, is at the root of practices such as Toyota Production System, and is now called Shift Left)

    3. It's easy to over-design something, and hard to design it "only as much as needed". By writing a simple test, and then writing only enough code to pass the test, we can force ourselves to write simpler code in smaller deliverable units. This helps deliver value quicker by only providing what is needed and no more. (A sketch of this follows the list.)

    4. Other reasons that are "in the weeds" of software design, and can be carefully avoided or left alone if desired. Depends on if you're building a bicycle, a car, or a spaceship. :-)
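
    A sketch of point 3 in Python (a made-up example): each test adds one requirement, and the implementation grows only enough to pass.

        # Test 1: an empty cart costs nothing.
        def test_empty_cart():
            assert total([]) == 0

        # Test 2: line items are summed.
        def test_items_are_summed():
            assert total([3, 4]) == 7

        # Test 3: orders over 100 get 10% off -- written only when the
        # requirement actually arrived, not speculatively.
        def test_bulk_discount():
            assert total([60, 60]) == 108

        def total(items):
            subtotal = sum(items)
            return subtotal * 0.9 if subtotal > 100 else subtotal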

    But as in all things, the devil's in the details. It's easy to run into problems following this method. It's also easy to run into problems not following this method. If you use it, you will probably screw up for a while, until you find your own way of making it work. You shouldn't use it for everything, and you should use good judgement in how to do it.

    This is an example of software being more craft than science. Not every craftsperson develops the same object with the same methods, and that's fine. Just because you use ceramic to make a mug, and another person uses glass, doesn't mean one or the other method is bad. And you can even make something with both. Try to keep an open mind; even if you don't find them productive, others do.

  • worik 587 days ago
    Testing is very important. Ok.

    The problem I have with TDD is the concept of writing tests first. Tests are not specifications (in the TDD world the line is blurred); tests are confirmation.

    I develop my code (I currently write back-end plumbing code for iOS) from a test framework.

    My flow:

    * Specify. A weak and short specification. Putting too much work into the specification is a waste. "The Gizmo record must be imported and decoded from the WHIZZBAZ encoding into a Gizmo object" is plenty of specification.

    * Write code for the basic function.

    * Write a test for the validity of the code (the validity of the record once loaded, in the Gizmo/WHIZZBAZ case)

    But the most important tests are small micro-tests (usually asserts) before and after every major section (a tight loop, a network operation, system calls, etcetera). More than half my code is that sort of test.
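
    A rough Python rendering of that micro-test style (the Gizmo names echo the flow above; the code itself is invented): asserts bracket the major section and document its contract in place.

        from dataclasses import dataclass

        @dataclass
        class Gizmo:
            id: int
            payload: str

        def decode_gizmo(rec):
            # stand-in for the real WHIZZBAZ decoder
            return Gizmo(id=rec["id"], payload=rec["payload"])

        def import_gizmos(raw_records):
            assert isinstance(raw_records, list), "decoder contract: list in"

            gizmos = [decode_gizmo(r) for r in raw_records]  # the major section

            assert len(gizmos) == len(raw_records), "nothing silently dropped"
            assert all(g.id is not None for g in gizmos), "every record decoded"
            return gizmos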

  • daviding 587 days ago
    The 'T' in TDD stands for design. :) The name of it has always hurt the concept I think.

    In my experience, TDD uptake and understanding suffer because a lot of developers work in the context of an existing framework, and that framework sometimes fights against the TDD concepts. Getting around that with things like dependency injection, inversion of control, etc. then gets into the weeds and all sorts of 'Why am I doing this?' pain.

    Put another way, a lot of commercial development isn't the nice green-field coding katas freedom, it's spelunking through 'Why did ActiveRecord give me that?' or 'Why isn't the DOM refreshing now?'. Any friction then gets interpreted as something wrong about TDD and the flow gets stopped.

  • nestorD 587 days ago
    There are quantitative studies showing that TDD has little to no impact on development time or code quality[0]. What has been found, however, is that writing code in short increments helps a lot (something that TDD can encourage).

    For more information on studies covering the topic (and much more), I highly recommend watching Greg Wilson's Software Engineering's Greatest Hits[1].

    [0]: https://neverworkintheory.org/2016/10/05/test-driven-develop... [1]: https://youtu.be/HrVtA-ue-x0?t=448

  • cannam 587 days ago
    This is a good article, with (for me anyway) quite a twist at the end.

    The author quotes a tweet expressing amazement that any company might not use TDD, 20 years after it was first popularised - and then writes

    "I’d equate it to shell scripting. I spent a lot of time this spring learning shell scripting"

    Wow! I feel like the person in the tweet. It's amazing to me that someone could be in a position to write an article with such solid development background without having had shell scripting in their everyday toolbox.

    (I use TDD some of the time - I was slow to pick it up and a lot of my older code would have been much better if I had appreciated it back then. I like it very much when I don't really know how the algorithm is going to work yet, or what a good API looks like.)

    • NohatCoder 587 days ago
      You can use a "real" programming language for anything more complicated than running a program with some parameters. Really, the only thing the various shell variants have going for them is that you can type them directly into the console. For any even lightly complicated programming task they are abysmal languages.
      • cannam 587 days ago
        Quite right! But approximate experiments and lightweight automation are really useful in deciding where to go and then making sure you stay there. I'm all for test-first, but I'd find it very hard to argue that it's a more important tool than, well, scripting things.
        • NohatCoder 587 days ago
          Shell scripting is just one option for scripting, some popular (and IMO better) options are Perl, Python and JavaScript.

          I'm sure there are also people who use C for quick and dirty tasks. Seems weird, yes. But if that is the language you know best it may be the fastest in the short term.

  • t43562 587 days ago
    Is dogma really useful? No matter how sensible some strategy is, can we afford to treat it as an absolute truth?

    I get the feeling that we are all inclined to think that we have the entire story of development in our own personal heads and can therefore lay down laws.

    ...and yet most of these disciplines need everyone's co-operation and one can feel that if you don't treat it as dogma then you're never going to make everyone "comply"...

    I think the fact that TDD (and other popularly debated methodologies) haven't taken over just by being obviously easier and better is a sign that they aren't really suitable for being made into dogmas. They're tools and we should have the choice like any workman to choose them or not.

  • bbarn 586 days ago
    TDD is another tool in the toolbox. It has its place, and combined with good tooling it can make for a great development experience. I use it mostly when adding features to an existing code base. In C#, with modern tooling like Visual Studio, Rider, or ReSharper, you can use your test as a base to start scaffolding methods out with auto-generated code, and that can end up being a time saver.

    For a brand new product, I'm almost never using TDD. I'm building out a solution in the pattern I want, getting some minimal feature or features up, and then I write tests appropriate to that pattern. Later on I might use TDD to keep working on it, but it can be a burden at the start of projects.

  • fbrncci 587 days ago
    Working on APIs, services, and a lot of automation, at some point I got really into the habit of TDD. Now I just can't go without it anymore. When I am thinking of a feature, I am always thinking of the test first. It has gotten to the point where it feels like I am walking around naked when I write test-less code/features. It's not just that I have gotten used to it: whenever I coded myself into a corner without tests, it seemed like the answer was having tests first. At least, they would have caught a lot of the issues I ran into later.

    https://www.youtube.com/watch?v=iwUR0kOVNs8

  • andy_ppp 587 days ago
    Me too but let’s play devils advocate here.

    1) you likely aren’t going to get the job without some absurd degree of unit testing

    2) most of your code should be pure functions, making them trivial to unit test (see the sketch after this list)

    3) writing pure functions and making your code testable makes your system less coupled which is a good thing

    4) designing the tests is designing the software which is often helpful

    That’s it, I don’t have a fifth point. Actually I do: writing software is a form of art, and it cannot be summed up as simply as "unit testing everything is always bad" or "always good". There might be some things, like converting an engine-performance simulation from Excel to Golang, that might be fantastic to unit test the shit out of; testing if onPress works on your button component is basically pointless.
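
    A small Python illustration of points 2 and 3 (a made-up example): because the pricing rule is a pure function, the test needs no setup, no mocks, and nothing beyond assert.

        def discounted_price(price, loyalty_years):
            """Pure: the output depends only on the inputs."""
            rate = min(0.05 * loyalty_years, 0.25)
            return round(price * (1 - rate), 2)

        def test_discounted_price():
            assert discounted_price(100.0, 0) == 100.0
            assert discounted_price(100.0, 2) == 90.0
            assert discounted_price(100.0, 10) == 75.0  # capped at 25%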

  • sdevonoes 587 days ago
    The first time I encountered TDD, I was a bit surprised, because it encourages you to write tests (client code) first. Well, I started my professional career without knowing about TDD, but I did know that it's usually best to start writing the client code first. E.g., you write your main.c/Main.java/main.go first, referencing classes/code that do not exist yet, and wiring everything together. Then you move on to the next layer and write code that should exist but still relies on future code that doesn't exist yet. Eventually you end up writing the whole thing. Sometimes the other approach works equally well (e.g., starting from the small blocks and going up).
  • n4jm4 587 days ago
    TDD's contribution to software quality scrapes the bottom of the barrel. Attention to detail in scalable design, formal verification, fuzzing, and mutation testing offer deeper guarantees of successful operation. But of course, the American ideal "make money" is worn proudly on the rim of management noses. It's the wrong prescription, but they're too busy counting their bills to care. This is evident especially in cybersecurity, where the posture amounts to silent prayer that no one stumbles across their JRE 1.0's and their Windows XP's and Google's latest attempt at a programming language with buffer overflows by design--batteries included.
  • erdos4d 587 days ago
    I think TDD is just busywork in another form. Managers seem to like to keep people busy, often just to watch them work and feel power or something from it. They might have a really good dev on the team who can smash everything in front of them super quickly and they can't keep them busy enough to get their power trip on, so they add a boat anchor and tell the dev to drag that around while they work. Instant slowdown in productivity, dev takes 3X as long, manager is loving it. I think that's also why you find this junk in megacorps where there is little actual work and everyone is politicking all day. Lotta power trippers in those companies.
  • GuB-42 587 days ago
    I never really understood how TDD can make software better, except for one thing: it forces people to write tests. But that's just a discipline thing: tests are boring and development is fun, you have to deserve your fun by doing the boring part first.

    It also makes cutting corners more difficult, because it is possible to have (sort of) working software without testing, but you can't have working software if the only thing you have is failing tests (the important first step in TDD). Most TDD people probably think of that as a positive; I don't. Sometimes cutting corners is the right thing to do; sometimes you actually need to write the code to see if it is viable, and if it is not, well, you wasted both the tests and the code, not just the code.

    But I don't think it is the only problem with TDD. The main problem, I think, is right there in the name "test driven". With a few exceptions, tests shouldn't drive development, the user needs should. Test driven development essentially means: write tests based on the users need, and then write code based on the tests. It means that if your tests are wrong and your code passes the tests, the code will be wrong, 100% chance, and you won't notice because by focusing on the tests, you lost track of the user needs. It is an extra level of indirection, and things get lost in translation.

    Another issue I have noticed personally: it can make you write code no one understands, not even yourself. For example, your function is supposed to return a number, but after testing, you notice you are always off by +1. The solution: easy, subtract 1 from the final value. Why? Dunno, it passes the tests. It may even work, but no one understands it, and it may bite you later. Should I work like that? Of course not, but this is a behavior that is encouraged by the rapid feedback loop that TDD permits. I speak from experience; I wrote some of my worst code using that method.

    If you want an analogy of why I am not a fan of TDD: if you are a teacher and give your students the test answers before you start your lesson, most will probably just study the test and not the lesson, and as a consequence they will most likely end up with good grades but poor understanding of the subject.

  • osigurdson 587 days ago
    One thing I have observed is that there is often a problem-dependent sweet spot for the integration level of a test suite. Sometimes that is literally testing the behavior of every method, while other times it is end-to-end testing or somewhere in between. The challenge is that it can take a great deal of thought to arrive at the appropriate inflection points. One approach I take is to think about what suite of tests would make it easier for a developer who is new to the code base to be productive in it. They should feel that the test suite is helping them, not holding them back.
  • lynndotpy 587 days ago
    IMO TDD should be by opportunity and not policy. That solves basically all the problems I have with it.

    TDD is great because it forces you to concretize and challenge assumptions, and it provides a library of examples for devs who are new to a codebase.

    • righttoolforjob 587 days ago
      You are arguing for having tests and good coverage, not for doing TDD.
      • lynndotpy 587 days ago
        No, I am arguing for TDD, specifically, writing tests before code. It feels like a superpower when it works. Maybe that's for a whole program or only small parts of it.
  • kazinator 587 days ago
    I don't understand where/how in TDD you are allowed to switch from concrete "base case" tests to tests which probe the inductive hypothesis.

    It seems that TDD can forever evade actually solving a problem in its general form, always just extending the number of concrete cases that work.

    For instance, a function to measure the length of a string first works only for the case len("") == 0; the result is wrong for all else. TDD allows this to be extended into "working" in idiotic steps like len("a") == 1, but len("b") returns 0, and so on.

    Also, how, in TDD, can we write a test which says "for any input not handled by the tests developed so far, I want an exception". That is to say, when I write the first test len("") == 0 and get it to pass, I don't want len("a") to return 0; I want it to throw. I could write a test for that: throws(len("a")), and it would initially fail. But I want the behavior to be entirely general, and I don't want to maintain the test when len("a") changes to returning 1.
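
    A made-up sketch of that sequence (Python, with strlen standing in for the len above): each step is the minimum needed to go green, and the wished-for throw-on-anything-untested behavior has to be spelled out by hand.

      def strlen(s: str) -> int:
          if s == "":   # after test 1: strlen("") == 0
              return 0
          if s == "a":  # after test 2: strlen("a") == 1
              return 1
          # Anything no test has forced yet blows up instead of
          # silently returning a wrong answer.
          raise NotImplementedError(s)

    The moment this is replaced with a real loop, behavior appears that no red test ever forced, which is the point.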

    These problems make TDD just look like a way to get useful work out of complete morons, in problem areas involving calculating functions that have finite domains that can be exhaustively tested.

    As soon as you write code that is more general, making more than just the new test case pass, you're leaving TDD. For instance, the test case wants len("a") == 1, but you write code such that len("b") == 1 would also pass, and len("abc") == 3 would also pass, and so on. You now have a lot of useful and good behavior that is not tested. You never had a len("abc") == 3 test that went from red to green.

    Once other code starts relying on len being a reliable length function, you must have left TDD behind. Code is calling len("foo.txt"), which has not been tested! How realistic is it to prevent that?

    At some point, a supposedly TDD-developed program must handle real-world inputs, and they could not all have been tested, because that's the reality. Only simple functions with small finite input spaces can be exhaustively tested. TDD must necessarily allow a lack of testing to creep in, and the rules for that, if any, are ad hoc.

  • _greim_ 587 days ago
    I've chosen to interpret TDD as "test driven design", based on the idea that systems designed to be easily unit-testable tend to also be easier to understand, maintain, extend, compose, repurpose, refactor, etc.

    I deviate from some proponents in that I think this kind of TDD can be done while writing zero unit tests, though in practice the tests keep you sensitized to good design techniques. Plus the tests do occasionally catch bugs, and are otherwise a good forum to exercise your types and illustrate your code in action.

  • varispeed 587 days ago
    What I do is not really pure TDD. I usually don't have a very clear specification of what the system needs to do (it's an iterative process), so I write the code and then write tests to check that it gives the required outputs for given inputs. Then I write tests to see if it behaves correctly in edge cases. I've also pretty much stopped using debuggers because of that; there's simply no need. I can reproduce an error with a test and then fix the code until it passes.
  • lakomen 586 days ago
    I don't want to write tests, why? Because most of them are like x = 1; if x != 1 panic(); If you don't trust the language, why do you use it?

    But then there are tests that make sense but are hard to write.

    And then there are tests that require infrastructure.

    I don't write tests for every little thing. But I do write them if I actually do want to test the functionality of what I just wrote. But stuff like

      s := new(Service)
      if s == nil {
        t.Fail()
      }
    
    is completely unnecessary
  • pkrumins 587 days ago
    My advice is to follow the famous quote: "given enough eyeballs, all bugs are shallow". Add a "send feedback" link in your application and let your users quickly and easily notify you when something goes wrong. My product has several million users and zero tests, and when bugs get pushed to production, users tell me in seconds. Sometimes pushing bugs to production is part of my workflow, and quickly fixing them lets me iterate at record speeds.
  • andersonvom 587 days ago
    I think people sometimes forget that tests are made of code too. If it's possible to write bad code, it's certainly possible to write bad tests. And writing bad tests first (as in `test-driven`) won't make them any better. At some point, people see bad tests _and_ bad code together and instead of blaming it on the "bad" part, they blame it either on the tests, or on the fact that the tests were written first.
  • davesque 587 days ago
    It seems to me that I began hearing a lot about TDD during an era of abundant web development work in the early 2010s. I think that kind of work lends itself well to TDD since there are a lot of well established design principles and conventions in that space. But it doesn't work as well in other more general software design contexts that are more open ended.
  • stuckinhell 587 days ago
    TDD is a great example of how major differences between businesses and departments have a direct impact on your software engineering.

    When business people don't know what they want, do not try TDD. It will be a waste of time. When people do KNOW, or you have a RELIABLE subject matter expert (at a big company you might have one of these), TDD is a lot safer and easier to do.

  • matchagaucho 587 days ago
    My reluctance to do pure "red-green-refactor" is more a side-effect of the IDE than the testing philosophy.

    Maybe it's an OCD thing, but I don't like seeing compiler errors from unimplemented pseudo-code and mock placeholders. It breaks my flow.

    But 2 files open at all times, writing tests as the main class is being developed? And no compiler errors? Love it.

  • yuan43 587 days ago
    > ... I practice “weak TDD”, which just means “writing tests before code, in short feedback cycles”. This is sometimes derogatively referred to as “test-first”. Strong TDD follows a much stricter “red-green-refactor” cycle:

    > 1. Write a minimal failing test.

    > 2. Write the minimum code possible to pass the test.

    > 3. Refactor everything without introducing new behavior.

    > The emphasis is on minimality. In its purest form we have Kent Beck’s test && commit || reset (TCR): if the minimal code doesn’t pass, erase all changes and start over.

    An example would be helpful here. In fact, there's only a single example in the entire article. That's part of the problem with TDD and criticisms of it. General discussions leave too much to the imagination and biases from past experience.

    Give me an example (pick any language - it doesn't matter), and now we can talk about something interesting. You have a much better chance of changing my mind and I have a much better chance of changing yours.

    The example in the article (quicksort) is interesting, but it's not clear how it would apply to different kinds of functions. The author uses "property testing" to assert that a sorted list's members are in ascending order. The author contrasts this with the alleged TDD approach of picking specific lists with specific features. It's not clear how this approach would translate to a different kind of function (say, one with a boolean result). Nor is it clear what the actual difference is, because in both cases specific lists are being chosen.
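
    For concreteness, a hedged sketch of the two styles in Python using the Hypothesis library (names invented; the article itself uses a QuickCheck-style tool):

      from hypothesis import given
      from hypothesis import strategies as st

      def my_sort(xs):               # stand-in for the quicksort under test
          return sorted(xs)

      def test_sort_examples():      # example-based: hand-picked lists
          assert my_sort([]) == []
          assert my_sort([3, 1, 2]) == [1, 2, 3]

      @given(st.lists(st.integers()))
      def test_sort_is_ordered(xs):  # property-based: generated lists
          out = my_sort(xs)
          assert all(a <= b for a, b in zip(out, out[1:]))

    For a boolean-returning function the same trick applies if you can state a relation to a simpler oracle, e.g. asserting is_sorted(xs) == (xs == sorted(xs)) over generated lists.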

    • tikhonj 587 days ago
      There was an example, using QuickCheck, of what it means to write minimal code to pass a test; it illustrates pretty much exactly the stuff you quoted.
  • brightball 587 days ago
    As with anything, there's going to be a group of strict adherents, strong opposition and the set of people who have used it enough to only apply it where useful.

    It's definitely useful, but those strongly opposed often won't use it at all unless it is mandated, which tends to lead to strict adherence policies at a lot of companies.

  • he0001 587 days ago
    One thing with TDD is that the code you are writing, you know, is testable. It's also easily testable code, as it's already written that way. Code which isn't written with TDD may be testable, but more often than not it's hard to test. And code that's hard to test will not be tested. And that's a slippery slope.
  • jboy55 587 days ago
    I feel like I've never seen a project that I liked, where I surprisingly discovered it was developed using TDD.

    I have seen only a handful of projects, show as examples of TDD, that actually were projects I liked.

    It's a variation of the "don't worry, they'll tell you" joke. How do you know a project was TDD?

  • jldugger 587 days ago
    > You write more tests. If writing a test “gates” writing code, you have to do it. If you can write tests later, you can keep putting it off and never get around to it. This, IMO, is the principle benefit of teaching TDD to early-stage programmers.

    Early stage programmers and all stages of project manager.

  • silentsea90 587 days ago
    It's odd how much time software engineers will spend on discussing the same old boring stuff like TDD. There are a thousand flowers blooming in cryptography, ML/AI, cryptocurrencies etc. Yet here we are with yet another rehash of the same discussion
  • gregors 587 days ago
    Write the code you wish you had. Define your expectation. Does your coding ability keep up with your expectations? Does that continue to hold for you or anyone else on your team on your worst day?

    Don't have any expectations and are exploring? Don't do any of this.

  • holoduke 587 days ago
    I believe a good way of programming is to always program in reverse: you start with the output and work back to where the algorithm or program starts. That way you can easily extract a unit test once you've finished the task.
  • majikandy 586 days ago
    Some people do tests, some people do development, and some people do test driven development.

    If you are picking up some code to work on, the nicest to work with is the one that was done with TDD.

  • Graffur 587 days ago
    I laugh every time I see someone trying to push TDD as the one way to write software. If it is useful to you... then go ahead. Don't try to push it on everyone.
  • CornCobs 587 days ago
    Slightly off topic, but I find the author's attempt to use an iterative method (TDD) to derive a recursive algorithm (quicksort) somewhat comical
  • kjgkjhfkjf 587 days ago
    I generally expect to see decent tests along with code in the same PR. I don't care whether the tests were written before or after the code.
  • jmconfuzeus 587 days ago
    I noticed that proponents of TDD are mostly consultants who sell TDD courses or seminars.

    You rarely see someone who writes production code preach TDD.

    Something fishy there...

    • righttoolforjob 587 days ago
      This is true. There are engineers who practice TDD as well, although quite few in my experience. The code I've seen come out from hardcore TDD is utter crap, because what truly matters is a good design. In fact writing tests first by design produces crappily designed and messy code and by extension crappily designed tests. Hence you end up with crap all over.
      • majikandy 586 days ago
        > writing a test first by design produces crappily designed and messy code?

        So you don’t get a say in the test or the code? It just messes itself up?

    • majikandy 586 days ago
      Interesting thought, but every evangelist of TDD that I’ve met has nothing to gain other than wanting the code in production to be cheaper to write, more maintainable, proven to work, faster to market, the list goes on. They often care more about the business and the costs involved in software development and are usually pretty selfless.
    • salawat 587 days ago
      People who write production code generally have QA teams they yeet the testing burden to.

      Said teams absorb a lot of the pain of Soft Devs who don't bother even running their own code.

      • joshstrange 587 days ago
        That's fair, but let's not pretend QA is even in the same ballpark as tests; QA is about a billion times more useful. First and foremost because they didn't write the code.

        I believe strongly that developers make horrible testers, because testing is a completely different mindset that is not easy to switch into, and nearly impossible when you're testing code you wrote yourself. We tend to lean into the "happy path" and don't even consider "outlandish" things a user might do. I have immense respect for a good QA person, and I've had the privilege of working with a number of them, some of whom I'd hire in a heartbeat if it were up to me.

        I'm not 100% anti-testing, but the only testing I've seen produce real results was automated browser testing, and that was only after 2 very large attempts by the development team to do it. Finally we brought in someone whose sole job was the automated browser testing suite. In a very short amount of time he had something working that produced useful results, something our 2 previous attempts never did. I believe it was in part because he didn't work with or write any of the code: he had no preconceptions, and he didn't "think" the way we did, since we knew the inner workings. From this and other experiences I think QA and Dev should be 2 separate groups/teams without overlap (don't have devs do QA; they don't like doing it and they just aren't good at it).

      • xtracto 587 days ago
        The "throw shit at the wall and see what sticks" programming technique.

        Several years ago I was part of a team of developers in an org that did not have a formal QA process. Code was "just OK" but we shipped working products. At some point we (the management) decided to add a QA step with formal QA engineers (doing part QA automation and part manual QA). As a result, engineers got sloppy once they realized they could get "extra time" by delivering half-assed code whose bugs would be caught by QA. That was painful.

      • majikandy 586 days ago
        Nah, the best teams don’t need QA.

        Caveat - because you bake the QA in. The code itself is already quality assured. Then QA resources can do exploratory testing which does carry a separate value.

    • ajkjk 587 days ago
      No way, this is false. Tons of engineers preach TDD.
  • throwaway1777 587 days ago
    I don’t. Never once seen TDD help more than it was a time sink. On the other hand writing lots of tests is great, but no need for TDD.
  • rodrigosetti 587 days ago
    TDD doesn’t work for the really interesting problems: you can’t achieve a deep creative solution through small mechanical improvements
  • agentultra 587 days ago
    I've been in software development for over twenty years.

    I have similar feelings about maximalism in a lot of areas.

    Many organizations producing software today don't share many values with me as an engineer. Startups aren't going to value correctness, reliability, and performance nearly as much as an established hardware company. A startup is stumbling around trying to find a niche in a market to exploit. They will value time to market above most anything else: quick, fast solutions with minimal effort. Almost all code written in this context is going to be sloppy balls of mud. The goal of the organization is to cash out as fast as possible; the code only has to be sufficient to find product-market fit and everyone riding the coat-tails of this effort will tolerate a huge number of software errors, performance issues, etc.

    In my experience practicing TDD in the context of a startup is a coping mechanism to keep the ball of mud going long enough that we don't drown in errors and defects. It's the least amount of effort to maintain machine-checked specifications that our software does what we think it does. It's not great. In other contexts it's not even sufficient. But it's often the only form of verification you can get away with.

    Often startups will combine testing strategies and that's usually, "good enough." This tends to result in the testing pyramid some might be familiar with: many unit tests at the bottom, a good amount of integration tests in the middle, and some end-to-end tests and acceptance tests at the top.

    However, the problem with TDD is that I often find it insufficient. As the article alludes to, there are plenty of cases where property-based testing gives stronger guarantees of correctness and can prevent a great deal more errors by stating properties about our software that must hold for the system to be correct: queued items must always be ordered appropriately, state must be fully re-entrant, algorithms must be lock-free. These things are extremely hard to prove with examples; you need property tests at a minimum.
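
    A hedged sketch of one such property in Python with Hypothesis (heapq standing in for the queue under test): the ordering invariant is stated once, over generated inputs, instead of enumerated example by example.

      import heapq

      from hypothesis import given
      from hypothesis import strategies as st

      @given(st.lists(st.integers()))
      def test_items_always_come_out_ordered(items):
          heap = []
          for item in items:
              heapq.heappush(heap, item)
          # Pop everything back out; no matter the insertion order,
          # items must emerge sorted.
          popped = [heapq.heappop(heap) for _ in items]
          assert popped == sorted(items)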

    The difficulty with this is that the skills used to think about correctness, reliability, and performance require a certain level of mathematical sophistication that is not introduced into most commercial/industrial programming pedagogy. Teaching programmers how to think about what it would mean to specify that a program is correct is a broad and deep topic that isn't very popular. Most people are satisfied with "works for me."

    In the end I tend to agree that it takes a portfolio of techniques and the wisdom to see the context you're working in to choose the appropriate techniques that are sufficient for your goals. If you're working at a startup where the consequences are pretty low it's unlikely you're going to be using proof repair techniques. However if you're working at a security company and are providing a verified computing base: this will be your bread-and-butter. Unit tests alone would be insufficient.

  • dingosity 586 days ago
    Meh. OP sets up a strawman that rarely exists outside Reddit message boards.

    TDD is the wind, it cannot be captured by your net.

    • dingosity 586 days ago
      Which is to say... it sounds like the OP inhabits an environment where design is decidedly un-ad hoc (the reference to TLA+ is a dead giveaway). Not everyone lives there. Not everyone approaches design the same way. People dissing the OP should chill. The OP should probably also chill. If someone tells you "you're doing it wrong," just ignore them. People who know what they're doing won't say "you're doing it wrong," they'll say "Hey, that's different than how we initially thought this methodology would be used... you must have a different environment from what we're used to. Let's dig into this a bit more."
  • littlestymaar 587 days ago
    It looks like the human brain is wired up in a way that can turn anything into a religion
  • SkyMarshal 587 days ago
    TDD is just a huge ugly kluge to compensate for languages designed with inadequate internal correctness guarantees. So instead we have to tack on a huge infrastructure of external correctness guarantees instead. TDD is an ad-hoc, informally-specified, bug-ridden, slow implementation of half of a strong type system.
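
    To make the quip concrete, a made-up sketch (Python with mypy; all names invented) of one constraint moved out of tests and into a type:

      from typing import NewType

      Meters = NewType("Meters", float)
      Feet = NewType("Feet", float)

      def add_clearance(distance: Meters) -> Meters:
          # Plain float math inside; the boundary is typed.
          return Meters(distance + 1.5)

      add_clearance(Meters(3.0))   # fine
      # add_clearance(Feet(3.0))   # rejected by mypy at type-check
      #                            # time: the unit-mixup bug a runtime
      #                            # test would otherwise have to catch
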
    • Jtsummers 587 days ago
      Unless your language includes a proper proof system for the entire program logic (see Idris or SPARK/Ada for something close to this, though in the latter it can only work with a subset of the overall Ada language), you will need tests. Even in languages like Haskell, Rust, and Ada which have very good and expressive type systems tests are helpful for validating the actual logic of the system.
      • SomeCallMeTim 587 days ago
        Needing tests != TDD.

        Needing tests != Unit Tests.

        Adding larger system tests after the fact is perfectly reasonable. TDD wants you to write tiny tests for every square millimeter of functionality. It's just not worth it, and 99% of the value is to make up for shortcomings in dynamic languages.

      • AnthonBerg 587 days ago
        Agreed! Personally, I am of the opinion that Idris for one is mature enough that there is no need to forgo tools that have a proper proof system for the entire program. It's feasible today.

        Idris is carefully and purposefully described by its creators as not production ready. Nonetheless, because of what Idris is, it’s arguably more production-ready than languages which don’t even attempt formal soundness to anywhere near the same degree. In other words: Idris is not a complete Idris. But! All the other languages are even less complete Idrises!

        Big old “personal opinion” disclaimer here though. –Let’s prove it’s not possible to use Idris by doing it! Shall we?

      • SkyMarshal 587 days ago
        Yes that’s true, and imho the objective should be to move as much of TDD as possible into the type system. Despite my OP maybe implying it’s binary, it’s not, and getting closer to that objective is just as worthy as getting all the way there. It’s still a hard problem and getting all the way there will take years or decades more experience, experimentation, research, and learning.
      • haspok 587 days ago
        Yes, you will need tests, but do you need 1. TDD? 2. Unit tests?

        I agree with Jim Coplien when he argues that most unit testing is waste. And TDD is even worse, because it is equivalent to designing a complex system purely from the bottom up, in miniature steps.

        • Jtsummers 587 days ago
          > And TDD is even worse, because it is equivalent to *designing* a complex system purely from the bottom up, in miniature steps. [emphasis added]

          What fool uses TDD to design? The second "D" is "Development". If people want to act foolishly, let them. Then come in later and make money cleaning up their mess.

          • MattPalmer1086 587 days ago
            Better design is one of the supposed benefits of TDD. The article nicely demolishes that view, and I agree fully with what it says.

            There is a small scale design benefit to writing tests, and that is simply that you always have a "user" of your code, even if it's only focused on tiny bits of it.

            But having said that, I get essentially the same design benefit from writing tests afterwards, or writing a test client, or writing user documentation. I usually discover code needs some design improvement once I have to explain or use it.

            • Jtsummers 587 days ago
              It leads to better design (is the theory), but it is not itself a design process. It's a development process. TDD doesn't replace the need to stop, think, and consider your design.
    • s17n 587 days ago
      And yet nobody has yet succeeded in creating a type system that is usable for representing all but the simplest constraints.
      • SkyMarshal 587 days ago
        We must have different definitions for “simplest constraints” then.
        • s17n 586 days ago
          No, just different definitions of "usable". Although it's worth noting that an average test is testing stuff that even the most advanced type system never could.
  • Lapsa 587 days ago
    damn it. hoped it's about that train game
  • m463 587 days ago
    TDD - test driven development
  • rybosworld 587 days ago
    Seems like the TLDR is: Well-intentioned patterns break down when taken maximally.
    • twic 587 days ago
      That's definitely part of it.

      The article is really quite good. Much, much better than the discussion here prepared me for!

  • gregmac 587 days ago
    The author defines two types of TDD: "weak TDD" and "strong TDD". I'd argue there's another, though I'm not sure what to call it -- "Pragmatic TDD" perhaps? What I care about is having unit tests that cover the complicated situations that cause bugs. I think one of the main problems with TDD is its proponents focus so much on the process as opposed to the end result.

    The way I practice "pragmatic TDD" is to construct my code in a way that allows it to be tested. I use dependency injection. I prefer small, static methods when possible. I try not to add interfaces unless actually needed, and I also try to avoid requiring mocks in my unit tests (because I find those tests harder to write, understand, and maintain).

    Notably: I explicitly don't test "glue code". This includes stuff in startup -- initializing DI and wiring up config -- and things like MVC controllers. That code just doesn't have the cost-benefit ratio to justify tests: it's often insanely difficult to test (requiring lots of mocks or a way over-complicated design), and it's obvious when it's broken because the app just won't work at all. Integration or UI automation tests are a better way to check this if you want to automate it.

    I strive to just test algorithm code. Stuff with math, if/else logic, and parsing. I typically write the code and tests in parallel. Sometimes I start writing what I think is a simple glue method before realizing it has logic, so I'll refactor it to be easy to test: move the logic out to its own method, make it static with a couple extra parameters (rather than accessing instance properties), move it to its own class, etc.
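
    A made-up sketch of that refactor (Python, invented names): the arithmetic moves out of the glue into a small function that takes everything as parameters, and only that function gets unit tests.

      # Before: logic buried in glue; testing it means mocking
      # request, config, and db.
      def handle_invoice(request, config, db):
          total = sum(i.price * i.qty for i in request.items)
          if total > config.discount_threshold:
              total *= 0.9
          db.save_invoice(request.customer_id, total)

      # After: a small pure function; the glue just calls it.
      def invoice_total(items, discount_threshold):
          total = sum(price * qty for price, qty in items)
          return total * 0.9 if total > discount_threshold else total

      def test_discount_applies_above_threshold():
          assert invoice_total([(100.0, 2)], discount_threshold=150.0) == 180.0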

    Sometimes I write tests first, sometimes last, but most often I write a few lines of code before I write the first tests. As I continue writing the code I think up a new edge case and go add it as a test, and then usually that triggers me to think of a dozen more variations which I add even if I don't implement them immediately. I try not to have broken commits though, so I'll sometimes comment out the broken ones with a `TODO`, or interactive rebase my branch and squash some stuff together. By the time anyone sees my PR everything is passing.

    I think the important thing is: if you look at my PR you can't tell what TDD method I used. All you see is I have a bunch of code that is (hopefully) easy to understand and has a lot of unit tests. If you want to argue some (non-tested) code I added should have tests, I'm happy to discuss and/or add tests, but your argument had better be stronger than "to get our code coverage metric higher".

    Whether I did "strong red-green-refactor TDD" or "weak TDD" or "pragmatic TDD" the result is the same. I'd argue caring about how I got there is as relevant as caring about what model of keyboard I used to type it.

  • righttoolforjob 587 days ago
    TDD is really, really bad. I won't even add arguments. TDD is typically sold by Agilists, most of whose content deserves to go in the same trash bin. Most of these people have never written code for real. Their opinions are worthless. Thanks, bye.