I get the feeling that the "wasted effort" in the second approach is about rewriting code.
Rewriting code is good in my experience. It's always better the second time around (or third, or fourth). It's not that my first attempt was rubbish, it's that I didn't understand the problem as well as I did the second time (and so on).
Lean is also about avoiding premature optimisation. Which is hard because it cuts against the grain of our engineer sensibilities. Doing something "good enough for now" is tough, when you know that with just a few more days' effort you could make it bulletproof. But I've had to delete "bulletproof" code so many times, because it turns out the product didn't need that feature, or it needed to work differently.
In the long term, Lean avoids more wasted effort, in my experience.
There's a difference between Lean and Instant Legacy Code.
The key difference is a focus on simplicity, including cyclomatic and compositional simplicity.
Simple is not the same as simplistic.
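To make "cyclomatic simplicity" concrete, here's a toy sketch (the `shipping_cost` example and its rates are made up for illustration): the first version grows a new branch for every case, while the second separates data from decision and stays flat as cases are added.

```python
# Branch-heavy version: cyclomatic complexity grows with every new region.
def shipping_cost_branchy(region, weight_kg):
    if region == "EU":
        if weight_kg <= 1:
            return 5
        else:
            return 9
    elif region == "US":
        if weight_kg <= 1:
            return 7
        else:
            return 12
    else:
        raise ValueError(f"unknown region: {region}")

# Compositionally simpler: one lookup, one decision, trivially extensible.
RATES = {"EU": (5, 9), "US": (7, 12)}  # (light, heavy) cost per region

def shipping_cost(region, weight_kg):
    if region not in RATES:
        raise ValueError(f"unknown region: {region}")
    light, heavy = RATES[region]
    return light if weight_kg <= 1 else heavy
```

Both are "simple" in the simplistic sense; only the second stays simple as the table grows.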
Lean as a methodology is meant to remove obstructions and waste in manufacturing. Effort is not a thing that can be wasted. Excess code is the waste. Fixing bugs is wasted time compared to not having them.
"Good enough for now" is a great excuse to keep hacks around and let them accumulate to the point where the code is unmaintainable. Meaning too much code, meaning waste. Even better is "it works now, do not touch", especially when the current code base is untested.
Programmers are typically lazy and do not bulletproof anything ever. Thus rampant security issues.
The alleged wasted effort is from the point of view of some manager who doesn't get to tick boxes quicker. (And disregards later massive drop in development velocity while presumably demanding same results.)
This means spotted issues are deferred indefinitely unless a customer reports them. Which they won't, or even can't, so your software brand gets known as buggy trash, with workarounds commonly peddled among users and devops.
This is probably out of place, but are you OK? I burnt out about a decade ago, and would have agreed with a lot of what you're saying back then. It took me years to get back into a happy place with tech.
Dysfunctional teams and organisations produce this kind of cynical rage, not Lean.
Dysfunctional companies produce buzzword laden or metric based management. (As opposed to good software.)
Lean is not a software development methodology. It is made for factories and production lines, a terrible fit for most kinds of software. The only salvageable parts of it are iteration and listening to frontline workers to get process improvements.
"Autonomation"/Poke Yoke as in automated tests.
Which is not enough of a methodology.
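The "automated tests as poka-yoke" idea above can be sketched in code: a check that refuses defective input outright, plus a test that "stops the line" in CI the moment behaviour regresses. The `parse_price` function here is purely illustrative, not from any real project.

```python
# A poka-yoke in code form: the function rejects malformed input outright,
# and the test fails the build the moment the behaviour regresses.
def parse_price(text: str) -> int:
    """Parse a price like '12.30' into integer cents; reject anything else."""
    whole, _, frac = text.strip().partition(".")
    if not whole.isdigit() or (frac and not (frac.isdigit() and len(frac) == 2)):
        raise ValueError(f"not a price: {text!r}")
    return int(whole) * 100 + int(frac or 0)

def test_parse_price():
    assert parse_price("12.30") == 1230
    assert parse_price("7") == 700
    for bad in ("", "12.3", "1,50", "abc"):
        try:
            parse_price(bad)
        except ValueError:
            pass  # the "defect" was caught at the source, as intended
        else:
            raise AssertionError(f"accepted bad input {bad!r}")
```

The point isn't the parser; it's that the defect is caught where it's introduced, not three stations downstream.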
The "Lean for software" page gives a contradictory definition of waste - you're supposed to minimize defects while at the same time minimizing rework. I'd like a crystal ball that enables that. Plus you cannot apply it without absolute control over the whole development process. Any part that is a black box (say, both the set of features and the deadlines are handed down) breaks the process. Thus it fails in corporate environments.
Likewise, general agile methodologies are easily perverted into what I just described - by skipping refactoring and redesign parts in service of deadlines.
That model works only if you throw things away like startups do or the project is small and self-contained.
Usually small projects are low value or grow big. Cf. Twitter or YouTube when they started versus now. Even worse if you have to interact with quickly changing parts owned by another team, even in a medium-sized project.
I agree with everything you said, but I must point out that Lean comes from Toyota. While most people know about the production system, which is indeed applied to factories, Toyota applies this to the product development too.
Product development is much closer to what software development is.
Unfortunately there is a lot of misinformation on the topic, but there is a very good book, called "Toyota Product Development System", which describes how Lean is applied there. There is an insane amount of valuable information in there, every software company should be at least aware of those engineering practices.
This is a common view these days. But as a technical founder, I disagree with it. Once you launch the bare minimum first version, there are different customer segments who pull you in different directions. Often you chase one path and find that the customer segment is not lucrative enough, then chase the next. By the time you find a compelling use case with money-making potential, you will end up with various sets of features used by different customer segments.
This might still be less wasteful when compared to building an entire product and finding no customers. But, it is taxing on the technical founder! Lean washing shouldn't set a wrong expectation for the technical founder involved in startups that follow the rapid prototyping approach.
man, I feel your pain. I've been through this a few times. Just about to go through it again. Yes, it's tough. I keep hoping that there's a non-tough bit later, when most of the technical problems are solved and we get to sit back a bit and watch the business folk hustle. Haven't found that point so far, though.
Maybe the OP needs to clarify what type of "problems" exactly he is referring to... the examples he gives point to problems requiring significant engineering effort, which is very different from the examples you note above.
The difference is how well defined the problem is. Clojure, Datomic, and Git all benefited from predecessors that were used extensively and had definable shortcomings with a clear enough technical solution.
Yes. I would say people or teams which developed solutions that look as if they "just appeared out of careful consideration" did not -- they simply learned from others and took in that prior world knowledge. The solution space for most problems is not "one global optimum" -- use cases have to be taken into account (which is why great projects like Clojure are not used for everything).
I would go so far as to say there isn't actually a dichotomy here -- you should be swapping between launching something with a hypothesis (in lean mode) and then gathering feedback and considering alternatives as you are proven correct/incorrect (hammock mode). I think Gall's Law is also relevant here:
"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."
If all you do is think and think, then you open yourself to mis-timing a solution, feature and scope creep, and risking "unknown unknowns". If all you do is launch and incrementally iterate you'll be stuck solving very narrow problems.
Top-down generally only works for a well-understood problem domain, and even then it only holds up for focused projects where you have the power to declare what is in scope and out of scope in a very strong fashion. This works better for dev tools, libraries, middleware or other projects that are abstract and not tied too closely to a specific business or end-user goal. In other words, the more abstract the tool and the more technical the audience, the more likely that you can drive massive impact while maintaining a simple vision and avoiding all kinds of edge cases and incidental complexity.
I believe Git actually belongs to the second group rather than the first.
Linus built the working, self-hosting prototype in 3 days, mixing a lot of his learnings from BitKeeper with his knowledge of disk management.
To me, that's rapid prototyping. He had enough domain knowledge to make it work well for himself. He didn't spend a bunch of time thinking or coming up with a solution, since he was actively building Linux at the time. The key is that he employed the help of others to build Git, and eventually handed it over, since he wanted to focus on Linux.
This all comes with a huge caveat in that Linus's 3 days == 1000 of mine. His 'just enough' knowledge is near expert level.
As others have asked, what are you trying to build? A technical solution or an end-user solution?
Technical solutions do require a lot more domain knowledge than a Twitter/Airbnb (at the early stages).
In the end, I believe in rapid prototyping and failing fast. Learn just enough, whether technical or end-consumer to launch fast.
The thing I agree with 100%, though, is: don't break user space. I believe this applies to end users of products, whether developers or customers. Once people start consuming something, don't break it. It doesn't matter whether you believe it to be 'correct' or a 'bug'. Manage expectations with slow and easy deprecation.
Linus’s three days of coding effort was probably preceded by months/years of thinking about the essence of version control for a large distributed development project (a la the Linux kernel). It’s difficult to just stumble on what became the internal structure of Git in a few days if you've only just started thinking about version control systems.
So, for me the distinction between the two approaches (prototyping -vs- hammock driven) is lately about whether you are solving a largely known/understood problem (equivalent to having domain expertise, in an absolute sense) -vs- solving problems to which you don’t know the answers. In the latter case, there is no shortcut getting around thinking time.
Or, as they say: “A month in the laboratory could save an hour in the library”
One is using a top-down approach versus an iterative approach. The other is about the nature of your problem: do you have product risk or market risk?
The lean approach is about eliminating waste, which, in the context of startups, often means building something small and talking to users. But that's only because most startups have market risk. If you have product risk, you should still iterate on your solution instead of building it in one go.
I feel like you are asking for examples where the market-risk was addressed. The most interesting companies would be those where the first test was a total miss and they solved a totally different problem in the end.
Wouldn't Lisp count for the top-down approach? Only that most of the thinking was done without thought spent on an actual implementation, and the actual implementation was done by different people who recognised the practicality of it?
Dark (https://darklang.com) was a bit of both. I spent several years thinking about the problem, and once I had a solution and decided to work on it, went into rapid prototyping to figure out if it could work. To a certain extent, we're still in that phase, just with a much bigger team now.
* Segment, started as a thumbs up/down tool for professors in lectures to work out when students are getting confused. They realised everyone just went to Facebook instead, then they wondered why they couldn't tell this when they were remote! https://www.youtube.com/watch?v=l-vfn97QTr0
At a strategy level, which one of the two points you end up doing depends on who commissions and evaluates the work. If someone hands you a spec and then disappears, only to come back 5 years later and expect to receive a finished project, you'll be working "top-down". If you're trying to solve someone's immediate problem with software, and are in regular contact with that someone, you'll be working in "rapid prototyping". All projects, software or otherwise, are spread around the spectrum between the two endpoints. Where exactly depends on specifics, but if you're starting a new project, the consensus is that you should aim to be closer to the "rapid prototyping" end.
At a tactical level, feature level, you mix both. You state your problem (or get it stated to you), you think it through, hopefully considering at least some related work and doing some hard thinking, come up with multiple possible solutions and evaluate their trade-offs... by implementing their prototypes as fast as possible, because that's the only real way to discover the trade-offs. Depending on how much in a hurry you are, you might pick the first prototype that isn't a total disaster and build your feature from it, then test it, and repeat.
See how "top-down" and "rapid prototyping" is interwoven here. This approach can be expressed as: think before you do, but remember that you only learn the true scope of a problem by attempting to solve it.
My personal avalanche detection project https://avanor.se (I haven't started the image uploads for this season yet), seems to fit the second model. It's very simple and small, but is successful in terms of being a prescribed tool for professional avalanche forecasting in Sweden.
I think it's on its third rewrite or something right now, and it runs circles around the only other service in this space in terms of bang for the buck (guess my budget; it's smaller than that).
I like and have successfully used the Business Model Canvas approach. You fill in all assumptions. Test the riskiest one in the simplest manner, then move on. E.g. if you're not sure you can find the right partnerships, look for that first. If you're not sure the value proposition would work, interview some people, make some mockup PowerPoint slides, and so on.