#5 has a converse - oftentimes, the only way to get a rebuild to succeed is to drop features, and it's a major red flag if management insists on 100% feature parity.
The way to distinguish this from the #5 situation in the article is to ask if you're dropping features because they're hard or because nobody uses them. The former is a red flag; the latter is a green flag. Before you embark on a rebuild, you should have solid data (ideally backed up by logs) about which features your users are using, which ones they care about, which ones are "nice to haves", which ones were very necessary to get to the stage you're at now but have lost their importance in the current business environment, and which ones were outright mistakes. And you should be able to identify at least half a dozen features in the last 3 categories that you can commit to cutting. Otherwise it's likely that the rewrite will contain all the complexity of the original system, but without the institutional knowledge built up on how to manage that complexity.
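As a sketch of how lightweight that kind of usage logging can be (the decorator and feature names here are hypothetical, not from any particular framework; in production the counts would go to a metrics store or structured logs rather than an in-process counter):

```python
from collections import Counter
from functools import wraps

# In production these counts would be shipped to a metrics backend;
# a Counter keeps the sketch self-contained.
feature_usage = Counter()

def track_feature(name):
    """Decorator that counts how often a feature's handler is invoked."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            feature_usage[name] += 1
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@track_feature("export_csv")
def export_csv(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)

export_csv([[1, 2], [3, 4]])
# Features whose counters stay at zero for months are the cut candidates.
```

A few lines of instrumentation like this, left running for a quarter, turns "I think nobody uses this" into data you can put in front of management.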
> Before you embark on a rebuild, you should have solid data (ideally backed up by logs) about which features your users are using, which ones they care about, which ones are "nice to haves", which ones were very necessary to get to the stage you're at now but have lost their importance in the current business environment, and which ones were outright mistakes.
This is so important. I've been on many a project where, 3 months in, we wish we had historical tracking data on user activity to back up our instincts to cut a particular feature that seems worthless. The worst part? Even if you add it immediately, you'll have to wait 2-4 weeks to get a sufficient amount of data.
Yup; statistics are only part of the picture and value of a story. Compliance is another one, for example; sure, few people will use the 'download all my data' and 'delete my account' options, but they're mandatory for GDPR compliance and not offering them may cause a huge fine. There are a lot of these compliance features.
We need case law to settle the matter, but in general the GDPR indicates that if you don't need to collect the data in order to perform the requested activity, you need explicit consent for collecting it, and you will be held to a high standard in court if this ever comes into question.
Yes, but like the "cookie law" before it, it's absolutely fine to go ahead and do it if it's required (in the case of something like logging aggregate usage counts of APIs, that's easy to justify as a requirement for maintaining a reliable service; it's basic server monitoring).
Things like online stores using cookies to track a user's shopping cart across requests are completely fine, yet it seems like legal departments decided to be overly cautious and treat all cookies as potentially infringing. GDPR may be triggering similar reactions.
I wouldn't have a problem with that if marketing departments became equally cautious, but they seem to just slap on a banner and carry on as before :(
With GDPR, I'm hoping that I don't have to bother my users with a "do you consent to" popup when the only thing I want to do is log API calls server-side so that I can see patterns in usage and such. If I were to show such a popup, users might mistakenly think I'm one of those techcrunchers with hundreds of data partners that all get to see your PII. I do not want to affiliate myself with those types of actors.
"The principles of data protection should therefore not apply to anonymous information, namely information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable. This Regulation does not therefore concern the processing of such anonymous information, including for statistical or research purposes."
As long as it's not linked to a particular profile ("pseudonymous" doesn't count, it could still be linked), it's fine.
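For example, a server-side usage log along those lines might count only normalized endpoint paths and drop anything that could identify a person. A hypothetical sketch (and not legal advice):

```python
import json
import time
from collections import Counter

endpoint_counts = Counter()

def normalize(path):
    # Replace numeric path segments so per-user URLs don't act as identifiers.
    return "/".join(":id" if seg.isdigit() else seg for seg in path.split("/"))

def log_request(path):
    # Deliberately drop IPs, user IDs, and session tokens: only the
    # normalized path is counted, so nothing ties back to a person.
    endpoint_counts[normalize(path)] += 1

def flush_stats():
    # Periodically emit an aggregate snapshot and reset the counters.
    snapshot = {"ts": int(time.time()), "counts": dict(endpoint_counts)}
    endpoint_counts.clear()
    return json.dumps(snapshot)

log_request("/api/users/42")
log_request("/api/users/97")
```

Since only aggregate counts survive, with no per-user linkage, this is the sort of logging the "anonymous information" recital quoted above is describing.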
One thing to be careful not to fall afoul of when you choose to remove features is assuming there is some kind of meaningful average user.
A good example is MS Office: there is a huge number of features that only 5% of users might ever use, but the majority of users are likely to use quite a few of these niches individually, and if you remove all the low-use features, you piss off basically everyone.
I think the mistaken idea of an average user is why a lot of metrics driven software seems to get more and more useless with every update.
(I can't see the present/away status of contacts in the newest Skype, really guys?)
I think it's important to separate feature improvements from a technical rewrite. Ideally, in the rewrite you mostly just make things work the way they did; sometimes you might fold a feature improvement into it, but if you come out of the rewrite with a more stable product that has about the same usage stories, you should consider it a success.
Sometimes you will want to fold features into a rewrite (e.g. removing a prompt that asks the user to confirm X twice). Sometimes this will ease development and be worth it, but other times it'll pay off to just retain the old functionality and add the change to a list to be user-tested later.
Once the tech transition is solidly done, then take a swing at updating the poor UI. Do it in an agile way so you can back out of changes the user base rejects, since (at least within my more modest usage studies) not everything people depend on comes up or gets reported. I'd much rather roll back a design feature branch than have users get change fatigue when you're forced to roll back your new shiny rebuild and the whole project ends up being shelved.
Almost all feature requests are asking to implement a particular solution rather than asking to come up with a solution to solve a particular problem.
The way I try to solve this is to ask "why?" as many times as it takes to get to a fundamental business problem. Then it becomes easier to have a user story (as opposed to a specific feature request) and come up with other solutions that can be measured against the story. It also helps to keep the product focused, as it's easier to tell when a story is not for your target market vs a feature request -- and then you can make a conscious decision to either stay away or deliberately expand to that market.
When this happens a couple times you start sounding like Honey from the Incredibles.
It’s difficult not to sound combative when they say they want a convertible but you have to wheedle out of them that they want to take a proverbial road trip through monsoon season. No, you get a Land Rover with a snorkel or you wait, pal.
So bossy and difficult. Why won’t you just give us what we asked for? These meetings would go so much faster.
I once worked on a feature that apparently lots of clients were asking for. It took 4 weeks to implement. Went to production. Never heard anything of it. Two years later I was asked if we could modify the feature to work for another use case. I looked at the database. The feature had never... ever... been used. Rows returned = 0.
That's why it's important not to just believe what customers and product managers say about which features they want. I've had a ton of occasions where it turned out that what they really wanted was totally different from what the devs were told.
Feature parity is the reason some of the projects I've worked on ran into #2: you can't get customers to switch if there's no parity yet. The MVP for some of those projects took a year to get to. Mind you, it'd probably have been 6 months if they hadn't opted for a microservices architecture.
There have been a couple times where I’ve tried to use a feature that should have been awesome but was terrible then it got pulled in a newer version of the product. It was incredibly frustrating to wait for a fix that never came. Data on what’s used is good but you need to get feedback about what sucks to go along with it.
"Red Flag #4: You aren't working with people who were experts in the old system."
I think this is most important. A lot of people want to rewrite because they don't understand the current system and don't want to bother learning. Before you rewrite you really should understand the current state deeply.
The way I've phrased something similar before is "don't do a full rewrite if you couldn't write up a plan for refactoring in place to fix the problems with the old system."
If you can build that plan, and make the case that it will be easier to do the full rewrite, go for it. But if you couldn't put together the fix-in-place plan, you might not understand everything the old system does well enough to actually estimate the size of a rewrite...
(This isn't solely for full-parity rewrites: if you're dropping features, what does that look like dropping from the old system?)
I was involved in a rewrite where it would have been much easier to refactor the old system.
A year into the process one of the c-level leaders pulled me into a room and asked why I couldn't fix the legacy code, and I basically told him that he should have pushed back on it. I couldn't fix the legacy code because that would be months of refactoring that should have been done instead of the rewrite.
Context: the legacy code had some design flaws that required major refactoring, but the legacy code "worked" except for very large deployments. The only problem was that the legacy system wasn't modular, so it didn't have unit tests and wasn't cross platform. All of those problems are easier to tackle via refactoring instead of a full rewrite.
> The way I've phrased something similar before is "don't do a full rewrite if you couldn't write up a plan for refactoring in place to fix the problems with the old system."
Hmm... there have been a number of times when I've banged my head against the wall trying to figure out how to make my own code do something, until I finally bit the bullet and decided to rewrite the entire chunk from scratch and suddenly it took a fraction of the time I had spent trying to fix it to get it written and working. Not sure how to reconcile this with the advice you gave.
Knowing what the system should do, in sufficient detail that there is nothing of significance to be discovered with regard to its requirements, while simultaneously not actually knowing enough about how it works to the point where you could plan how to refactor it, is quite a corner case in the field of legacy systems (the latter is quite commonplace, but the former is almost unheard of.)
Yep, seen that. I worked on a system where the company did not really want a reimplementation, but they destaffed a project at one site and reconstituted it with all new people at another site. The new people decided to rewrite from scratch. A year and a half later I started getting questions by email from the new people, questions indicating that not only did they not understand the implementation of the legacy system, they also did not clearly understand the business requirements that resulted in that implementation. Meanwhile, the maintenance of the old system had been neglected to such an extent that it had fallen behind critical company-wide mandates. This was more of a lesson about why you shouldn't destaff a project over petty geographical squabbles, but also quite clearly about why you should always incrementally reimplement software rather than rewriting it.
Even having the entirety of the original dev team there, time takes its toll on recollection of reasoning behind some of the strange decisions made in something that would warrant a rewrite. Much preferable to not having them, of course.
Something I do is if the code looks weird or is rather small for how much work went into it, I leave a comment that says why this was done... just so I can remind myself in 6 months when I go "who the fuck wrote this garbage... oh, me."
#4 is sort of terribly worded. The summary line is important and pretty independent: make sure you're working with expert users of the system. But then the explanation brings in a senior dev as the good resource to tap. This is the wrong direction; you really want to consult the system experts to see their rationale for requesting what might seem like odd functionality in the first place.
#4 also mixes a good deal with #5 in that any changes you make (even purely good ones in your view) will require retraining of users and cause a kerfuffle when rolled out to your user base, people _hate_ change.
Keep in mind that you will be one of these people in a few years for whatever you are doing now. The previous people most likely weren't dummies but had to deal with the technology and constraints at the time they built the system in the same way you are doing it now.
Not necessarily. It depends on what has led to the need for a rebuild. Sometimes there weren't previously the resources to "do things properly"; sometimes a feature might only have been added for a specific client; etc.
You need that previous knowledge to know the "why" of things & if that why is still valid.
IMHO it's more dangerous if you're working with experts who don't want to improve the system.
I’ve carved a career out of rebuilds. I’m working on a rebuild right now. There’s a ton of companies out there who’ve done very well with their home grown antiquated systems from the late 90’s and early 00’s that are now facing stiff competition from young upstarts who had feature parity from day one and are knocking out new features at break neck pace because they’re leveraging the latest and greatest in tools, technology, and thinking.
I’ve always been a big believer in rebuilding your product from the ground up. I think it’s something you should always have going on in the background. Just a couple of devs whose job it is to try and rebuild your thing from scratch. Maybe you’ll never use the new version. But I think it’s a great way to better understand your product and make sure there’s no dark corners that no one dare touch because they don’t understand what it does, how it does it, or why it does it the way it does.
And I’ve always believed that if you don’t want to rebuild your app from scratch, then don’t worry, a competitor will do it for you.
So I agree with every point raised in this article. And I think it does a great job of articulating the issues that often go unspoken. But I'd like to add one more. And for me, this is the biggest issue for any company wanting to rebuild its product.
If your sales team has more clout than your designers and developers, then you're fucked. And in the enterprise software world, this is the norm. An unchecked sales team that gets whatever it wants has already killed your product and made it impossible to rebuild. Their demands are ad-hoc, nonsensical, and always urgent. So urgent that proper testing and documentation are not valid reasons to prevent a release. Their demands are driven by their sales targets, and the promises they make to clients are born out of ignorance of what your product does, and how it does it.
This is not true of all companies. Many companies find a reasonable balance between the insatiable demands of a sales force and the weary cautiousness of their engineers. But if your company submits to every wish and whim of your sales team, and you attempt to rebuild your product, then you’re screwed.
It's very hard to get a man to understand something when his salary depends on his not understanding it. By the same principle, as someone who has built a career out of rebuilds, we shouldn't be surprised that you'll recommend this solution for a majority of hypothetical problems. I don't think you are intentionally misleading people, and I'm sure that you want the best for your clients and that you believe that's what you're providing. It's just that, for anyone else reading this thread, please realize that you're getting one side of the story.
Incremental rebuilds are not sexy. Adding unit tests to legacy code (thereby making it not legacy code according to Michael Feathers) is not sexy. Sticking with the tried and true technology is not sexy. But they are typically the most successful approaches for those not compensated for changing things for change's sake.
> I’ve always been a big believer in rebuilding your product from the ground up. I think it’s something you should always have going on in the background. Just a couple of devs whose job it is to try and rebuild your thing from scratch.
Their time is much better spent improving the "legacy" codebase. Simple refactoring and splitting the codebase in a modular fashion mean you can work on limited parts of the system in isolation. This makes incremental improvements and a switch to new tech much easier, and certainly less risky than a rewrite.
Well (on a relative scale), won't most startups or smaller companies be more in the phase of "writing" as opposed to "re-writing"? I think the advice above would in theory apply to companies big enough to have legacy codebases.
> If your sales team has more clout than your designers and developers, then you're fucked. And in the enterprise software world, this is the norm. An unchecked sales team that gets whatever it wants has already killed your product and made it impossible to rebuild. Their demands are ad-hoc, nonsensical, and always urgent. So urgent that proper testing and documentation are not valid reasons to prevent a release. Their demands are driven by their sales targets, and the promises they make to clients are born out of ignorance of what your product does, and how it does it.
Well said. This is easily my #1 biggest pain point as a developer.
> I’ve always been a big believer in rebuilding your product from the ground up. I think it’s something you should always have going on in the background. Just a couple of devs whose job it is to try and rebuild your thing from scratch.
I completely disagree. Having a pure-research team with a mandate of "all your research must be geared towards totally reimagining the entire product" is a dumb idea. Having more granular (and collaboratively driven) goals, from "some of your research must be geared towards totally rewriting an area of our application that is a major pain point, and for which all previous attempts to do incremental changes have failed for technical reasons" to "look into better tools or strategies we could use to tune performance of, or write better tests for, swaths of existing code", is more realistic and more useful.
Obviously, other, not-pure-research devs should be given time to do some of that work as well, otherwise the research team becomes the "saviors that are always about to come back over the hill" for every other team while they kick their respective cans down the road.
That's pretty different to what you were talking about originally. You started by talking about rebuilds and said, have two guys just rebuilding the product. Now you're saying they're doing R&D. Those are very different tasks.
Red Flag #6: Key stakeholders keep moving the goal posts.
If your goal moves from feature-comparable but on a modern platform, to new features, to a complete reinvention of the product, all without actually shipping... you might be in trouble.
I had a rebuild go 6 months over. In the heated executive meeting at t+3 months I was called in to defend my team, and I pointed out that the VP of Product had just delivered "final" specs literally the day before. How could we be on track with development if PM was 3 months past "end of development" with design specifications? The fact that the specs were changing weekly because "we're agile" is a whole other issue.
People sometimes complain about how developers like to "write the operating system and then a language" when it comes to handling every foreseeable permutation of what the program might ever be desired to do, but we're all so used to unstable requirements that sometimes the metaphorical programming-language research is the only thing general enough to find a use next week.
I almost had a few similar situations, but after pointing out that being agile doesn't just mean changing requirements but also changing timelines, or simply different deliveries after each change, it became a whole lot clearer what agile (and scrum) is good for, and what it's not good for (i.e. an agile process with waterfall expectations doesn't work).
The truth I think is more often that the legacy system is too old and brittle to improve, and customers are demanding ever more complicated features from it.
So you rebuild as a gamble, because even though it shows all the traits described, the new system is at least one that people are willing to develop, one where features can be added, and one for which people can be recruited.
We know big rebuilds have small chances of success. But that doesn't mean you shouldn't do big rewrites. You are in a bad place if you even consider one. Maybe the big rewrite means the company has an 80% risk of going under. It could still be the safer bet.
This is yet another article with a clear managerial-only approach. Sorry, but I don't dig this.
As a developer you're constantly fighting managers who want to rush things to get them out and who will eventually blame you for a bug/non-defined behavior once you hit a certain milestone.
To me it seems the author of the article doesn't understand tech debt. If you've ever worked in a startup you'd know that requirements are ever-changing, so if a certain payment system is put in place, it might evolve to the point where you really need to refactor it, and to enable that refactor you have to refactor the whole business flow as well. If there are more than 2-3 features affected by a new feature, a big refactor is definitely needed.
Only one solution is offered, which I don't think is adequate, because why would I leave something in that was only meant to provide value in the short term and then build on top of it until I kill the old system?
Missing the biggest red flag of all, engineers wanting to just play with new toys and pad their CVs. Ask the engineers why they want to rebuild and listen carefully to the answer and if it’s vague handwaving and buzzwords (microservices! Containers! New JS framework!) and no hard numbers to justify it, just say no.
For example, "we spend X/year on AWS, but if we spend Y to rewrite in C++ we need fewer VMs and can cut that to Z/year" is simple arithmetic. If your engineers can't even do that, their motives are suspect.
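That back-of-the-envelope calculation is worth writing down explicitly. A sketch with invented numbers, purely for illustration:

```python
# Hypothetical figures: does a rewrite that cuts hosting costs pay for itself?
current_cost = 400_000   # current AWS spend per year
rewrite_cost = 300_000   # one-off engineering cost of the rewrite
future_cost = 150_000    # projected AWS spend per year after the rewrite

annual_savings = current_cost - future_cost
break_even_years = rewrite_cost / annual_savings
print(f"Pays for itself in {break_even_years:.1f} years")
```

If the break-even horizon comes out longer than the expected life of the product, the rewrite fails its own justification before a line of code is written.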
On the other hand, "we cannot hire anyone to work in COBOL/Perl 5.8/Tcl/other outdated language" is a very real problem. It turns out that in 2018, developers are judged for working too long in old technologies, even though we know as an industry that a developer can learn a new language.
This is a good article, and it prompts a question for me.
The reference to Martin Fowler’s strangler pattern (https://www.martinfowler.com/bliki/StranglerApplication.html) was mentioned in the article to grow the new system in the same codebase until the old system is strangled. In my case (Ionic 1 to 2) however, both the entire framework and the language are different. How should the strangler pattern work in this case?
For webapps you would use a reverse proxy such as nginx or haproxy and replace your application page by page. Configure the reverse proxy to send all requests for /home to the new stack and all other requests to the old stack, then flip the switch for every page you finish converting.

For backend work, it's similar. You can have an API built in the new stack, and it can just have a different endpoint or sit behind a reverse proxy. Backend workers can pick up work from a different queue, or you can switch the old job worker off and turn on the new one, and then monitor that everything is working as planned.

The really important thing about the strangler pattern is that you need some easy way to turn on bits of new functionality while turning off the corresponding old parts. It can be feature flags; it can be routing middleware. You can rip out the guts of the Angular routing mechanism and use that to flip the switch.
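That "flip the switch" routing layer can be sketched as a tiny dispatcher. The prefixes and app functions below are hypothetical stand-ins; in production this logic usually lives in the nginx/haproxy configuration rather than in app code:

```python
# Minimal strangler-pattern dispatcher: routes finished pages to the new
# stack and everything else to the legacy stack.

MIGRATED_PREFIXES = {"/home", "/settings"}  # flip pages over as they're done

def old_app(path):
    return f"legacy handled {path}"

def new_app(path):
    return f"new stack handled {path}"

def dispatch(path):
    # The crucial property: one switch per page, and each one is easy
    # to flip back if the converted page misbehaves.
    if any(path == p or path.startswith(p + "/") for p in MIGRATED_PREFIXES):
        return new_app(path)
    return old_app(path)

print(dispatch("/home"))     # goes to the new stack
print(dispatch("/reports"))  # still served by legacy
```

Note the prefix check matches `/home` and `/home/...` but not `/homework`, which is the same care you'd need in a real proxy's location rules.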
Seconded. Took part in a moderately big rewrite with this strategy and it worked pretty well.
Identify key components and subsystems and rewrite them one by one. From the outside you seem to be switching over one REST endpoint after the other; internally it's a bit more difficult, but applications often have enough parts that are not so intertwined that you can do stuff like this. It's related to how you break up a monolith: find bigger, less coupled parts, shave them off, and just touch the glue code.
There's no super easy way here. One approach is to find independent areas of the app that can be replaced without coupling, then build up the new system as you go. At some point you'll be about 70% through, at which point you can decide if you want to make the jump and focus your efforts on completely uprooting the old one.
Sorry for the abstract reference here, but it applies to almost any replatforming out there. In most cases it is a very expensive operation for a business and needs some major reasons in order to justify such a move.
Instead of a Single Page Application, make it an MPA (multi-page application), each of which is basically a separate SPA. You get latency when swapping between sections of your app, but on some codebases (such as for an internally-used app), that's less of a problem.
We did something similar to this when we broke up our Ember application so that we could code new things in React. We still maintain our Ember codebase, but are rewriting parts of some routes in React, and adding all new things in the React app.
We deploy ours as separate pods in a Kubernetes cluster, but you could even host them on the same server with separate nginx routes.
The initial ramp-up of this is a little frustrating, as it seems you're adding extra overhead to everything, but the long-term goal is to have infrastructure and a workflow that support having part of your app in The Old Proven Thing and part in The New Hotness. This is valuable whether you're switching to React or upgrading from Ember 2 to 3, etc., as it lets you upgrade a smaller set of dependencies and experiment with things.
The company I'm working at is doing this currently. The new product is on the web and the old one is a full-client Windows program. The biggest hurdle will be finding the balance between the largest/smallest pieces that can be transitioned as seamlessly as possible.
This article really captures the risks of a rebuild. I've been through a number of them, all but one abject failures. The one success was driven by the executives' understanding that the company would fail without a rebuild, and it was still 6 months late, resulted in one of the cofounders being fired and an extremely painful rollout, and the company still failed, due to other problems.
My firm belief is that when you need a rebuild, you are already well into a fail state as a company. Not to say there can be no recovery, but it is an indication of some deep problems for the company, beyond anything the engineering department alone can resolve... and if the rebuild is not coming from the executive leadership, it is an even bigger issue, as it will more likely lead to bigger problems than it will solve.
I've joined a team that the company scrambled together to deal with a 'legacy' Python/SQL-based ingestion/storage system, in an effort to 'harden' it. Despite my best efforts, we are going for a full rewrite to Java/Spring/Avro/Mongo/ES. We have internal users talking SQL and utilising the system at the moment, and a fair amount of the data is relational.
I have run out of ideas for how to convince the team and stakeholders, and I will have a one-shot chance to talk to the VP. Any ideas on how to voice concerns about the full redesign (or perhaps I'm just being difficult)?
1. Given the risk, cost, and limited upside, the onus is on the rewrite team to prove that it needs to be done. Where is the ROI, factoring in the risk? Where does this sit in the stack of things to do? Are there better-ROI things?
2. Consider what 'the point' is in the first place, because the entire world could be run on Python/SQL; it's not inherently 'hard'. I don't think anyone would consider Mongo to be 'hard' either; usually people use it because it's fast and easy. Consider maybe only replacing one part at a time, i.e. Java + SQL.
3. Consider a simple clean-up or refactor. No need to learn new languages and tools when maybe you just need a house cleaning.
4. People seem to be going back to SQL because of its inherent standardization: so many reporting and analysis systems use SQL as an interface, to the point where even NoSQL databases are starting to support SQL.
I'm a big supporter of "replacing one part at a time", and wish I had done that on a rebuild I'm just completing.
In fact, I thought I was. We split our app into 3 parts and rebuilt part 1, then part 2, but part 1 couldn't be released to customers until part 2 was done, and we kept our legacy system supporting the majority of our users until we were done with part 3, which is nearing completion now.
I thought that was "replacing one piece at a time", but it isn't: most users aren't touching it until part 3 is done, and at that point they are experiencing a new system from scratch.
Without knowing the performance requirements and where the current system is failing, it's hard to know if the technology stack will work for your needs—with one exception.
If users speak SQL, they will reject Mongo. The users of the system are the ones who will determine project success or failure.
Think about the data analysts, product owners, etc. who use the system. Interview them. Find out exactly how they use the system currently. Do they query in an ad hoc way? Do they rapidly iterate on their queries? Watch them interact with the system. If it's any way other than through dashboards that an engineer updates on request, you are in for rough seas.
Users must always determine the contours of a new system. There are big data solutions that speak SQL. Some are cloud-based, some are not. Some are faster than others. The team should be able to show you why they rejected those as solutions.
Sometimes you just have to. In one previous company, the "system" we were trying to trash was an unmaintainable VBA CRM homegrown mess which was creating lots of internal issues in the company due to the nature of spreadsheets. It took almost a year to replace but it was 100% worth it.
I'm potentially looking at a situation like this right now at work. We're on a NoSQL DB and it's just not working well for us anymore, so we would like to transition to something that provides more relational semantics (Postgres, Spanner, something like that). Migrating the backend from one kind of DB to another is non-trivial, especially because the whole ORM needs to be ripped out as well. It's not a full rebuild of the application, but it's definitely substantial in effort.
Sometimes a rebuild is just necessary, because you are on a tech stack that is no longer working for you, for whatever reason. How would you solve that kind of problem?
I'd definitely vote for PostgreSQL, it can handle large loads effortlessly, it's reliable, and yet they keep adding great features.
It could also function pretty much like a NoSQL DB initially (e.g. via JSONB columns) to ease your transition; then you could migrate gradually to using it as a relational DB. You need strong checks on data integrity before you start. You could consider double-writing (to the old ORM using NoSQL + the new ORM using Postgres) and comparing the stored data to be sure you aren't missing anything, before you switch.
The first thing you do is refactor with the existing DB, so you have a clear DataStore component. Then you make your shiny Relational DB implementation of that DataStore. Now you run both side by side and for everything you do in the old DB you do the same in the new DB, and you compare the results. At some point you can turn off the old DB with confidence and sleep well knowing that the new DB behaves the way you expect.
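The side-by-side comparison step might look like this minimal sketch (class and method names are invented for illustration, not from any particular ORM):

```python
import logging

class DualWriteStore:
    """Writes to both stores, reads from the old one, and logs mismatches.

    Once the mismatch log stays quiet for long enough, the old store can
    be switched off with confidence.
    """
    def __init__(self, old_store, new_store):
        self.old = old_store
        self.new = new_store

    def put(self, key, value):
        self.old.put(key, value)
        self.new.put(key, value)

    def get(self, key):
        old_val = self.old.get(key)
        new_val = self.new.get(key)
        if old_val != new_val:
            logging.warning("store mismatch for %r: %r != %r",
                            key, old_val, new_val)
        return old_val  # the old store stays authoritative during migration

class DictStore:
    # Stand-in for the real NoSQL / relational implementations.
    def __init__(self):
        self.data = {}
    def put(self, key, value):
        self.data[key] = value
    def get(self, key):
        return self.data.get(key)

store = DualWriteStore(DictStore(), DictStore())
store.put("user:1", {"name": "Ada"})
```

Keeping the old store authoritative means a bug in the new implementation shows up as a logged mismatch, not as corrupted reads for users.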
Know your burgs and bergs. A “burg” (or burgh) is a fortification, or more usually refers to a city built around (or inside) that fortification. A “berg” is a mountain, or a large hill. Therefore, an iceberg is an “ice mountain”, and a “burgermeister” is a “city master”; i.e. a mayor.
If you fire those people, you remove your source of expertise on the old system. Yes, they did a poor job of maintaining the old system, but their knowledge may be valuable to understanding the old system and creating requirements for the new system to reach parity.
"I would start by firing people that led to this situation."
You are one of those blessed people who can architect a system and the architecture holds up for decades. From my experience most systems will end up in a big mess over time if features get added. There is almost no way around it.
> You are one of those blessed people who can architect a system and the architecture holds up for decades.
This is exactly why maintenance is needed. Proper maintenance that includes things like updating the architecture and gradually migrating the whole system to that architecture, rebuilding small unwieldy components, updating and migrating database schemas as the product evolves, removing unused features.
If a product is just getting bugs patched and nothing else then it isn't really being maintained, it's being deprecated. Unfortunately as an industry we still think that there are distinct build and maintenance phases and that the latter can be done with less resources.
I am not saying to fire everyone. I am just saying that someone needs to be responsible. If you keep the same people in power they will repeat the same mistake. You need to keep domain knowledge but clueless management is just a burden.
That's utopian. The reason cautionary rules like those in TFA exist is because "just undo/revoke all the bad shit that led to your current situation", while certainly appealing in concept, is often impossible in practice. Instead, litmus tests like these, which can be practiced at the dev team level, prove useful. Who knows, if enough dev teams at a company arrive at the same conclusions using reasoning like this, perhaps they really can put enough pressure on management to induce firings of prominent debt-incurring people. Even (and more likely) if not, that understanding at the engineering level will help mitigate future damage/mistakes.
I think it's the phrase used by someone else in the comment chain: "clueless management".
You can have the best developers and architects in the world, but clueless management will sabotage anything they do, whereas good management can accomplish plenty with teams that aren't the best possible.
Hey, a bunch of shell scripts glued to some DOS executables were good enough in my time. We had no fancy schmancy github back in those days, yet we built this business on nothing but hard work and pizza.
Why, the sources of the DOS exes were long gone by the second year, lost in the crash of that old Windows Millennium machine that used to sit in our dorm room and was uniquely configured to compile them using Turbo Pascal; we figured it was a safe option to use as a source repository. But that still didn't stop us: we implemented the remaining features by patching assembly.