Ask HN: What's the largest amount of bad code you have ever seen work?

I think I've broken my own record with this one ~2500 lines of incoherent JavaScript/C#. Works though.

428 points | by nobody271 1989 days ago

119 comments

  • oraguy 1989 days ago
    Oracle Database 12.2.

    It is close to 25 million lines of C code.

    What an unimaginable horror! You can't change a single line of code in the product without breaking 1000s of existing tests. Generations of programmers have worked on that code under difficult deadlines and filled the code with all kinds of crap.

    Very complex pieces of logic, memory management, context switching, etc. are all held together with thousands of flags. The whole codebase is riddled with mysterious macros that one cannot decipher without picking up a notebook and expanding the relevant parts of the macros by hand. It can take a day or two to really understand what a macro does.

    Sometimes one needs to understand the values and the effects of 20 different flags to predict how the code would behave in different situations. Sometimes hundreds! I am not exaggerating.

    The only reason this product is still surviving and still works is the literally millions of tests!

    Here is what the life of an Oracle Database developer looks like:

    - Start working on a new bug.

    - Spend two weeks trying to understand the 20 different flags that interact in mysterious ways to cause this bug.

    - Add one more flag to handle the new special scenario. Add a few more lines of code that check this flag, work around the problematic situation, and avoid the bug.

    - Submit the changes to a test farm consisting of about 100 to 200 servers that would compile the code, build a new Oracle DB, and run the millions of tests in a distributed fashion.

    - Go home. Come in the next day and work on something else. The tests can take 20 to 30 hours to complete.

    - Go home. Come in the next day and check your farm test results. On a good day, there would be about 100 failing tests. On a bad day, there would be about 1000 failing tests. Pick some of these tests randomly and try to understand what went wrong with your assumptions. Maybe there are some 10 more flags to consider to truly understand the nature of the bug.

    - Add a few more flags in an attempt to fix the issue. Submit the changes again for testing. Wait another 20 to 30 hours.

    - Rinse and repeat for another two weeks until you get the mysterious incantation of the combination of flags right.

    - Finally one fine day you would succeed with 0 tests failing.

    - Add a hundred more tests for your new change to ensure that the next developer who has the misfortune of touching this new piece of code never ends up breaking your fix.

    - Submit the work for one final round of testing. Then submit it for review. The review itself may take another 2 weeks to 2 months. So now move on to the next bug to work on.

    - After 2 weeks to 2 months, when everything is complete, the code would be finally merged into the main branch.

    The above is a non-exaggerated description of the life of a programmer at Oracle fixing a bug. Now imagine what a horror it is going to be to develop a new feature. It takes 6 months to a year (sometimes two years!) to develop a single small feature (say, adding a new mode of authentication, like support for AD authentication).

    The fact that this product even works is nothing short of a miracle!

    I don't work for Oracle anymore. Will never work for Oracle again!

    • skrebbel 1987 days ago
      Sounds like ASML, except that Oracle has automated tests.

      (ASML makes machines that make chips. They got something like 90% of the market. Intel, Samsung, TSMC etc are their customers)

      ASML has 1 machine available for testing, maybe 2. These are machines that are about to be shipped: not totally done being assembled, but done enough to run software tests on. This is where changes to their 20 million lines of C code get tested. Maybe tonight, you get 15 minutes for your team's work. Then again tomorrow, if you're lucky. Oh, but not before the build is done, which takes 8 hours.

      Otherwise pretty much the same story as Oracle.

      Ah no wait. At ASML, when you want to fix a bug, you first describe the bugfix in a Word document. This goes to various risk assessment managers. They assess whether fixing the bug might generate a regression elsewhere. There are no tests, remember, so they make educated guesses about whether the bugfix is too risky or not. If they think not, then you get a go to manually apply the fix in 6+ product families. Without automated tests.

      (this is a market leader through sheer technological competence, not through good salespeople like oracle. nobody in the world can make machines that can do what ASML's machines can do. they're also among the hottest tech companies on the dutch stock market. and their software engineering situation is a 1980's horror story times 10. it's quite depressing, really)

      • onion-soup 1983 days ago
        Makes you wonder how exactly code quality correlates with commercial success.
      • ptttr 1975 days ago
        That sounds like a market ripe for disruption - imagine a competitor that follows software engineering best practices.
    • nathan_f77 1988 days ago
      That is absolutely insane. I can't even begin to imagine the complexity of that codebase. I thought my Rails test suite was slow because it takes 4 minutes. If I wrote it in C or C++ it would probably be 10 seconds.

      I can't imagine a C/C++ application where the test suite takes 20-30 hours on a test farm with 100-200 servers. And if you can break 100-1000 tests with a single change, it doesn't sound like things are very modular and isolated.

      And 30 hours between test runs! I would definitely not take that job. That sounds like hell.

      • Maro 1988 days ago
        It's a good exercise to imagine how the job would be sold. Things like this would definitely not come up in the interview process; instead they would sell you on "you get to work on a cutting-edge db kernel that is running most of the Fortune 100s" or something like that, which is true (!), but doesn't describe the day to day.

        The best way to guess this is to extrapolate from the interview questions. If they ask you a lot of low-level debugging/macro/etc. questions...

        • pavel_lishin 1988 days ago
          > The best way to guess this is to extrapolate from the interview questions.

          Wouldn't you just ask the developers interviewing you outright, "can you walk me through an example of your day? How long does it take you to push out code? What's testing like? Do you run tests locally, or use something like Jenkins?" etc.

          • Endy 1987 days ago
            Most new hires are probably not being interviewed by devs, but either by 3rd-party recruiters or internal recruiters with HR. When I was working in recruiting, the last thing either we or the client wanted was for the new hire to talk to either the person who they were replacing or any of the potential coworkers. Heck, one internal recruiter I had to interface with at a company I choose not to disclose said to me, "can we ask if they read Hacker News? There's some bad vibes about us there."

            Which is when I got back on HN regularly :-)

            (PS I did tell the internal person that there was no way that reading HN was related either to a BFOQ or other job requirement; and thus while it's not illegal, it'd be highly suspicious.)

            • pavel_lishin 1987 days ago
              > When I was working in recruiting, the last thing either we or the client wanted was for the new hire to talk to either the person who they were replacing or any of the potential coworkers.

              What the fuck? Am I a spoiled tech-bro, or does that sound completely insane to anyone else? I would 100% not take a job if I didn't get a chance to talk to my coworkers and future manager during the interview process.

              • eric_h 1987 days ago
                Perhaps you are spoiled (as am I in that regard), but I would absolutely never take a job unless I knew who I was going to be working with and had a chance to ask them honest questions.

                Seems like a trap set up for fresh out of college hires. I don’t know any senior developers who would even consider a job under those circumstances.

        • oraguy 1988 days ago
          On the contrary, the interview was an ordinary one. The screening round consisted of very basic fizzbuzz type coding ability checks: Reversing a linked list, finding duplicates in a list, etc.

          Further rounds of interviews covered data structure problems (trees, hashtables, etc.), design problems, scalability problems, etc. It was just like any other interview for software engineering role.

          • alexeiz 1988 days ago
            "Well, your interviews went quite well. Now the final question: what would you do if you start losing your mind?"
            • CoolGuySteve 1988 days ago
              "I'd like you to write a graph algorithm that traverses the abyss, the cosmic horror that consumes one's mind, that traverses twilight to the rim of morning, that sees the depths of man's fundamental inability to comprehend.

              Oh ya, the markers in here are pretty run down, let me pray to the old ones for some more"

              • breatheoften 1985 days ago
                I have written the algorithm you requested - but I wish I hadn’t run it. I hit ctrl-c when I realized what it was doing but it was too late... The damage is done — we are left with only the consequences and fallout.

                Forgotten dreams like snowflakes melt on hot dusty ground, soon to turn into hard dry mud beneath a bitter polluted sky.

              • doctorless 1988 days ago
                Pretty sure I was asked that question in an Amazon interview.
          • mlthoughts2018 1988 days ago
            Were you even given substantial time to ask the interviewers questions? In most interviews I’ve done, even later round interviews whether it’s a finance company, start-up, FAANG, and companies of all sorts in between, I was given at most 5 minutes to ask questions after some dumb shit whiteboard algo trivia.
            • oraguy 1988 days ago
              I was given 5 minutes to ask questions after each round of interview. That part was ordinary too. That's what most of the other companies do (FAANG or otherwise).
              • sodafountan 1988 days ago
                The real risk is for people who are too young to know what to ask.
                • numpty13 1987 days ago
                  I'd hope they wouldn't even consider somebody for this sort of job who's too young to know what to ask.
                  • Latteland 1987 days ago
                    That's kind of naive; of course you want young people who will work hard and maybe not know what they are getting into. I was offered a job at Oracle back in the day; I would have felt a lot of despair if this is what it was like.
              • rumpelbums 1987 days ago
                I am not sure what position you were interviewing for and to what level of interview you made it.

                When I was interviewing for an SRE position with Google in Dublin, I had about 10min to ask questions in each of the 5 interviews that were conducted on-site.

                In between the interviews, a sixth SRE would take me to lunch for about an hour. Anything discussed with him wouldn't be evaluated as part of the interview.

                So there was plenty of time for questions, I would say.

      • osrec 1988 days ago
        Hell for the proactive go-getters, but paradise for people who enjoy any excuse for a bit of justifiable down time!

        Q: Are you busy? A: Yes, in the middle of running tests...

        • oraguy 1988 days ago
          That would have been fun but in reality there was no downtime. Developers like me were expected to work on two to three bugs/features at a time and context switch between them.

          If I submit my test jobs today to the farm, the results would come one or two days later, so I work on another bug tomorrow, and submit that. Day after tomorrow, I return to the first bug, and so on.

          • guscost 1987 days ago
            How would you know that merging code from the first bugfix wouldn't break the (just tested) code from the second bugfix?? Would you assume that the first bugfix will be merged first and branch off of that?
            • Blaisorblade0 1987 days ago
              Without knowing Oracle's approach, this sort of problem is no different from any other software, even though it reaches a larger scale.

              Branch from master, and rerun tests before the final merge, like you should in any other software? (Many processes fail that criterion, see https://bors.tech/ for something that gets this right).

              Ideally you work on a different enough bug that there's limited interaction, and ideally that's figured out before you fix it, but those criteria are indeed harder to satisfy in bigger software.

              • guscost 1987 days ago
                But if the time needed to test and deploy a change is so ludicrous, it seems like you'd rarely get a big-enough window to rerun your tests before the upstream changes again. Either people are merging in unstable code, or the release lifecycle is a slow byzantine nightmare too (probably the case here).
                • blattimwind 1985 days ago
                  Usually you don't test a single change before merging, but branch from master, merge any number of changes and then run the tests. So the master branch would move forward every 20-30 hours in this case, unless the tests of a merge batch fail, in which case master would kinda stall for a bit.
          • osrec 1988 days ago
            I understand. It was partially a tongue in cheek remark :)
      • knuffced 1988 days ago
        Tests in C/C++ run shockingly fast. I ported an application from Ruby to C++ and the tests ran in well under a second when it was taking 10+ seconds in Ruby. Granted because of C++'s type system there were fewer tests, but it was fast enough that I kept thinking something was wrong.
        • bufferoverflow 1988 days ago
          It's because Ruby is one of the slowest languages out there, and C/C++ is usually #1-#2 on many benchmarks.
        • maksimum 1988 days ago
          Are you including the time to build/link the tests? This is especially true if you have a bunch of dependencies. Last time I worked on C++ tests most of my time was spent on getting the tests to link quickly. Managed to get it from 25 minutes to 1 minute. But I'd rather have spent that time actually writing more test cases, even if they took 10s to run.
        • eric_h 1987 days ago
          Started a new job a few months ago and we’re writing Go - a bunch of the test suites I’ve built run in microseconds. Statically typed compiled languages ftw.
    • AtlasBarfed 1988 days ago
      You've violated the terms of service of Oracle Database by insinuating the codebase quality is in any way not superior to any and all competitors. No benchmarks or comparisons may be performed on the Oracle Database Product under threat of grave bodily harm at the discretion of our very depraved CEO.
      • forthy 1987 days ago
        I doubt the competition (e.g. IBM or Microsoft) has any better code quality. Even PostgreSQL is 1.3M lines of code. For something deliberately written for simplicity, look at SQLite: just 130k SLoC, another order of magnitude simpler.

        And yet, even SQLite has an awful amount of test cases.

        https://www.sqlite.org/testing.html

        • pgaddict 1987 days ago
          I'm sure some of the difference (25M vs. 1.3M) can be attributed to code for Oracle features missing in PostgreSQL. But a significant part of it is due to a careful development process that mercilessly eliminates duplicate and unnecessary code as part of the regular PostgreSQL development cycle.

          It's a bit heartbreaking at first (you spend hours/days/weeks working on something, and then a fellow hacker comes and cuts off the unnecessary pieces), but in the long run I'm grateful we do that.

          • troels 1986 days ago
            > It's a bit heartbreaking at first (you spend hours/days/weeks working on something, and then a fellow hacker comes and cuts off the unnecessary pieces), but in the long run I'm grateful we do that.

            The single hardest thing about programming, I'd say.

            • ska 1974 days ago
              In many (most?) ways the best edits of code are the ones where you can get rid of lines.
        • jeltz 1987 days ago
          PostgreSQL has a lot of code but most parts of the code base have pretty high quality code. The main exceptions are some contrib modules, but those are mostly isolated from the main code base.
        • p0nce 1987 days ago
          It's because software LOC scales linearly with the amount of man-months spent: a testament to the unique ability of our species to create beautiful, abstract designs that will stand the test of time.
          • Latteland 1987 days ago
            This is an interesting comment, because I can't decide if you are sarcastic or making a deep insightful comment. Because I don't think the statement is true. LOC can go on forever, but it usually happens in things that aren't beautiful and abstract.
            • p0nce 1986 days ago
              I was being sarcastic.
              • Latteland 1984 days ago
                Thanks for the reply. You said it so earnestly that I couldn't tell!
        • Latteland 1987 days ago
          Worked on SQL Server for 10+ years. MS SQL Server is way better than that. The Sybase SQL Server code we started with and then rewrote was as bad as Oracle's.
        • karulont 1987 days ago
          I guess that is just because SQL as a standard is neither coherent nor beautifully designed. SQL is a mashup of vendor-specific features all bashed together into one standard.
          • majewsky 1987 days ago
            There's also a lot of essential complexity there. SQL provides, in essence, a generic interface for entering and analyzing data. Imagine the number of ways to structure and analyze data. Now square that number to get the number of tests for how two basic features of the language interact with each other. And that's not even near full test coverage.
            • 9question1 1987 days ago
              Your point about essential complexity is absolutely correct, but your faux mathematical analysis is totally not a legit way to analyze the complexity of something or determine test coverage. I feel like as programmers we should be comfortable making sensible statements without making up shady pseudo-math to sound convincing.
              • majewsky 1987 days ago
                It's abundantly clear that I'm not making a precise computation here. My argument is that tests don't scale linearly with the number of features because interactions between features need to be tested as well.
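                A toy illustration of that scaling (arbitrary feature count; counting only pairwise interactions):

```python
# Tests for feature interactions grow roughly quadratically with the
# number of features, so test count cannot scale linearly.
from itertools import combinations

features = [f"feature_{i}" for i in range(30)]
pairs = list(combinations(features, 2))
print(len(features), "features ->", len(pairs), "pairwise interactions")
# 30 features -> 435 pairwise interactions
```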
      • njharman 1988 days ago
        Never having to use Oracle Database is a good result.
      • yellowapple 1988 days ago
        I can hear the clamoring of lawyers eager to fundraise for Larry's next flying sailboat.
    • pocketprotector 1988 days ago
      I am a current Oracle employee and blame a lot of the mistakes on the overseas development team in India. They are (not all, but enough to matter) terrible programmers, but hey, when you can throw 10 Indian programmers at a problem for the cost of one American... You can blame your bloated, mismanaged code base on their management over there. This is likely due to the attrition and generally less talented and less autonomous engineering style.

      There is a clear difference between code developed AND maintained in the US vs. code that was developed in India, or code developed in the USA and given to Indian developers to manage and support. Nothing against Indians, but I've been around the block, and there seems to be a lesser quality of code from that part of the world, and companies justify it with cost savings.

      • Juliate 1987 days ago
        Actually, you can blame this on Oracle's top management (especially in a company structured as Oracle is): they called the shots, from day 1.
      • oraguy 1988 days ago
        I have not found this to be true at all. I have seen both US and Indian developers adding good code as well as ugly code to the Oracle Database product.

        The actual damage was done long before I joined Oracle. It appears that somewhere in the early 2000s, the Oracle codebase went from manageable to spaghetti monster. The changelog showed more changes from US developers than Indian developers at that time. Once the damage was done, all developers, whether from the US or India, now need to follow this painful process to fix bugs and add features.

    • Izkata 1989 days ago
      A sentiment among members of a former team was that automated tests meant you didn't need to write understandable code - let the tests do the thinking for you.

      This, and stuff like your story, are why I don't trust people who promote test-driven development as the best way to write clean APIs.

      • acroback 1988 days ago
        TDD needs to die. It is a curse.

        There should be integration tests along with some property-based tests and fuzz tests. That usually catches a lot of things. Invest in monitoring and alerts too.

        TDD is like relying on a debugger to solve your problem. Is a debugger a good tool? Yes, it is a great tool. But using it as an excuse to avoid understanding what happens under the hood is plain wrong.

        The problem lies in an industry where software engineering is not given any value but whiteboarding and solving puzzles is.

        Software engineering is a craft honed over years of making mistakes and learning from them. You want code ASAP? Kick experienced engineers out, get code monkeys in, and get an MVP.

        Quality is not a clever algorithm, but clear, concise logic. Code should follow the logic, not the other way around.

        Clear > clever.
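        A hand-rolled sketch of the property-based idea (real libraries such as Hypothesis add input shrinking and smarter generators; the encoder is a made-up example):

```python
import random

def run_length_encode(s):
    """Example function under test: 'aaab' -> [('a', 3), ('b', 1)]."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

# The property: decode(encode(s)) == s for *any* s, not just a few
# hand-picked examples. Random inputs probe cases you didn't think of.
random.seed(0)
for _ in range(1000):
    s = "".join(random.choice("ab") for _ in range(random.randrange(20)))
    assert run_length_decode(run_length_encode(s)) == s
print("1000 random inputs round-tripped")
```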

        • c3534l 1988 days ago
          And yet tests seem to have made this massive garbage heap actually work and enable a lump of spaghetti to continue to operate as a viable product. It doesn't mean you should write bad code, but it seems like if it can make even the most awful of code viable, then that's a pretty good system. The fact that modern medicine allows the most beat up and desperate to continue to live on isn't an indictment against medicine, it's a testament to it. Don't write bad code, sure. We can all agree to that. Don't prioritize testing? Why? To intentionally sabotage yourself so that you're forced to rewrite it from scratch or go out of business?
          • StreamBright 1987 days ago
            Depends on the definition of viable.
        • acdha 1988 days ago
          I’m sympathetic but this is too strong: what needs to die is dogma. TDD as a way of thinking about the API you’re writing is good but anything will become a problem if you see it as a holy cause rather than a tool which is good only to the extent that it delivers results.
        • christopoulos 1988 days ago
          I very much agree.

          I remember when I realized that TDD shouldn't carry as much weight in our development as it had gotten (when it was high on the hype curve).

          It was when we started using a messaging infrastructure that made everything much more reliable and robust, and through which we could start trusting the infrastructure much more (not 100%, though, of course).

          It made me realize that the reason we wrote this excessively large number of tests (1800+) was the fragile nature of a request/response-based system: we therefore "had to make sure everything worked".

          What I'm trying to get at here is that TDD assumed the role of a large safety net for a problem we should have addressed in a different manner. After introducing the messaging, we could replay messages that had failed. After this huge turning point, tests were only used for what they should have been used for all along: ensuring predictable change in core functionality.

          (our code also became easier to understand and more modular, but that's for another time...)

        • edynoid 1987 days ago
          What you allude to there is pretty bad TDD. It was never intended as a replacement for good design, rather as an aid to be clear about design and requirements without writing tons of specs up-front.

          And I agree that there are lots of anti-patterns that have grown in tandem with TDD, like excessive mocking with dependency injection frameworks, or testing renamed identity functions over and over just to get more coverage. However, I'd argue that is equally the fault of object-oriented programming.

          Where I disagree is this: TDD and unit tests are still a very useful tool. Their big advantage is that you can isolate issues more quickly and precisely, IF you use them correctly.

          For instance, if I have some kind of algorithm in a backend service operating on a data structure that has a bug, I do not want to spend time on the UI layer, network communication, or database interactions to figure out what is going on. Testing at the right scope gives you exactly that.

        • kazinator 1988 days ago
          The problem with TDD is that the methodology wants to cover every change, no matter how internal, with some sort of external test.

          Some changes are simply not testable, period.

          No, you cannot always write a test which initially fails and then passes when the change is made; when that is the case, you should understand why, and not try.

          In some cases you can, yet still should not. If a whole module is rewritten such that the new version satisfies all of the public contracts with the rest of the code, then only those contracts need to be retested; we don't need new tests targeting internals.

          It's because the old version wasn't targeted by such tests in the first place that it can be rewritten without upheaval.

          • bullshitman 1986 days ago
            Bullshit. Even if you say "period", it doesn't make a bullshit true.
        • ddfx 1988 days ago
          I think TDD is the best way to develop (yet). Obviously tests are code, and if you write crappy, highly-coupled tests you will only end up with much messier code. This is a clear example of bad testing. The greatest advantage of TDD is in design: everything should be modular and easy to unit test, so you could:

          - reproduce a bug and verify your bugfix in a matter of milliseconds with a proper unit test

          - understand what code does

          - change and refactor code whenever you want

          You can tell from what is written that they are not following TDD. Redesigning that codebase into an easy and clean-to-test design would require exponentially more effort and time than having done it step by step, but it would be worth it.
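          For the first bullet, a minimal sketch of the reproduce-then-fix workflow (parse_flags is a made-up example, not anyone's real code):

```python
# Capture the failing input as a unit test first, then fix the code.

def parse_flags(spec):
    """Parse 'a,b,c' into {'a', 'b', 'c'}.

    Fixed bug: ''.split(',') returns [''], so an empty spec used to
    yield {''} instead of the empty set. The guard below is the fix.
    """
    if not spec:
        return set()
    return set(spec.split(","))

def test_regression_empty_spec():
    # Written while the bug was still live: it failed, the guard was
    # added, and now it pins the behavior for the next developer.
    assert parse_flags("") == set()

def test_normal_spec():
    assert parse_flags("a,b,c") == {"a", "b", "c"}

test_regression_empty_spec()
test_normal_spec()
print("2 tests passed in milliseconds")
```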

          • astrange 1988 days ago
            A unit test is the least useful kind of test. It requires your design to be "easy to unit test" instead of simple, and if you change something and have to rewrite the test you might miss some logic in both pieces.

            Plus the tests never break on their own because they're modular, and each time you run a test that was obviously going to pass, you've wasted your time.

            As long as you have code coverage, better to have lots of asserts and real-world integration tests.

            • ddfx 1987 days ago
              Integration tests are usually much slower, and you are testing tons of things at the same time. Something breaks (like in that example) and you have no idea what went wrong or why.

              If you unit test properly, you are unit testing the business logic, which you have to properly divide and write in a modular fashion. If you want to test a more complex scenario, just add initial conditions or behaviors. If you can't do that or don't know how to do that, then you don't know what your code is doing or your code is badly designed. And that may be the case we read about above.

              Tests rarely break because they help you avoid breaking the code and its functionality, and they are so fast and efficient at making you realize it that you don't feel the pain.

              I can't imagine any example where "easy to unit test" != simple

              • brootstrap 1987 days ago
                In my work with Python, making things easy to unit test usually makes them a bit harder to write. You want functional methods, not mega classes with hundreds of class variables where each class method operates on some portion of those variables; that makes it impossible to truly isolate functionality and test it. While coding, though, it is very easy to make a class, throw god knows what into the class variable space, and access those variables whenever... However, if we have static methods that rely on nothing but the arguments provided and don't modify any class state, the tests are great. We can change/refactor our models with confidence, knowing the results are all the same.
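                A small sketch of that contrast (hypothetical names):

```python
# Logic buried in mutable class state: testing `total` means building
# an instance and replaying every mutation that feeds it.
class ReportHard:
    def __init__(self):
        self.rows = []
        self.total = 0        # imagine hundreds more of these

    def add(self, row):
        self.rows.append(row)
        self.total += row["amount"]

# The same logic as a static method: it depends only on its arguments,
# so a test just builds the input and checks the output.
class ReportEasy:
    @staticmethod
    def total(rows):
        return sum(row["amount"] for row in rows)

print(ReportEasy.total([{"amount": 2}, {"amount": 3}]))  # 5
```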
            • bullshitman 1986 days ago
              Bullshit. I have found many bugs by writing unit tests for legacy code (after the fact). I have cleaned up incredibly messy spaghetti code by writing unit tests (which forced me to clean it up). I have made code understandable by writing unit tests (both by means of "tests as documentation" and by means of "clean the code so that it is testable, make it modular and made of small units").

              And btw, "easy to unit test" actually leads to "simple". That's the point of TDD.

          • int_19h 1988 days ago
            In my opinion, the only thing that is valuable about unit tests is more appropriately captured in form of function, class and module contracts (as in "design by contract"). Unfortunately very few languages are adopting DbC.

            Functional tests now, that's another matter. But a lot of TDD dogmatism is centered on unit tests specifically. And that results in a lot of code being written that doesn't actually contribute to the product, and that is there solely so that you can chop up the product into tiny bits and unit test them separately. Then on the test side you have tons of mocks etc. I've seen several codebases where test code far exceeded the actual product code in complexity, and that's not a healthy state of affairs.
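            Python has no built-in DbC, but the idea can be sketched with a homemade decorator (a toy, not a real DbC framework):

```python
import functools

def contract(pre=None, post=None):
    """Attach optional pre/postcondition checks to a function."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition of {fn.__name__} violated"
            return result
        return inner
    return wrap

# The contract states what a unit test would otherwise assert:
# non-empty input in, non-negative result out.
@contract(pre=lambda xs: len(xs) > 0, post=lambda r: r >= 0)
def max_abs(xs):
    return max(abs(x) for x in xs)

print(max_abs([-3, 2]))  # 3
```

            The checks travel with the function itself, so every caller exercises them, not just the test suite.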

            • lojack 1988 days ago
              In more recent times I've seen some growth in interest around contract testing. Unit tests are immensely more useful when paired with contract tests, but without them they unfortunately tend to be more of a hassle. At their essence, integration tests are a form of contract, but those suffer their own problems. In rspec you have 'instance_double', which is a form of contract test as well, but not really sufficient for proper testing IMO. The current state from what I've seen is a little lackluster, but I wouldn't be surprised to see contract testing libraries for a variety of languages popping up.
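              One Python analogue of instance_double is an autospec'd mock from the standard library, which rejects calls that don't match the real class's signature (Mailer/notify are made-up names):

```python
from unittest import mock

class Mailer:
    def send(self, to, subject):
        raise NotImplementedError  # real network code lives elsewhere

def notify(mailer, user):
    mailer.send(to=user, subject="hi")

# The fake carries Mailer's real signatures, so the test can't drift
# away from the contract of the class it stands in for.
fake = mock.create_autospec(Mailer, instance=True)
notify(fake, "alice@example.com")
fake.send.assert_called_once_with(to="alice@example.com", subject="hi")

# A call the real Mailer couldn't accept is rejected by the mock too:
try:
    fake.send("x", "y", "unexpected_extra")
except TypeError:
    print("contract violation caught")
```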
          • iopq 1988 days ago
            I had some tests on my codebase, but eventually only documentation and integration tests remained.

            So let's look at a simplified example.

            https://bitbucket.org/iopq/fizzbuzz-in-rust

            My tests are in the test folder. They are actually superfluous since integration tests test for the same thing.

            I cannot break up the program in a way that would unit test a smaller piece of it in more detail. The only tests I could add would be to test the command line driver.

            • antonvs 1987 days ago
              For a single person and their one-person code base, you can certainly get away without unit tests.

              This is especially true if your "integration tests" are testing the same component, and not actually integrating with numerous other components being developed by different teams - or if the system is so small it can run on a single workstation.

              Working in teams on larger systems, the situation is different. Part of the point of unit tests is the "shift left" which allows problems to be discovered early, ideally before code leaves a developer's machine. It reduces the time until bugs are discovered significantly, and reduces the impact of one dev's bugs on other devs on the team.

        • api 1988 days ago
          TDD is yet another in a long line of "methodologies" that don't work. Tests are not a bad thing of course. The problem comes when you turn testing into an ideology and try to use it as a magic fix for all your problems. Same goes for "agile," etc.

          Programming is a craft. Good programmers write good code. Bad programmers write bad code. No methodology will make bad programmers write good code, but bureaucratic bullshit can and will prevent good programmers from working at their best. The only way to improve the output of a bad programmer is to mentor them and let them gain experience.

          • antonvs 1987 days ago
            The reality of working in teams at most companies is that there are going to be mediocre programmers, and even bad programmers, on the team. Many of the practices you refer to as bureaucratic bullshit are actually designed to protect programmers from the mistakes of other programmers.

            Of course, this does require that the process itself has been set up with care, thought, and understanding of what's being achieved.

        • jsgo 1987 days ago
          I'm probably not the best to speak on the topic as I don't use TDD (nor have I), but I think the idea is good if maybe a bit unorthodox: leveraging tests to define examples of inputs/outputs and setting "guards" to make sure the result of your code is as you expected via the tests.

          I'm not keen on the "cult" of it, but if expectations of what the output should look like are available from the onset, it would appear to be of some benefit, at least.

        • danellis 1988 days ago
          What about TDD requires not understanding the code?
        • lojack 1988 days ago
          I'm confused by your comment. Your premise is that TDD should die, and your support is comparing it to a "great tool". Should TDD really die, or should people just stop treating things as a silver bullet? I personally love TDD; it helps me reason about my interfaces and reduces some of the more cumbersome parts of development. I don't expect everyone to use TDD and I don't use it all the time. Similarly, I'd never tell someone debuggers should die and they should never use a debugger if that's something that would help them do their job.
          • iopq 1988 days ago
            The thing is, when I spend a lot of time thinking about how to make my program type-safe, all of my unit tests become either useless or no-ops.

            Integration tests easily survive refactoring, on the other hand

            • lojack 1988 days ago
              Unit tests are a side effect of TDD, they don't have to be the goal. I'd find value out of TDD even if I deleted all of my tests after. It sounds like your problems are around unit tests, and that is neither something required to TDD nor is it something limited to just TDD.

              The problem with integration tests is they are slow and grow exponentially. If they aren't growing exponentially then there's probably large chunks of untested code. Unit tests suffer their own problems, like you said they can be useless because of a reliance on mocking, they can also be brittle and break everywhere with small changes.

              Ultimately any good suite of tests needs some of both. Unit tests to avoid exponential branching of your integration tests, and integration tests to catch errors related to how your units of code interact. I've experienced plenty of bad test suites; many of them are because of poorly written unit tests, but it's often the poorly written integration tests that cause problems as well. As with most things, it's all about a healthy balance.

              • iopq 1987 days ago
                No, like in some programs when I figure out how to do it correctly the unit tests are either complete tautologies or integration tests.

                Then there are the "write once, never fail ever" tests. Okay, so the test made sense when I wrote the code. I will never touch that part ever again because it works perfectly. Why do I keep running them every time?

                • lojack 1987 days ago
                  If the unit tests are tautologies then they aren't testing the right things, and if they are integration tests then they aren't actually unit tests.

                  I personally run my unit tests every time to confirm my assumption that the unit of code under test hasn't changed. I also assume all code I write will inevitably be changed in the future, because business requirements change and there's always room for improvement. Actually, I can't think of a single piece of code I've written (apart from code I've thrown out) that didn't eventually need to be rewritten. The benefit of running unit tests is less than the benefit of running integration tests, but the cost of running them is also significantly less. The current project I'm working on has 10x as many unit tests as integration tests, and they run 100x faster.

                  My workflow is usually to run the unit tests for the code I'm working on constantly, and when I think things are working, run the entire test suite to verify everything works well together. That's my workflow whether or not I'm doing TDD.

                  • iopq 1985 days ago
                    The code that determines truths about the data never had to be rewritten.

                    Like, are the two points neighbors? I mean, I'm not going to write a version of this function for a spherical board in the future. Nobody plays on a spherical board.

                    It's also a really boring unit test. Yes, (1,1) and (1,2) are neighbors. Do I really need to test this function until the end of time?

                    • lojack 1982 days ago
                      That's exactly the type of code that should be unit tested. The unit tests are trivially easy to write, and a very naive solution is easy to code up. The tests should take up a negligible overhead in your overall test suite runtime. Then when it comes time to optimize the code because it's becoming a bottleneck, you can be confident that the more obscure solution is correct.
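                      A sketch of the neighbor check being discussed, with its "boring" test (function name and coordinate convention are hypothetical, in Python for illustration):

```python
def are_neighbors(a, b):
    """True if two (row, col) points are orthogonally adjacent
    on a rectangular grid (Manhattan distance of exactly 1)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

# The trivial test: near-zero runtime cost, and it guards any
# future optimized rewrite of the same function.
assert are_neighbors((1, 1), (1, 2))
assert not are_neighbors((1, 1), (2, 2))
assert not are_neighbors((1, 1), (1, 1))
```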
            • UK-Al05 1987 days ago
              TDD should only drive the public interface of your "module"; if you're testing your internals, you're doing it wrong. It will hinder refactoring rather than help.
      • pmarreck 1989 days ago
        TDD doesn't think for you, it merely validates your own existing understanding/mental model and forces you to come up with it upfront. This is hardly a thing to be mistrustful about, unless you work with idiots.
        • mannykannot 1988 days ago
          You are right about that, but having code that passes a given test suite doesn't say anything about its secondary qualities, such as whether it can be understood. In theory, a failing test could improve your understanding of the situation, allowing you to refactor your initial pass at a solution, but I would bet that on this particular code base, the comprehension-raising doesn't go far enough, in most cases, for this to be feasible.
          • Huggernaut 1987 days ago
            That seems orthogonal to testing though. Implementation code can be hard to understand with or without a test suite; at least with tests, as you point out, you may be able to understand the behaviour at some higher abstraction.
        • bryanrasmussen 1988 days ago
          ok but from reading a lot of the comments on HN it sounds like many posters here think that they do work with idiots.
          • YeGoblynQueenne 1988 days ago
            Those idiots probably also think the same, though.
            • setr 1988 days ago
              If everyone’s an idiot thinking they’re surrounded by idiots, then TDD has no hope to ever succeed!
          • pmarreck 1988 days ago
            Touché!
        • danmaz74 1988 days ago
          > TDD doesn't think for you

          I totally agree, but I met several programmers who think the opposite.

      • sooheon 1989 days ago
        Rich Hickey called it guard rail driven programming. You'll never drive where you want to go if you just get on the highway and bump off the guard rails.
        • carlmr 1988 days ago
          Except that's a really bad analogy. It's more like you set up guard rails, and every time your vehicle hits a guard rail you change the algorithm it uses for navigation until it can do a whole run without hitting a guard rail.

          I've experienced myself how the code quality of proper TDD code can be amazing. However it needs someone to still actually care about what they're doing. So it doesn't help with idiots.

          • mannykannot 1988 days ago
            It may not be a good analogy for TDD as properly practiced, but it seems to be very fitting for the situation described at the top of this thread, and that is far from being a unique case.
          • sooheon 1988 days ago
            I don't think it's a generous analogy, but it's poking fun at being test DRIVEN, rather than driver driven. I think he'd agree with you that it's the thinking and navigating and "actually caring about what they're doing" that matters. Tests are a tool to aid that. Tests don't sit in the driver's seat.
            • mikekchar 1988 days ago
              Yeah. To me "test driven" really means that I write code under the constraints that it has to make writing my tests sensible and easy. This turns out to improve design in a large number of cases. There are lots of other constraints you can use that tend to improve design as well (method size, parameter list size, number of object attributes, etc are other well known ones). But "test driven" is a nice catch phrase.
          • corobo 1988 days ago
            > Except that's a really bad analogy. It's more like

            The response to every analogy ever made on the internet. Can we stop using them yet?

            • carlmr 1987 days ago
              Spot on: "Analogies: Analogies are good tools for explaining a concept to someone for the first time. But because analogies are imperfect they are the worst way to persuade. All discussions that involve analogies devolve into arguments about the quality of the analogy, not the underlying situation." - Scott Adams, creator of Dilbert (I know he's quite controversial since the election, but he's on point here) in https://blog.dilbert.com/2016/12/21/how-to-be-unpersuasive/
              • DonHopkins 1986 days ago
                Scott Adams has been quite controversial long before the elections, ever since he got busted as a sock puppet "plannedchaos," posing as his own biggest fan, praising himself as a certified genius, and calling people who disagreed with him idiots, etc. Not to mention his mid to late '90s blog posts about women.

                But at least he wasn't using analogies, huh?

                http://comicsalliance.com/scott-adams-plannedchaos-sockpuppe...

                >Dilbert creator Scott Adams came to our attention last month for the first time since the mid to late '90s when a blog post surfaced where he said, among other things, that women are "treated differently by society for exactly the same reason that children and the mentally handicapped are treated differently. It's just easier this way for everyone."

                >Now, he's managed to provoke yet another internet maelstorm of derision by popping up on message boards to harangue his critics and defend himself. That's not news in and of itself, but what really makes it special is how he's doing it: by leaving comments on Metafilter and Reddit under the pseudonym PlannedChaos where he speaks about himself in the third person and attacks his critics while pretending that he is not Scott Adams, but rather just a big, big fan of the cartoonist.

                http://comicsalliance.com/scott-adam-sexist-mens-rights/

                >Dilbert's creator Scott Adams Compares Women Asking for Equal Pay to Children Demanding Candy

                Hmm, that sounds an awful lot like another analogy to me, actually... Oops!

                So maybe Scott Adams isn't the most un-hypocritical guy to quote about the problems of analogies.

        • specialist 1988 days ago
          Nice imagery. I like it.

          The other commenters made me think of the kids game Operation. https://en.wikipedia.org/wiki/Operation_(game)

          How about shock collar programming? Or electric fence programming? Or block stacking (Jenga) programming.

          Good times.

      • wutbrodo 1988 days ago
        Yup, in static vs dynamic conversations, I invariably see someone dismiss the value of compiler enforcement by claiming that you should be writing unit tests to cover these cases anyway. Every time I say a silent prayer that I never end up working with the person I'm talking to haha.
      • posedge 1988 days ago
        I don't see how this is an argument against TDD. Apparently a whole slew of things went wrong in this project but that doesn't imply that testing is the cause of them.
      • dleskov 1987 days ago
        TDD only works in conjunction with thorough peer reviews. Case in point: at my place of work, code and tests written by an intern can go through literally dozens of iterations before the check-in gets authorized, and even the senior engineers are not exempt from peer reviews (recent interns are especially eager to volunteer).
      • YeGoblynQueenne 1988 days ago
        The problem with TDD is that a dedicated developer can always make a test pass - the how is another matter.
        • celvro 1988 days ago
          when(mockObject.getVar()).thenReturn(1);
          assertEquals(1, mockObject.getVar());

          test passes boss!
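          The same tautology, sketched with Python's unittest.mock for comparison (a made-up example, not from any real test suite):

```python
from unittest import mock

obj = mock.Mock()
obj.get_var.return_value = 1

# This only asserts what the mock was just told to return:
# the "test" passes without exercising any production code.
assert obj.get_var() == 1
```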

    • Oren-T 1988 days ago
      Now pick almost any other category-leading software product and you will find a similar situation.

      The category-leading product is probably from one of the earliest companies in the field, if not the first. They have the oldest and cruftiest code - and the manpower to somehow keep it working. It is definitely not the fastest and definitely not the most stable. But they do have the resources to make sure it supports all the third party integrations and features important for the big customers.

      I have encountered exactly this same situation on several different fields and categories.

      At a time when I was a complete open source fanatic in the early 2000s, it suddenly made me realize how Microsoft actually had much better quality software than most big proprietary software vendors.

    • Dash83 1987 days ago
      Sweet-merciful Jesus. You just made me experience Vietnam-style flashbacks. I worked at Oracle for the 12.1 and 12.2 releases (not working there anymore). You just described my day to day tenure at Oracle. Thank god that's done.
    • gjmacd 1988 days ago
      You described the early part of my career in software to a T.

      I worked for a mini-computer company in the 1980's that ported Oracle (I'm thinking the version stamp was 22.1 when I was there from 1986-1990). It was one GIANT mess of standard "C" with makefiles that were in some ways larger and more complex than some of the actual kernel code it was building!

      Took 24 hours to build the product... lol

    • jmarchello 1988 days ago
      > The only reason why this product is still surviving and still works is due to literally millions of tests!

      Lesson learned. Always write tests. Your business will depend on it.

      • maltalex 1988 days ago
        On one hand, sure. They're still able to ship a working product despite having an abysmal code base. That's an excellent end result that must not be underestimated. Perhaps the problem that code base solves is really that difficult and there's no other way.

        But on the other hand, over-reliance on tests is one of the reasons they ended up in this situation in the first place. It's like the car safety engineer's joke - How do you make cars safer? Install a knife in the middle of the steering wheel pointed at the driver.

        When we're too sure that some safety feature will save us, we forget to be careful.

    • grappler 1984 days ago
      As I read this, I am vacationing in Hawaii for the first time. I can look out my window right now and see the island of Lanai. And that's what I'm doing as I read your post right now.

      Reading a few sentences, looking out at Lanai. Reading a few more sentences, and looking back at Lanai...

    • swarnie_ 1989 days ago
      As someone who codes an ERP app built on 12.2 this comment resonated with me in ways you can't begin to imagine.
    • austenallred 1988 days ago
      The notion of tests that take 20 hours blows my mind.

      The notions of tests written in C that take 20 hours I can't even fathom.

      • rubber_duck 1988 days ago
        I'm going to guess a lot of these are integration tests, not unit tests (simply going off execution time).

        At that point, for DB testing, I doubt it matters what language test are written in, it's going to be mostly about setting up and tearing down the environment over and over.

    • satanic_pope 1989 days ago
      Good lord, just reading it can cause panic attack.
      • christophilus 1988 days ago
        It literally gave me a sinking feeling. I’d quit that job on day 0.
        • springfeld 1986 days ago
          But you would have to wait for day 1+ to realize
      • jasonMickl 1987 days ago
        If you have a fear of panic attacks, then your problem is not Oracle, software, or anything else, but a psychiatric one.
    • gabriel34 1989 days ago
      Really surprising considering that Oracle is the standard for serious enterprise databases. Not really surprising when you consider Oracle's other bug ridden offerings (probably not as thoroughly tested). Makes me fear for Oracle 18c.
      • cntlzw 1989 days ago
        Not surprising at all. Their code might not be performant, maintainable, or good-looking by developer standards, but as OP said, they have a gazillion test cases that make sure Oracle DB runs and doesn't produce weird outcomes.
      • _0nac 1988 days ago
        Totally unsurprising if you've ever worked with Oracle. The layers upon layers of legacy cruft are plainly visible even in simple things like the GUI installers.
        • C1sc0cat 1988 days ago
          I remember an Oracle Forms-based product I helped develop; installing it on end users' PCs required installing several Oracle products, which meant 14 or 15 floppy disks used in the right order.
        • avisser 1988 days ago
          The fact that the first version shipped in 1979 has to contribute to this as well.

          The field of software engineering has matured a lot since then.

          • philjohn 1988 days ago
            I mean, PostgreSQL can trace its roots back to 1982's INGRES ... and UNIX started in 1969.

            There are quite a few very old projects that don't have the same level of cruft as Oracle; it epitomises a Sales Division driven culture.

            How many of those switches (that now need to be supported and tested) are because some functionality was promised to a large contract, and so it just had to be done? I would wager a good number.

    • eismcc 1989 days ago
      At least there are tests!
      • gnulinux 1989 days ago
        Tests that run for 30 hours are an indication that nobody bothered writing unit tests. If you need to run all tests after changing X, it means X is NOT tested. Instead you need to rely on integration tests catching X's behavior.
        • danmaz74 1988 days ago
          I beg to differ. Having to run the full test suite to catch significant errors is an indication that the software design isn't (very) modular, but it has nothing to do with unit tests. Unit tests do not replace service/integration/end to end tests, they only complement them - see the "test pyramid".

          I think it's important to point this out, because one of the biggest mistakes I'm seeing developers do these days is relying too much on unit tests (especially on "behavior" tests using mocks) and not trying to catch problems at a higher level using higher level tests. Then the code gets deployed and - surprise surprise - all kinds of unforeseen errors come out.

          Unit tests are useful, but they are, by definition, very limited in scope.

          • TickleSteve 1988 days ago
            (terminology nazi mode)

            "... is an indication that the software design isn't (very) decoupled ".

            You can be modular without being properly decoupled from the other modules.

            • danmaz74 1988 days ago
              Hmmm... you have a point, but then, shouldn't it be "decoupled into modules"?
            • mdcb 1988 days ago
              But then are your modules really modular?
              • avisser 1988 days ago
                In a C/C++ world, a module is usually defined as a file/.dll/.so on disk. So highly-coupled modules are still modules.
        • TickleSteve 1988 days ago
          Always test "outside-in", i.e. integration tests first, then if you can afford it, unit tests.

          Integration tests test those things you're going to get paid for... features & use-cases.

          Having a huge library of unit tests freezes your design and hampers your ability to modify in the future.

          • mercer 1988 days ago
            While I see some value in the red-green unit testing approach, I've found the drawbacks to often eclipse the advantages, especially under real-world time constraints.

            In my day to day programming, when I neglect writing tests, the regret is always about those that are on the side of integration testing. I'm okay with not testing individual functions or individual modules even. But the stuff that 'integrates' various modules/concerns is almost always worth writing tests for.

          • mooreds 1988 days ago
            Love the idea of this.

            In my experience it's far easier to introduce testing by focusing on unit testing complicated, stateless business logic. The setup is less complex, the feedback cycle is quick, and the value is apparent ("oh gosh, now I understand all these edge cases and can change this complicated code with more confidence"). I think it also leads to better code at the class/module/function level.

            In my experience once a test (of any kind) saves a developer from a regression, they become far more amenable to writing more tests.

            That said I think starting with integration tests might be a good area of growth for me.

            • TickleSteve 1988 days ago
              In general, I test those things that relate to the application, not those about the implementation.

              i.e. Test business logic edge-cases, don't test a linked-list implementation... that's just locking your design in.

              • int_19h 1988 days ago
                Writing functional tests is easy when you have a clear spec. If you do, tests are basically the expression of that spec in code. Conversely, if they're hard to write, that means that your spec is underspecified or otherwise deficient (and then, as a developer, ideally, you go bug the product manager to fix that).
              • mooreds 1988 days ago
                Right, like I said "complicated business logic". I agree completely, I have no desire to test a library or framework (unless it's proven to need it).
          • macca321 1988 days ago
            For those who want to know more about this approach: see Ian Cooper - TDD, Where Did It All Go Wrong https://www.youtube.com/watch?v=EZ05e7EMOLM

            I post this all the time, it's like free upvotes :)

        • adrianN 1988 days ago
          Integration tests are pretty important in huge codebases with complex interactions. Unit tests are of course useful to shorten the dev cycle, but you need to design your software to be amenable to unit testing. Bolting them onto a legacy codebase can be really hard.
        • theshrike79 1988 days ago
          In large systems you can unit test your code within an inch of its life and it can still fail in integration tests.
          • kaon 1986 days ago
            Exactly. I have a product that spans multiple devices and multiple architectures: micro controllers, SDKs and drivers running on PCs, third-party devices with firmware on them and FPGA code. They all evolve at their own pace, and there’s a big explosion of possible combinations in the field.

            We ended up emulating the hardware, running all the software on the emulated hardware, and deploying integration tests to a thousand nodes on AWS for the few minutes it takes to test each combination. Tests finish quickly and it has been a while since we shipped something with a bug in it.

            But there’s a catch: we have to unit test the test infrastructure against real hardware - I believe it’d be called test validation. Thus all the individual emulators and cosimulation setups have to have equivalent physical test benches, automated so that no human interaction is needed to compare emulated output to real output. In more than a few cases, we need cycle-accurate behavior.

            The test harness unit (validation ) test has to, for example, spin up a logic analyzer and a few MSO oscilloscopes, reinitialize the test bench – e.g. load the 3rd party firmware we test against, then get it all synchronized and run the validation scenarios. Oh, and the firmware of the instrumentation is also a part of the setup: we found bugs in T&M equipment firmware/software that would break our tests. We regression test that stuff, too.

            All in all, a full test suite, run sequentially, takes about 40,000 hours, and that’s when being very careful about orthogonalizing the tests so that there’s a good balance between integration aspects and unit aspects.

            I totally dig why Oracle has to do something like this, but on the other hand, we have a code base that’s not very brittle, but the integration aspects make it mostly impossible to reason about what could possibly break - so we either test, or get the support people overwhelmed with angry calls. Had we had brittle code on top of it, we’d have been doomed.

        • Tharkun 1988 days ago
          If you're only writing tests at the unit level, you might as well not bother. And it's always good to run all tests after any change, it's entirely too easy for the developer to have an incomplete understanding of the implications of a change. Or for another developer to misuse other functionality, deliberately or otherwise.
        • semicolon_storm 1988 days ago
        It could also be that the flags are so tangled together that a change to one part of the system can break many other parts that are completely unrelated. Sure you can run a unit test for X, but what about Y? Retesting everything is all you can do when everything is so tangled you can’t predict what a change could affect.
        • oraguy 1988 days ago
          > Tests that run for 30 hours is an indication that nobody bothered writing unittests.

          Yes, they were not unit tests. There was no culture of unit tests in the Oracle Database development team. A few people called it "unit tests" but they either said it loosely or they were mistaken.

          Unit tests would not have been effective because every area of the code was deeply entangled with everything else. They did have the concept of layered code (like a virtual operating system layer at the bottom, a memory management layer on top of that, a querying engine on top of that, and so on) but over the years, people violated the layers and wrote code that called an upper layer from a lower layer, leading to a big spaghetti mess. A change in one module could cause a very unrelated module to fail in mysterious ways.

          Every test was almost always an integration test. Every test case restarted the database, connected to the database, created tables in it, inserted test data into it, ran queries, and compared the results to ensure that the observed results match the expected results. They tried to exercise every function and every branch condition in this manner with different test cases. The code coverage was remarkable though. Some areas of the code had more than 95% test coverage while some other areas had 80% or so coverage. But the work was not fun.

          • danpalmer 1988 days ago
            I'm amazed that it had that many tests that took that long, but ONLY had 80-95% coverage. I understand the product is huge, but that's crazy.
            • contender_x 1988 days ago
              I do. It's about state coverage: every Boolean flag doubles the possible states of that bit of code, so now you need to run everything twice to retain the same coverage.

              FWIW, I know people who work on SQL processing (big data Hive/Spark, not RDBMS), and a recurrent issue is that an optimisation which benefits most people turns out to be pathologically bad for some queries for some users. Usually those with multiple tables with 8192 columns and some join which takes 4h at the best of times, now takes 6h, and so the overnight reports aren't ready in time. And locks held in the process are now blocking some other app which really matters to the business's existence. These are trouble because they still "work" in the pure 'same outputs as before' sense; it's just that the side effects can be so disruptive.
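              The doubling is easy to make concrete. A hypothetical Python sketch (not Oracle's harness) of how flag count drives the number of configurations each test scenario must cover:

```python
from itertools import product

# With n independent boolean flags, exhaustively covering flag
# interactions means 2**n configurations per test scenario.
def flag_combinations(n):
    return list(product([False, True], repeat=n))

assert len(flag_combinations(1)) == 2
assert len(flag_combinations(10)) == 1024   # 10 flags: ~1k runs per scenario
```

              At the "sometimes 100s of flags" scale described upthread, exhaustive state coverage is plainly impossible, which is why even millions of tests leave coverage gaps.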

            • JaRail 1988 days ago
              Writing tests for error handling can be a pain. You write your product code to be as robust as possible but it isn't always clear how to trigger the error conditions you can detect. This is especially true with integration tests.
        • hprobotic 1988 days ago
          How about XS Max =]]
    • aleksanderhan 1988 days ago
      Sounds like the way a reinforcement learning algorithm would write code.
    • api 1988 days ago
      Code like this makes me think of the famous line from the film version of Hellraiser: "I have such sights to show you..."

      Contrast PostgreSQL or... uhh... virtually any other database. Oracle's mess is clearly a result of bad management, not a reflection of the intrinsic difficulty of the problem domain.

      • ternaryoperator 1987 days ago
        Nonsense. The problem domain you dismiss is hideously complicated. Oracle DB and PostgreSQL are entirely different classes of products. No airline runs its reservation system on PostgreSQL. That's not a coincidence.
        • meshugga 1987 days ago
          It's not a coincidence, no, because Oracle can provide support guarantees in a way a Postgres contractor can not.

          This is also a factor for independent developers (who build airline reservation systems) who need to choose a RDBMS for their product - they'll choose oracle, because ... Oracle can provide support guarantees in a way a Postgres contractor can not.

          Which makes Oracle not a different class of product than Postgres, but a different class of support for the product. (which could be considered part of the product, so ... maybe you're right)

        • TheHaakon 1979 days ago
          No, they use Amadeus. Amadeus is a wonderful mainframe program that perfectly and with 100% accuracy faithfully models how you'd book a train ticket in France in the fifties.

          What more could we want?

        • p0nce 1987 days ago
          > The problem domain you dismiss is hideously complicated.

          https://medium.freecodecamp.org/the-biggest-codebases-in-his...

          Size of software reflects the number of people working on it (and for how long), not essential complexity.

        • rv77ax 1987 days ago
          Have you heard the term "marketing"?
        • mdavid626 1987 days ago
          This is interesting, could you elaborate?
      • castathrowaway 1988 days ago
        The KDB+ database has been around for 20 years.

        The executable is ~ 500kb.

        Enterprise software is gross.

        • majewsky 1987 days ago
          > The executable is ~ 500kb.

          So... is this good or bad? A Hello World in Go is on the order of 2 MB. That doesn't say anything about code bloat, it just says that Go prefers static over dynamic linking.

    • Kip9000 1987 days ago
      Perhaps the most valuable part of this whole thing is the tests. With that test bank, one could perhaps start from scratch and write a new database.
    • nicoburns 1988 days ago
      I think this validates my view that testing is important, but keeping the codebase clean, readable and modular is more important!
      • humanrebar 1988 days ago
        Why not both?
        • frogpelt 1988 days ago
          Yes, both. But maybe one is more important?
          • mrkarim 1988 days ago
            I'd argue the tests are more important. For example, Oracle is still the leading commercial DB; if your product works, people will buy it.
            • Johnny555 1987 days ago
              Isn't that mostly due to momentum? But if Oracle can't keep up with the features customers need, they'll lose that momentum.

              Enterprise software sales cycles are slow, but once they start turning, it's hard to turn them back.

    • nspattak 1986 days ago
      Really interesting post. I have often worked in such a mess of code, though in codebases orders of magnitude smaller with only a few developers. I would never have imagined that a project like Oracle is like this. Since there seem to be a number of Oracle employees around, I would be very interested to know if there have been any proposals to start cleaning up this shit. The man-hours wasted by this workflow are so huge that I expect even a small percentage of Oracle's developers could be assigned to rewrite it; they could catch up with the rest of the product in a reasonable time frame, so that in a few years it could be rewritten and stop wasting developers' time.
    • sebslomski 1989 days ago
      Fingers crossed that there are no merge conflicts & conflicting tests.
      • kamaal 1988 days ago
        Everyone's on the same roughly two-month schedules, so I guess that won't be much of a problem.
    • dwisehart 1987 days ago
      So I think the interesting question is: when you run into a large, bloated, unwieldy POS codebase, how do you fix it? You obviously need some buy-in from management, but you also need a plan that doesn't start with "stop what we are doing and get everyone to spend all of their time rewriting what we have" or "hire a ton of new devs."

      I have seen smaller versions of what the OP describes. My plan was that every new piece of code that was checked in had to meet certain guidelines for clarity--like can the dev sitting next to you understand it with little to no introduction--and particularly knotty pieces of existing code deserved a bug report just to rewrite and untangle the knots.

      In the end, whatever your plan, I think what you need is a cultural change, and cultures are notoriously difficult to change. Any cultural change is going to have to start high up in the organization with the realization that the codebase is an unsustainable POS.

    • chocks 1987 days ago
      I had a very similar experience in another enterprise storage company with a code base of ~6M loc of C/C++ and gazillion test cases. Originally, it used to take roughly about an hour to just build the system where it did a bunch of compile time checks, linting etc. Then if everything goes well, it goes to code review, then to a set of hourly and nightly integration checks before it gets merged to the main branch. It would take another cool 3-4 months of QA regression cycle before it gets to the final build.
    • lousken 1986 days ago
      now that's a candidate for a Rust rewrite
    • thumb 1987 days ago
      This sounds a lot like Walmart's codebase for their POS registers. Except, the kicker is that there are zero unit tests, zero test frameworks, etc. You just have to run it through the shitty IBM debugger and hope that you don't step on anyone else's work. Up until 2016, they didn't even have a place to store documentation. Each register has ~1000 flags, some of which can be bit switched further into testing hell.
    • zunairminhas 1987 days ago
      The structure must be highly coupled with low cohesion. It should have been designed with high cohesion. More components/modules should be designed with the ability to be further divided into smaller modules/components as the requirements grow. Fixing bugs in smaller components is much easier than fixing them in the overall project.

      Not sure if Oracle is already following this or not. But this is necessary for scalable projects.

    • coldcode 1988 days ago
      Sounds like working on batch apps on mainframes in the 70's, one compile/run cycle a day, next morning get a 1 foot high printed stack of core dump.
    • srkigo 1988 days ago
      How can it have millions of tests for 25 million lines of code? How many lines of code are there including the code in the tests?
      • eigenspace 1988 days ago
        You can have automated test generation. I'd imagine with a database system, you'd have a big list of possible ingredients to a query and then go through a series of nested for loops to concatenate them together and make sure each permutation works separately. That can easily make for thousands of tests with only a few lines of code.
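        That combinatorial approach can be sketched in a few lines of Python (the ingredient lists here are made up for illustration, not anything from Oracle's actual harness):

```python
from itertools import product

# Hypothetical query "ingredients" -- a real DB test harness would have far more.
SELECTS = ["SELECT *", "SELECT COUNT(*)"]
FILTERS = ["", "WHERE id > 0"]
ORDERS = ["", "ORDER BY id"]

def generate_cases():
    """Yield one test query per combination of the ingredients."""
    for sel, flt, order in product(SELECTS, FILTERS, ORDERS):
        yield " ".join(p for p in (sel, "FROM t", flt, order) if p)

cases = list(generate_cases())
# 2 * 2 * 2 ingredients already gives 8 distinct queries; realistic
# ingredient lists blow this up into thousands of tests from little code.
```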
      • pkroll 1988 days ago
        Along with what eigenspace said, check out SQLite's Testing page: https://www.sqlite.org/testing.html (the project has 711 times as much test code and scripts, as code). You can go really far... and still miss things on occasion.
      • oraguy 1988 days ago
        The 25 million lines of code is only the source code of Oracle Database written in C.

        The test cases are written in a domain specific language named OraTst which is developed and maintained only within Oracle. OraTst is not available outside Oracle. The OraTst DSL looks like a mixture of special syntax to restart database, compare results of queries, change data configuration, etc. and embedded SQL queries to populate database and retrieve results.

        I don't know how many more millions of lines of code the tests add. Assuming every test takes about 25 lines of code on average (every test used to consume about half of my screen to a full screen), we can estimate that the tests themselves consume an additional 25 to 50 million lines of code.

    • royaltm 1986 days ago
      I wonder which version control system they use for that monstrosity (and how long it takes to check out or commit changes)?
    • HauntedMidget 1988 days ago
      Thank you. I had a really rough day caused by the project I inherited. Doesn't seem so bad in comparison now.
    • cholmon 1987 days ago
      Wow. What is the average salary for an Oracle developer doing this sort of work?
    • dynofu 1982 days ago
      It reminds me of my early days at Oracle; most of the time was spent debugging what was wrong with the test case.
    • bk718 1983 days ago
      This is why Java is better.
    • Jagarta 1987 days ago
      What happens if any of the tests are wrong? (got bugs themselves)
      • BadHash 1987 days ago
        not much difference, it’s a feature now!
    • ivoribeiro 1988 days ago
      I love my job
    • dlandi 1987 days ago
      #JobSecurity
    • xttblog 1984 days ago
      Big companies are no better than this! www.xttblog.com
      • musicgood 1983 days ago
        Programmers have a hard life: in the interview you build rockets, then on the job you tighten screws, and damn, they're screws that have been rusting for thirty years.
        • liutao_wb 1983 days ago
          Bro, where are you tightening screws? Take me with you.
    • jasonMickl 1987 days ago
      Where is the code?

      Is this pure B.S. or a true story? Do you have it, or is it one of your B.S. stories?

      Show us the beef!

      Those of us who have worked WITH the database know it is extremely modular. THAT is why it has survived decades and will keep surviving for years to come.

      So Mr B.S. artist, show it or shut up !

  • dejv 1988 days ago
    I am maintaining an application in the construction industry space. That application was created 25 years ago by a construction worker who had never written a single line of code before, but because he caused a lot of problems on the construction site they gave him a Programming 101 book and let him build it.

    15 years later the app was close to half a million lines of a huge bowl of spaghetti code. The only comments in the whole codebase were timestamps. I don't know why he dated his code, but I find it fascinating: he basically never deleted anything, so you can find the different timeframes in which he discovered various concepts. There is a use-exceptions-instead-of-ifs period, there is the time when he discovered design patterns, there is the time before he learnt SQL when all the database queries were done by iterating over whole tables, and such. I am sure I will find a commented-out Hello World somewhere in the code someday.

    I have been working on this codebase for 10 years. Code quality has improved and major issues got fixed, but there is not enough budget to actually rewrite the whole system, so after all it is more or less a huge spaghetti monster and I have got used to it.

    • ThePhysicist 1988 days ago
      I have to salute this construction worker for building a solution that is apparently so valuable to the business that they can't simply replace or rewrite it. This probably means it solves a real problem for them, and adding 30,000 lines of code per year without any formal training or much tooling is no small feat either. I understand the criticisms and laughs here from the "real" software developers, but damn, it's just impressive what people can create on their own given enough time and motivation.
      • dejv 1988 days ago
        It is impressive indeed. The coding started just as Windows 95 was released. There was no Stack Overflow, and they didn't even really have internet back then. The programmer (as far as I know) didn't even speak English, so he had access to a book or two in German and the code snippets in the help section of Delphi. At the same time, building applications with a UI had only just become common, so there was very little experience available, especially in rural Austria.

        The company did try to migrate to other software a few times, but the software is just too specific to the given industry and the legislation of a small country; the companies who tried to create similar software usually went bankrupt soon after.

        • meshugga 1987 days ago
          Rural Austria? Please provide a company name ... or at least first and last character :)
      • wasm_future 1988 days ago
        Somewhere out there, there's a software developer who was assigned the task of building the team's office using 30,000 bricks, making all kind of spaghetti patches to prevent it from falling over, and the construction workers are laughing about it on a construction worker forum.
        • quickthrower2 1987 days ago
          And this is the wall where he discovered you can use cement to bind the bricks. And here he even mixes that cement with sand and water. This is a safer place to stand.
      • mvindahl 1988 days ago
        This.

        A spaghetti monster which solves a real business problem can be improved, chunked into pieces, gradually rewritten, whatever improves maintenance. If need be, there will be funds and time for doing so.

        By contrast, an impeccably architected, layered, no-design-patterns-omitted product which solves no business problem... oh, the horror.

    • christophilus 1988 days ago
      One of my first jobs in the industry was really similar. I ended up sitting down with a friend and rewriting it in C#. We didn’t have permission, but no one knew until our codebase was already in working shape (a month or so). We got away with it because the original codebase was so bad that it hadn’t shipped in 5 years. Months of 0 productivity were normal. My friend and I went on to rewrite the entire suite of products over the course of a year or two. We then started our own business. The rewrites were the most successful products that company had ever had in the modern era. Rewriting is not always a bad idea, and it can be less expensive in the medium run. Few seem to realize this, thanks to Joel Spolsky’s blog post on the matter being seen as dogma.
      • kaon 1986 days ago
        We had a bunch of code at work that everyone sort of begrudgingly used that I wrote almost 20 years ago, not knowing too much about what I was doing. I have recently rewritten it – about 100kLOC of messy C++ turned into 25kLOC of bliss. The test coverage we had ensured that we didn’t have to worry about anything breaking. I hate myself a bit less :)
    • mabbo 1988 days ago
      I feel like you could make a huge chart on the wall showing the epochs and what the previous developer didn't know at that time. Like "why would it be done this way? Ah 1997, let's see on the chart... Ah right, Greg doesn't know SQL here"
      • zengineer 1988 days ago
        Haha that would be amazing!
    • bryanrasmussen 1988 days ago
      What kind of problems do you cause at a construction site that don't get you fired, but instead get you reassigned to programmer with a Programming 101 book?
      • dejv 1988 days ago
        Probably fell from the roof a few times, or wasn't really handy with a hammer or something.

        The company is quite fascinating. They started around 1945 and over the years they've become a small conglomerate. There are three or even four generations working together, and once they like you as a person they will find something for you to do.

        • mooreds 1988 days ago
          Is the original programmer still at the company in any capacity?
          • dejv 1988 days ago
            No, he left before I was hired as a consultant. I never got a chance to talk to him in any form. I am the only person who has touched the source code in the past 10 years. I don't know why he left or where he went.
      • smilesnd 1988 days ago
        From my experience this means they had a strong union.
      • adrianN 1988 days ago
        Perhaps he played too much Minecraft and tried building a Turing machine from the materials on the construction site?
        • alorimer 1988 days ago
          C can't be that different from redstone right?
    • zhte415 1988 days ago
      This sounds quite awesome and terrible, and he sounds like he was jumping into a very deep end. It's quite sad that he seemingly didn't have anyone to tutor/mentor him when moving into this role. Given that it worked, and given the learning curve you described over a decade, he seems to have had an enormous amount of determination to get things to work.
    • imAsking9836 1988 days ago
      Like reading a journal you found in the abandoned house you just moved into, left behind by the kid who used to live there. Sounds like a movie.
      • dejv 1988 days ago
        Oh yeah, it might be a comedy where two people argue about spaces vs tabs, and in comes our function of 20k lines in a single block of code that never heard about using either of those... nor about moving repeated code into its own function.
    • C1sc0cat 1988 days ago
      I recall a civil engineering suite of programs that had been converted from Basic into Fortran IV.

      The BASIC was so old it only had two-character (yes, two) variable names, and the Fortran code made liberal use of arithmetic IF statements!

      An example of one is IF (S12 - 1.0) 13, 13, 12
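      For readers who never met it: the arithmetic IF branches three ways on the sign of its expression, jumping to the first label if negative, the second if zero, and the third if positive. A rough Python rendering of those semantics (labels returned as values, since Python has no goto):

```python
def arithmetic_if(value, neg_label, zero_label, pos_label):
    """Mimic Fortran's three-way arithmetic IF: pick a jump
    label by the sign of the tested expression."""
    if value < 0:
        return neg_label
    if value == 0:
        return zero_label
    return pos_label

# IF (S12 - 1.0) 13, 13, 12
# branches to label 13 when S12 <= 1.0 and to label 12 otherwise.
s12 = 0.5
target = arithmetic_if(s12 - 1.0, 13, 13, 12)  # -> 13
```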

    • deepaksurti 1988 days ago
      >> there is not enough budget to actually rewrote whole system,

      As pointed out elsewhere, this is definitely solving a real problem, the longevity of the app is the proof.

      Instead of rewriting, can you replace parts of it with newer idioms? An MVP/PoC of a newer way of solving one of the problems the software solves, with some tangible gains (the latter is more important), can lead to approval of a mini budget for that MVP, and who knows what that can lead to.

      • dejv 1988 days ago
        Well, maybe. The biggest problem is that it uses a database from the 80s, in a way where sometimes it acts like a database and sometimes it is used by copying files (one file per table) around random directories with a custom locking mechanism.

        The app consists of maybe 20 different codebases that generate around 30 executables, kind of randomly, fetching source files from different codebases as the programmer saw fit, plus random "fixes" of system modules/components that make it all very, very hard to do much groundbreaking work.

    • umichguy 1988 days ago
      That's fascinating and gave me a genuine laugh.
    • melenaos 1988 days ago
      I think you won! I had a similar experience at my first professional job, a CAD/CAM that solves building structures for anti-seismic regulations. Every developer had his own lib, with duplicated code carrying different bugs in each lib, no comments, every screen was built from copy-pasted code of other screens, and there were no tests at all. Undiscovered bugs were out there for many years without any way of knowing, along with special sleep functions producing intentionally slow code.
    • arwhatever 1988 days ago
      "lava flows" anti-pattern
    • theflork 1988 days ago
      What is he up to now?
      • dejv 1988 days ago
        I don't know. The company is based in rural Austria, so it took them quite some time to find another engineer who would take on this project (me). I have never met him or had any other contact with him.
    • deepsun 1988 days ago
      Well, if you've been working on a codebase for 10 years, then "no budget" is not really an excuse, sorry. As a responsible engineer, you should have either convinced management to spend some of your time on refactoring the main parts, or cleaned it up yourself bit by bit every time you touched something. 10 man-years should be enough for a program that was created in 10 years by a single rookie dev.
      • dejv 1988 days ago
        Sure, if this were my job I would do it, but I am just a contractor with a set amount of hours devoted to the project. The first few years were spent fighting fires, as the company needed this very specific software to function; currently it is just about keeping an eye on it, occasionally fixing some report, or updating data pipelines.

        I would say that a rewrite would cost about 2 million euros, which is a really big price tag for a company that uses this system as a back-office tool.

        • TheHaakon 1979 days ago
          Some companies have back offices that they've spent considerably more on. Airline companies, for instance, may have a tool that lets the person at the gate check who a passenger is, what their deal is, etc. And then GDPR happened, and with the bill to ensure that every rule is followed to the letter, suddenly 2M€ isn't that bad after all...
  • wcarss 1989 days ago
    Years ago as an intern at Microsoft, I had code go into the Excel, PowerPoint, Word, Outlook, and shared Office code.

    Excel is an incomprehensible maze of #defines and macros, PowerPoint is a Golden Temple of overly-object-oriented insanity, and Word is just so old and brittle you'd expect it to turn to dust by committing. There are "don't touch this!"-like messages left near the main loop _since roughly 1990_.

    I had a bug in some IME code causing a popup to draw behind a window occasionally, and a Windows guru had to come in to figure it out by remote-debugging the windows draw code with no symbols.

    I learned there that people can make enormous and powerful castles from, well, shit.

    • PaulHoule 1989 days ago
      "Don't touch this" around the main loop can mean being able to make promises about responsiveness, reliability, etc.

      Frequently there are critical code sections where it is much easier to tell people "don't touch it" rather than training people how to work on it safely.

      • mannykannot 1989 days ago
        When that is the case, would it not also be a really good place to explain why not, or provide a link to the place where such an explanation is provided?
        • thepp1983 1989 days ago
          In reality there often isn't time.

          Getting it done > getting it done properly, as far as management is concerned.

          • Ace17 1988 days ago
            Ever had this discussion with a coworker?

            Coworker: "I hadn't enough time to do it right"

            You: "Given enough time, how would you do it differently?"

            Coworker: "............" (crickets)

            IMHO it's not related to deadlines only; the "not enough time" argument is often a comfortable fallacy, keeping us from facing the limits of our current skills.

            I found it to be especially true with testing. I've lost count of how many times I heard "we didn't have time to write (more) tests". But testing is hard. And when given enough time, these developers don't magically start doing it "right" overnight.

            • thepp1983 1988 days ago
              Knowing when something isn't right is easier than knowing how to do it right. So I would be wary of calling it a lack of skill on your co-workers' part.

              e.g.

              I had to write a bespoke popup window launcher for a large gambling company in the UK. The games were mainly the awful slots games that you see in motorway service stations. These are basically one-armed bandits on steroids.

              There was a lot of logic in JavaScript that should have been in C#. I had to design it correctly to work with a third-party proprietary CMS system, and I had to manage session tokens on 3 to 4 third-party systems. Not easy.

              It took me about 2 weeks of just reading the code and absorbing it, drawing lots of diagrams of how data flowed through the system and then porting that logic over to C# in a way that would work with the CMS system in a logical and OOP fashion and handling auth tokens effectively.

            • carlmr 1988 days ago
              >I found it to be especially true with testing. I've lost count of how many times I heard "we didn't have time to write (more) tests". But testing is hard. And when given enough time, these developers don't magically start doing it "right" overnight.

              Bingo. It's not necessarily only skills, though; it can be myriad reasons, and "no time" is just the easiest excuse they can think of. In big companies I've often seen the company process prescribe such bad tools that a good TDD testing strategy is impossible with them, but they won't move away, because somebody in purchasing already bought 10,000 licenses for the bad tool (which is often just a bad GUI, which doesn't really help, except for selling the thing).

              The worst tool was a bad GUI where you couldn't even define the functions you wanted to use, and that had slow (>1h), non-deterministic test execution, for a unit test.

              • thepp1983 1988 days ago
                To even write unit tests effectively, you need to write your code in a certain way.

                In C# this normally means using IoC + DI.

                Also, almost nobody I know does proper TDD. I know it is very convincing when one of the TDD evangelists shows you how to write something like a method to work out the nth number in a Fibonacci sequence using nothing but tests written first.

                In reality, 95% of developers who even write tests write the logic first and write the test afterwards to check that the logic does what it should.
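                The Fibonacci demo mentioned above goes roughly like this: the assertions are written before the implementation, and the implementation is the minimal code that satisfies them. A sketch in Python:

```python
# Test-first: these assertions exist before the implementation does.
def test_fib():
    assert fib(0) == 0
    assert fib(1) == 1
    assert fib(2) == 1
    assert fib(10) == 55

# The minimal iterative implementation written to satisfy the tests above.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

test_fib()  # all assertions pass
```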

                • carlmr 1987 days ago
                  >To even write unit tests effectively, you need to write your code in a certain way.

                  >In C# this normally means using IoC + DI.

                  I've become quite partial to functional programming in the last few years. Side effect free functions with functions as interfaces for DI lend themselves perfectly to TDD and data parallel async without worrying too much.

                  C# is now slowly taking over most of the good features from F#, but I think the culture won't transform so easily.
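                  The functions-as-interfaces style described above might look like this in Python (all names here are made up for illustration): the dependencies are injected as plain functions, so a test can pass pure stand-ins without any mocking framework.

```python
from typing import Callable

def build_report(fetch_total: Callable[[], float],
                 notify: Callable[[str], None]) -> str:
    """Pure orchestration: all side effects live behind injected functions."""
    message = f"Total: {fetch_total():.2f}"
    notify(message)
    return message

# In a test, inject cheap, side-effect-free stand-ins instead of real I/O:
sent = []
result = build_report(lambda: 42.0, sent.append)
```

In production code the same function would receive a real database query and a real mailer, unchanged.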

                • hprotagonist 1988 days ago
                  and let’s not forget the ever popular “bug-driven testing”.
            • de_watcher 1988 days ago
              I quickly realized that there is never enough time.

              If anyone feels there is enough time, then (depending on their position: dev/manager/client/etc.) they start slacking, shifting focus, moving deadlines, moving staff, demanding more features/support/documentation or new requirements analysis, or getting more pissed about smaller bugs.

            • SomeCallMeTim 1988 days ago
              Unlike your coworker, I _always_ have a plan. Often a dozen of them. With various pros and cons for each.

              But also unlike your coworker, I probably _figured out how to do it right_ in the time given. It's pretty rare that I don't have time to do it right; it does happen (especially with extreme instances of scope creep and requirements drift), but it's rare.

              Which I guess is your point? The time excuse is just an excuse, and a good developer writes good code.

            • PaulHoule 1988 days ago
              The one that gets me is when you've designed something as simple as possible, and then, to make it "simpler", people insist on making it less general and, paradoxically, more complex in a small way.

              Related to that is the "obvious performance fix" that doesn't perform faster, which keeps burning up time for years after it was proven not to be faster, because freshers never found out about it and the oldsters forgot.

          • technix32 1989 days ago
            This is a complete fallacy IMO. The time spent is 10-fold down the line when people are attempting to reverse engineer the code in order to maintain it.
            • jjeaff 1989 days ago
              Presumably, down the road, you'll either be a defunct company or doing well enough to afford 10 times the manpower to fix things.

              Facebook was a spaghetti code mess in the beginning. I'm sure it caused them some growing pains, but moving too slowly early on would have likely been more costly.

              • flukus 1988 days ago
                > Presumably, down the road, you'll either be a defunct company or doing well enough to afford 10 times the manpower to fix things.

                Only in startup land which is still a small fraction of our industry.

                Most places will never have 10 times the manpower to fix things and are hurting themselves by not doing them properly in the first place.

                > Facebook was a spaghetti code mess in the beginning. I'm sure it caused them some growing pains, but moving too slowly early on would have likely been more costly.

                Survivorship bias: for every Facebook, how many potentially viable companies never got off the ground because users couldn't tolerate using their steaming pile?

                • SatvikBeri 1988 days ago
                  Notably, MySpace is often cited as a company that failed because their codebase was terrible, which prevented them from adding features as quickly as Facebook could (despite having many more engineers at the time.)
                  • thepp1983 1988 days ago
                    I don't believe that for one second, considering the hacks that Facebook has had to do to get around the limitations of PHP.
              • Kaveren 1988 days ago
                I'm completely opposed to the view that bad craftsmanship is acceptable because of time constraints. You are paying for it very dearly, very soon. It is of the utmost importance to write the best code you can from the beginning, and I don't believe it slows you down very much, if at all.

                If you've ever seen a software product where something that should take a weekend takes months to get out, it's often not because the problem is more complicated than you'd think, but because of a mangled, complex codebase which prevents anyone from getting real work done.

                Edit: Removed a bunch of redundancy.

                • thepp1983 1988 days ago
                  Well, I am sorry, but you are deluded.

                  In almost any industry, you normally have 3 properties to choose from:

                  1. Fast 2. Cheap 3. Quality

                  You can only pick two of those. E.g. the Apollo space program chose 1 & 3; it was insanely expensive, however, and the US beat the Russians.

              • lazerwalker 1988 days ago
                Both you and this comment's parent are correct. Which doesn't say anything about the nature of software engineering or project management, but more the importance of taking context into account when considering advice.
            • dc_gregory 1989 days ago
              In this case, the evidence is that Excel/Word etc. are doing fairly well...
            • thepp1983 1988 days ago
              LOL.

              The long and short of it: as a contractor I have to get it done. It will probably be me making the changes later, and I make sure I put in these things called comments.

              Also, developers pretending code quality is an either/or proposition is a false dichotomy. You can write 80% of it in a correct manner and the other 20% can be just hacks to get it done in time. You can't write the perfect system.

              So I am sorry you are the one being fallacious.

            • mixmastamyk 1988 days ago
              Try telling that to a pointy-haired boss.
        • lozenge 1988 days ago
          To be fair, if you are implementing a feature or fix in Word and think you need to edit the main loop - 99% chance you are wrong and the fix would be better placed elsewhere. And 99% chance that an edit will cause regressions or changes in behaviour elsewhere.
        • humbleMouse 1989 days ago
          It is important to point out that most computer systems are running non-deterministic operating systems.

          For example, code running in JVMs on top of non-deterministic operating systems sometimes behaves in really odd ways. Sometimes a main loop is stable for reasons nobody understands.

      • rawoke083600 1988 days ago
        Yup! Code is the how... comments are the why.
    • siruncledrew 1988 days ago
      Maybe in the year 2050 some future intern/employee will be adding to the Office code and wonder about the "Don't touch this" code relics of the past that someone from long ago left for future generations.
      • maxxxxx 1988 days ago
        I bet in 2050 people will puzzle over the microservice-cloud-javascript-Go-caching legacy systems that were developed in 2018 and be scared...
        • Endy 1988 days ago
          I mean, in 2018, I'm scared witless of all that. Anything cached in the background to someone else's computer by something as unstable and slow as ECMAScript is just... a very bad idea. Keep programs local, and don't use web coding/scripting for anything but the web itself, through a free-standing browser.
    • smcameron 1988 days ago
      1990 you say.... maybe the programmer was just an MC Hammer fan.
    • exikyut 1987 days ago
      Waited a bit for this question to pop up, but it didn't, to my surprise. So:

      > ...remote-debugging the windows draw code with no symbols

      Why, specifically, were no symbols available? I can't come up with an explanation. Surely old symbols are kept. Do checked builds take longer to iterate on (i.e. build), or something?

      • wcarss 1987 days ago
        That's a great question! I didn't mean to imply that we absolutely couldn't have used symbols -- the answer is just because he was able to figure it out without them and it was less effort to try without first.

        Office and Windows are different teams and units, so one dev on one team typically wouldn't have access to all of the symbol info for the codebase of the other. Setting that up takes some hoop-jumping, so he tried without and ended up figuring things out just fine over a few hours.

        What I wanted to demonstrate was that in that moment all he had available was shit, and he still managed to push the castle higher.

        • exikyut 1986 days ago
          Wow, I see.

          I must admit, I do very much wonder what kind of environment Microsoft would be if teams were less segregated. I found http://blog.zorinaq.com/i-contribute-to-the-windows-kernel-w... in the comments, which seems to hint at the same sort of theme somewhat - particularly the bit about contributing to teams other than your own. There's a strong notion of isolation.

          This is just thinking out loud, a response is not required. Everywhere has pros and cons. I'm (even with all this moping) actually less hesitant about MS as a whole than the rest of FAANG (except for N, which I also don't see a problem with) - not because of the whole "new MS" thing, or GH, but because everyone else seems to have fewer scruples than I consider to be a viable baseline. So there's that. :)

          It's just kind of sad to see these kinds of inefficiencies, and it would be cool to eliminate them. It'd unleash organizational chaos for a while, but it would be totally worth it.

    • kaybe 1987 days ago
      Can you guess what happened here:

      https://news.ycombinator.com/item?id=15745250 ?

      It appears that a bug in an Office component was fixed by manual binary editing. Is that plausible?

    • java-man 1989 days ago
      and this tradition proudly continues with Windows 10!
  • boyter 1988 days ago
    The worst program I ever worked on was something I was asked to maintain once. It consisted of two parts. The first was a web application written in ASP. The second portion was essentially Microsoft Reporting Services implemented in 80,000 lines of VB.NET.

    The first thing I did was chuck it into VS2010 and run some code metrics on it. The results: 10 or so methods had 2000+ lines of code each. The maintainability index was 0 (a number between 0 and 100, where 0 is unmaintainable). The worst function had a cyclomatic complexity of 2700 (the worst I had ever seen on a function before was 750-odd). It was full of nested in-line dynamic SQL, all of which referred to tables with 100+ columns, which had helpful names like sdf_324. There were about 5000 stored procedures, most of which were 90% similar to other ones with a similar naming scheme. There were no foreign key constraints in the database. Every query, including updates, inserts and deletes, used NOLOCK (so no data integrity). It all lived in a single 80,000-line file, which crashed VS every time you tried to make a simple edit.

    I essentially told my boss I would quit over it as there was no way I could support it without other aspects of work suffering. Thankfully it was put in the too hard basket and nobody else had to endure my pain. I ended up peer reviewing the changes the guy made some time later and a single column update touched in the order of 500 lines of code.

    There was one interesting thing I found with it, however: there was so much repeated/nested if code in methods that you could hold down page-down and it would look like the page was moving the other way, similar to how a wheel on TV can look like it's spinning the other way.

    • specialist 1988 days ago
      When I do code necromancy, instead of understanding the code, I try to understand what it does, how it interacts with the world, and then recreate that behavior.

      Meaning I capture all the I/O and recreate it. SQL, HTML, PDF, CSV, whatever. Serialize everything, before and after. And then do diffs on the outputs to see if new code reproduces expected behavior.

      Much easier than dead code removal, code deduping, incremental changes, backfilling tests, etc.

      Once someone captures, documents what the code is actually doing, the real refactoring begins. Removing unnecessary queries. Grooming data. Simplifying schemas. Aligning the app with the biz. Etc.
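      The capture-and-diff step can be sketched roughly like this (a toy sketch; `legacy_report` and `rewritten_report` are hypothetical stand-ins for the old code path and its recreation):

```python
import difflib

# Hypothetical stand-ins for the legacy code path and its recreation.
def legacy_report(rows):
    return "\n".join(f"{name},{qty}" for name, qty in rows)

def rewritten_report(rows):
    return "\n".join(f"{name},{qty}" for name, qty in rows)

def golden_master_diff(rows):
    """Serialize both outputs and diff them: an empty diff means the
    new code reproduces the captured behaviour for this input."""
    old = legacy_report(rows).splitlines(keepends=True)
    new = rewritten_report(rows).splitlines(keepends=True)
    return list(difflib.unified_diff(old, new, "legacy", "rewrite"))

assert golden_master_diff([("widget", 3), ("gadget", 5)]) == []
```

      In practice the "legacy" side is a serialized capture (SQL, HTML, PDF, CSV, ...) recorded from the running system rather than a live function call.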

      • hprotagonist 1988 days ago
        This is often the only sane thing to do.

        It’s my default approach with thorny scientific code: get everyone to agree on what output should be for a bunch of relevant inputs, get the original systems output, hash it, and then write a bunch of tests in a new project that all assert that these inputs produce outputs whose hashes are as follows..

        then never look at the guts of the horror again.
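        A minimal sketch of that hash-pinning idea in Python (`scary_scientific_code` is a made-up stand-in for the original system):

```python
import hashlib

def sha256_of(output: str) -> str:
    """Hash the serialized output so the test suite only stores digests,
    not the possibly huge outputs themselves."""
    return hashlib.sha256(output.encode("utf-8")).hexdigest()

# Made-up stand-in for the horror whose guts we never look at again.
def scary_scientific_code(x: float) -> str:
    return f"{x * 2.0:.6f}"

# Digest recorded once from the original system's output for input 21.0.
EXPECTED = sha256_of("42.000000")

# Characterization test: the code must keep reproducing the pinned digest.
assert sha256_of(scary_scientific_code(21.0)) == EXPECTED
```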

    • abc_lisper 1988 days ago
      > spinning the other way

      Hahhahah. That is the funniest shit I have read in a long time!

  • rollulus 1988 days ago
    At my first gig I teamed up with a guy responsible for a gigantic monolith written in Lua. Originally, the project started as a little script running in Nginx. Over the course of several years, it organically grew to epic proportions, by consuming and replacing every piece of software that it interfaced with - including Nginx.

    There were two ingredients in the recipe for disaster. The first is that Lua comes "batteries excluded": the standard library is minimalist, and the community and set of available packages out there are small. That's typically not an issue, as long as one uses Lua in the intended way: small scripts that extend existing programs with custom user logic (e.g. Nginx, Vim, World of Warcraft). The second is that Lua is a dynamic language: it's dynamically typed, and practically everything can be overridden, monkey patched and hacked, down to the fundamental iterators that allow you to traverse data structures.

    This was the playground for the guy to create his own reality.

    Lacking a serious standard library, he crafted his own. Where a normal-world file rename function, say, would either do the job or return an error to the caller, he chose a different approach. His functions were autonomous, highly intelligent pieces of code that tried to resolve every possible problem on their own, entangled with external logic, so grokking the behaviour of the most fundamental things was challenging - let alone understanding fragments of code composed of library calls.

    Lacking an OO model in Lua, he built his own. I could spend a lot of time describing what was wrong with it, but it suffices to say that each object had SIX different 'self' or 'this' pointers, each with slightly different semantics. And highly entangled with external, unrelated logic, of course.

    I'll save the stories about the scheduler and time series database he built for another time.

    • humanrebar 1988 days ago
      I've seen personal reality building happen in Lua several times already. It's very seductive to intelligent solo artists who are given a lot of freedom.

      To be fair, I've also seen it happen in C, C++, and JavaScript.

      • julianz 1987 days ago
        "personal reality building" is the greatest quote in this whole thread, it's perfect. Will be using.
      • rollulus 1987 days ago
        That's spot on, the guy is indeed one of the most intelligent, knowledgeable and dedicated persons I've ever met. Actually a really good guy.
      • pageandrew 1988 days ago
        Ruby as well. I went down that path once. I learned my lesson.
    • mercer 1988 days ago
      I'm a masochist and would very much like to hear about the scheduler and time series database!
    • abledon 1988 days ago
      “This was the playground for the guy to create his own reality.”

      Cue the Neil Gaiman Sandman-styled comic about his dark adventure

    • italomaia_b 1983 days ago
      Well, OO libraries for Lua were neither popular nor standardized until not long ago, so the default suggestion was actually to create your own OO lib, which many projects ended up doing (mostly in the same way), and which led to some trouble.

      As for replacing Nginx with Lua, that is curious. OpenResty is not being used? No web framework? (Lots of dead Lua web frameworks along the road, by the way.)

      Also, "creating your own reality" is usually a bad thing in any language. That usually happens when you're developing something alone, for long and didn't give maintenance to other peoples code much in the past.

      On another related point: Lua is NOT only meant to be used as glue/extension code. It was designed so that it would be easy to use it that way, but that is more of a "you can" than a "you should".

      Lua doesn't come with batteries included, by design, and doesn't have a plethora of libraries available or (cough cough) easy to pick from, but they do exist and they do solve most problems. Nonetheless, truth be told, much of the good Lua API surface one sees nowadays was not available 1 or 2 years ago, or was not mature enough.

      To conclude, the lack of type hinting or optional static typing in Lua can create problems for bigger projects if good design, testing and documentation are not enforced from day 1. Most scripting languages suffer from this. You guys could try "ravi" to get this (almost) for free.

    • james_s_tayler 1988 days ago
      This is gold.

      Part 2 please???

    • packetpirate 1988 days ago
      We've got a Robert Heinlein fan here...
    • italomaia_b 1983 days ago
      Also, give us part 2 =D
  • hprotagonist 1989 days ago
    Behold, academia.

    The only maintainers of this code, ever, have been grad students and postdocs. I estimate there have been about 12-15 generations worth. This code has supported hundreds of publications in its lifespan.

    A codebase that began life in 1987, in C. First ported to matlab in 1999. First source control was added (as SVN) in 2015. Between 2015 and 2018, there were 6 commits total, yet 3 people graduated out of the lab from it. Probably 100,000 loc total, of which I estimate maybe a third is ever used. 1400-line matlab functions are normal-ish. I've found loops nested 11 levels deep.

    It's a series of psychophysical experiments. Each experiment exists in at least 4 different versions side by side in source, each named slightly different, often by incorrect datestamp of last modification. Version control across machines is not well maintained, so you have to diff everything before you can copy or move files lest you accidentally blow something away completely.

    Oh, and it's mexed and wrapped for use on a mac on exactly one snow leopard machine, hardware from 2007.

    edit: I think this counts as a job, not a student experience, because I am not a student. I just have to clean this mess up once in a while.

    • fao_ 1989 days ago
      Yeah. At this point I think teaching source control and abstraction should be part of the "scientific method" portion of the course
    • mlboss 1989 days ago
      I think it is a problem in general with code for experiments. You just need to change a tiny bit for a new experiment, and you don't want to ruin the earlier experiment.
      • humbledrone 1989 days ago
        There's a very simple solution: guard your tiny bit of code with e.g. a command line flag that defaults to "off", and commit that.

        It's a little more work sometimes, especially when your experiment changes things structurally, but it pays off over and over.
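        A minimal sketch of the flag-guard idea with argparse (the flag name is made up):

```python
import argparse

def run(argv=None):
    parser = argparse.ArgumentParser()
    # The new experimental behaviour hides behind a flag that defaults
    # to off, so the earlier experiment keeps running exactly as before.
    parser.add_argument("--new-stimulus", action="store_true",
                        help="enable the new, experimental stimulus path")
    args = parser.parse_args(argv)
    if args.new_stimulus:
        return "new stimulus path"
    return "original path"

assert run([]) == "original path"
assert run(["--new-stimulus"]) == "new stimulus path"
```

        Committing the guarded code to mainline keeps both experiments runnable from one tree, which also avoids having to merge diverged branches later.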

        • alexbecker 1988 days ago
          Isn't this what git branches are for?
          • nl 1988 days ago
            No!

            There's a great article somewhere about how the normal version control flow doesn't really work for this style of computing.

            You want to keep both "versions" of code live and active in the same place at the same time (often in the same notebook).

            People end up with methods named methodName, methodName2 etc, which isn't very good. But once you see the workflow you understand why normal version control doesn't work either.

            There should be a solution to this, but AFAIK there isn't yet.

          • humbledrone 1988 days ago
            No, branches break down badly when you have many experiments. You end up with a bunch of incompatible versions of the system that you need to merge together later which can be a huge mess (depending on the size of the changes).

            By all means, branches are great for super-prototypey early code, but once you know that you want to keep the ability to run the experiment around, guard it with a flag and merge it into mainline to avoid nightmare merges later!

  • madmax108 1989 days ago
    A customer-facing dashboard. Yes, a dashboard. How bad can a dashboard be, you ask? Well, for one, the dashboard had tabs, and each tab was a separate webapp hosted on a separate server. And each team was responsible for developing and maintaining the webapp it was in charge of (i.e. the User team in charge of the Users webapp, the Feature1 team in charge of the Feature1 webapp).

    Now add the fact that different teams had varying levels of frontend competence. This led to some webapps being in (badly written) React, some in Angular, some in jQuery, and one in Angular 2 as well. Some were Java-API backed, some were NodeJS backed, and one used Ruby in the backend. Oh yeah, and each had a different datastore as well.

    Now add to this that there was no central auth framework, so each webapp had its own way of determining user auth (thankfully there was a shared cookie on the *.company.com domain), which meant there were 6-7 possible login and logout pages as well.

    When the company had a brand redesign and needed all customer-facing stuff to re-align with the new design, we literally had to re-design 6 dashboards that used different paradigms and different tech stacks.

    To this day, the dashboard still exists and is used by customers (the main dashboard for a company valued at $500M+), and the most common issues relate to auth (e.g. random logouts when switching tabs), data inconsistency (there are crons which copy data from one DB to another, but not immediately) and inconsistent design and UI behaviour (since the JS is also different for each app), which pisses off many users.

    To this day, I'm not sure who signed off on this pointless dashboard design.

    • afraca 1988 days ago
      To some extent this sounds like Spotify. I have been told it's basically iframes stitched together, with each frame owned by a team [0]. I do think Spotify has some better auth communication though.

      0: https://www.quora.com/How-is-JavaScript-used-within-the-Spot...

      • lowry 1988 days ago
        Aha! This is the Spotify model!
    • verelo 1988 days ago
      I've run into exactly this issue at a past job: multiple webapps run by different teams that were "tabs", with a central auth system. I turned up, saw this and couldn't believe my eyes. Basically, because teams couldn't agree on working together, they made this horrid attempt at a service-based architecture, and the result was chaos. I know monoliths are not popular, but they have their place in early-stage companies, where moving fast vs. having a trendy design is critical. The company went out of business a few years later...
    • ible 1988 days ago
      Talk about shipping the org chart.
    • CoolGuySteve 1988 days ago
      Sounds eerily familiar to something we had at the investment bank I worked at. I'm guessing this is in finance and these apps are from different departments/desks?
      • lapnitnelav 1988 days ago
        Teams not collaborating is the giveaway.
    • ahi 1988 days ago
      Microservices!
    • quickthrower2 1987 days ago
      Sounds like many resumes were polished at that company.
    • awestroke 1987 days ago
      Sounds like how they do it at Spotify
  • gnulinux 1989 days ago
    We have absolutely no idea how to write code. I always wonder if it's like this in other branches of engineering too. Did the engineers who designed my elevator or airplane have "OK, it's very surprising that this works, let's not touch it" moments? Do chemical engineers synthesize medicines in a way nobody but a rockstar guru understands but which everyone changes all the time? I wonder if my cellphone is made by machines designed in the early 1990s because nobody was able to figure out what that one cog is doing.

    Software is a mess. I've seen some freakishly smart people capable of solving very hard problems writing code that literally changes the world at this very moment. But the code itself is, well, a castle of shit. Why? Is it because our tools (programming languages, compilers etc) are still stone age technology? Is it because software is inherently a harder problem than say machines or chemical processes for the human brain? Is it because software engineers are less educated than other engineers? .....?

    • pavlov 1988 days ago
      It's like that in other fields of engineering too, when they are making something they haven't built before. That's the essential part: for example, a lot of construction is really just rebuilding the same thing that's already been built 100,000 times in the past 100 years.

      When they attempt to build something new, it often ends up like software – tremendous overruns in both cost and schedule.

      C.f. the only new nuclear power plant being built in the West, which is $4 billion over budget and ten years late: https://en.wikipedia.org/wiki/Olkiluoto_Nuclear_Power_Plant#...

      The reason why this feels more commonplace in software is that we're usually designing something new. Software has essentially no reproduction costs, so there's no reason for anybody to design software that's a carbon copy of something you can already download or buy off-the-shelf. That's not the case in engineering of physical products or works. New buildings are needed all the time, even if they're performing exactly the same function as the building in the adjacent lot.

      • augustl 1988 days ago
        > It's like that in other fields of engineering too, when they are making something they haven't built before.

        Exactly this. I like to use an analogy of building bridges :)

          - OK, we have this valley and we want to drive cars over it. What do we 
            do?
          
          - Hmm, we could just make the road along the bottom of the valley
          
          - Wouldn't that cause the cars to fall because of the steep angle?
          
          - Good point. Maybe if we built the road with a lot of twist and turns
          
          - Then it's a slow and long drive.
          
          - Maybe we could build some sort of catapult setup to throw the cars
            across the valley
          
          - That would make safety a concern though. Is there a way we could use
            helicopter rotors to suspend a road in mid air over the valley?
          
          - Or what if we attach the road to each side of the valley and make a 
            road that's strong enough to not crack in the middle under its own weight?
          
          - Yeah, that sounds like a good idea. How do we make sure it can hold its
            own weight, though?
          
          - .....etc
        • majewsky 1987 days ago
          > Maybe we could build some sort of catapult setup to throw the cars across the valley

          This was played for laughs by German satire news website "Der Postillon" (like The Onion, but in German): "Department of Traffic to replace ramshackle bridges by jumping hills"

          Nicely photoshopped picture at: https://www.der-postillon.com/2014/11/lander-ersetzen-marode...

        • exikyut 1987 days ago
          Very interesting point.

          I must admit that when I saw valley in

          > OK, we have this valley

          I immediately thought about SV, and then as I read

          > Maybe we could build some sort of catapult setup to throw the cars across the valley

          I imagined some kind of scenario happening in an alternate reality where a startup was trying to dream up a viable way to actually achieve this in SV.

          It was funny because of the fact that a lot of the ideas people do come up with are probably ideologically very similar in magnitude and ridiculous impossibility.

      • brohee 1988 days ago
        It's not the only one being built in the west, EDF is having tremendous difficulty delivering the other ones. Flamanville ( https://en.wikipedia.org/wiki/Flamanville_Nuclear_Power_Plan...) is also very late, and given that I'm not quite sure why the UK ordered one too (https://en.wikipedia.org/wiki/Hinkley_Point_C_nuclear_power_...).
      • C1sc0cat 1988 days ago
        Or you find that the ground can't support the bridge and it collapses - if you're lucky, you find out before that happens and redesign the bridge.
      • lozenge 1988 days ago
        Most software, including on this thread, is not truly new, or at least can be built from known, reliable tools and patterns.
    • twblalock 1989 days ago
      It's because software that solves the problem the business intends to solve, and can be maintained without unreasonable time and effort, is good enough. Code readability and maintainability problems are not business problems unless they impact developer productivity to the extent that feature development becomes too slow to tolerate.

      After all, if work inside the codebase does not make a difference to the people who use the software, in terms of reliability or features, is it really worth doing?

      The trick is to find a balance where efforts to improve code quality actually improve outcomes for the business and the users -- otherwise it's not justifiable to take time away from feature development, which is what the users actually care about.

      • rabidrat 1987 days ago
        not all software is solving a "business problem". sometimes it's academic, or hobby, or government. but always the code is shit.
    • b_t_s 1989 days ago
      I think it's because software is inherently easier and less critical... castles of shit actually work. Software is fairly unique in that the level of quality and reliability needed to yield significant economic benefit is very low, and the consequences of system failure are rarely severe. That's not the case in most branches of engineering. The complexity levels are also off the charts, in large part because economic value is so loosely coupled with quality. In many cases more turds are more profitable.
    • enjoiful 1989 days ago
      The difference is, an elevator is a precisely defined problem and never needs to have its functionality improved and changed.

      In fact, you could build the exact elevator almost identically. Software is never like that — it’s dynamic and constantly changing throughout the life cycle of the software.

      • Invictus0 1989 days ago
        > an elevator is a precisely defined problem

        Ha! As a mechanical engineer, no, nothing in reality is ever precisely defined. Consider that every single part in the elevator has a tolerance: none of the parts are exactly the same in every elevator. Did you account for thermal expansion? What about wear? Fatigue?

        > Software is never like that — it’s dynamic and constantly changing throughout the life cycle of the software.

        But elevators need to be shipped right the first time and can't make mistakes; with software, it almost never works perfectly no matter how much time passes.

        • gnulinux 1989 days ago
          > with software, it almost never works perfectly no matter how much time passes.

          There is a person inside me -- whenever this is said -- who wants to shout "No! You can prove the correctness of your program!". But this complicates the issue even more, since afaik no elevator's correctness is proven; it just works. Mechanical stuff somehow just magically works without proof, whereas it's still debatable whether proven software works (did we prove the software that proved it?). I don't know what the difference is. Maybe an elevator has a well-defined structure - it goes up and down, opens the door, etc. - even though the implementation can change (as you explained). But maybe software isn't like that. Idk.
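          As a toy illustration of what a machine-checked correctness proof looks like (Lean 4; this one-step "elevator" model is obviously made up):

```lean
-- A toy elevator controller: one step of motion over natural-number floors.
def move (floor : Nat) (up : Bool) : Nat :=
  if up then floor + 1 else floor - 1

-- Machine-checked: moving up from floor f always lands on floor f + 1.
theorem move_up (f : Nat) : move f true = f + 1 := rfl

-- Machine-checked: one step from the ground floor can never go below it
-- (Nat subtraction truncates at zero).
theorem ground_floor_safe (up : Bool) : move 0 up ≤ 1 := by
  cases up <;> decide
```

          Real verified software (e.g. the CompCert compiler or the seL4 kernel) applies this idea at vastly larger scale, which is exactly why it's so rare.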

          • dmurray 1988 days ago
            Mechanical things allow a surprising amount of modularity, compared to software, and therefore make it easy to get test coverage.

            E.g. your cable is rated for 10,000 kg and 6-month inspections, your driveshaft can go 5 million revolutions or 9 months before it needs to be greased...test all those things separately, put them together and you have a stateless system that if it works today will work tomorrow.

          • Invictus0 1988 days ago
            > Mechanical stuff somehow just magically work without proof whereas it's still debatable if proven software works (did we prove the software that proved it?).

            It might seem like magic, but of course there is actually a science to it. Depending on the situation, a part can work whether a certain length is 10.000 or 10.001. In software, that is never the case: a value that should be 10 but is actually 10.001 can stop everything. In engineering there is a limited amount of leeway at every step of the process, and everything is slightly overengineered by some factor of safety to ensure that this is the case. For example, if an elevator cable is rated to hold 10,000lbs, the elevator will be sold as having a maximum capacity of 8,000lbs. In correct usage (weight less than 8,000lbs), there is a sufficient factor of safety on the cable so that it won't break even if the cable is cracked, worn, etc.
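            The factor-of-safety arithmetic in that example works out like this (a sketch; the 1.25 factor is just what the 10,000 vs 8,000 numbers imply):

```python
def rated_capacity(cable_rating_lbs: float, safety_factor: float) -> float:
    """The capacity the elevator is sold with, derated from what the
    cable can actually hold by a factor of safety."""
    return cable_rating_lbs / safety_factor

# A cable that holds 10,000 lbs, derated by a 1.25x factor of safety:
assert rated_capacity(10_000, 1.25) == 8_000
```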

            • pure-awesome 1988 days ago
              I mean, we have that kind of thing in software too, when it comes to e.g. memory usage or runtime (in a sense, anywhere we start hitting physical reality, really).

              What should we make these buffer sizes? There's not really a right answer, but there are definitely some wrong ones. Make it big enough to handle the expected use cases and pad a bit extra. Works exactly like a tolerance in physical engineering.

          • solveit 1989 days ago
            Turns out physical reality is robust in ways code written by mere mortals can only dream of.
            • regularfry 1988 days ago
              Physical reality is linearity and bell curves. And when it's not, you can usually get away with pretending it is.

              Software is... not.

          • adrianN 1988 days ago
            Physical systems work perfectly, until they don't. Roofs and bridges collapse sometimes, even though you'd think we would have figured them out by now. Rockets explode because materials behave slightly differently than the engineer expected.

            Building things is really hard. I don't think software engineering is in any way special. It's just that the problems we solve with software are often more complicated than the problems we solve in meatspace and failure is much more benign so that QA is not done with as much diligence.

      • buitreVirtual 1989 days ago
        Indeed. Writing software is akin to drawing the design of the elevator, not building an individual copy of the elevator. Each elevator is equivalent to one instance of the software deployed to run in one place. The same design/code is used to build/create multiple instances. So when we write new software to solve a new problem, we are designing something new and unique - not another blueprint for the same elevator.
    • eV6ahne6bei 1988 days ago
      > I always wonder if it's like this for other branches of engineering too?

      EE here. It's not. The entry bar is much higher: most of the time difficult projects are given to senior engineers. It takes a degree and many years of work to get there.

      > Or chemical engineers synthesize medicines in way nobody but a rockstar guru understands

      Been there. Not at all.

      > cellphone is made by machines designed in early 1990s because nobody was able to figure out what that one cog is doing

      Same. Part of the reason is that any physical component adds cost, weight, and so on to a project.

      In software you can throw code and random libraries at a problem and nobody will notice until vulnerabilities start to pop later on.

    • booleandilemma 1989 days ago
      We gotta finish these bridge foundations by this Friday, guys. The sprint ends next week and it’s a short week with Thanksgiving coming up.
    • mbrock 1988 days ago
      I think a lot of programming could be described as "formalizing the logic of a business" which is a very strange and interesting problem.

      Programming also involves things like "formalizing and automating the representation of knowledge in general" which is a holy grail of philosophy since Leibniz's time.

      We're always building on top of preexisting ontologies and logics which sometimes fail to even make sense, or which make it tedious to express things that we would want to consider elementary (Unix, TCP, Java, GTK+, SQL, etc).

      And we're always vulnerable to being smacked on the head by yet another "Falsehoods Programmers Believe About X" post detailing the myriad ways in which we routinely falsify and oversimplify to deal with the boundless complexity of actual reality and the horrendous details of legacy bureaucracy.

    • pier25 1989 days ago
      The best code I've written is when rewriting some code from scratch having a clear picture of the solution, or when I solve the same problem a couple of times.

      Solving complex stuff takes a lot of effort, and usually most companies do not have the resources or the motivation to go back and rewrite a product.

    • jonathanstrange 1988 days ago
      Not exactly relevant to the overall discussion but I've been dreaming for a long time about programming an elevator such that it arrives faster and skips other floors if a user frantically pushes the call button repeatedly. Same for traffic lights with push buttons.

      Maybe it's a good thing that I'm not an engineer.

      • exikyut 1987 days ago
        There are apocryphal stories about elevators that let you push/hold buttons to cancel floor selections (eg, https://www.youtube.com/watch?v=eQSdKe5kArA) or skip floors. I often forget to try them, but on the few occasions I've remembered nothing has happened.

        All elevators accept a standardized fire/security key that enables "exclusive" mode that lets you tell the lift "go to this floor" and it will do so: https://www.youtube.com/watch?v=1Uh_N1O3E4E (a reasonable sink of 1hr)

        As for traffic lights I'm aware (at least in Australia) that they all tie back to a realtime system that lets a control room instruct lights to turn green and so forth. (Presumably the immediate action is that the lights that are currently green immediately turn orange, then red after the normal delay.) You can get summarily fired for misusing this though. I've also heard (IIRC on a TV documentary-type show) that ambulances tie into this system with live GPS tracking such that lights turn green as they approach, but this is sufficiently fantastic that I'm waiting to trust-but-verify it before I quote it with confidence.

        Under normal use some traffic lights are traffic-based while others have timers. The traffic-based ones will take into consideration whether the pedestrian crossing button has been pushed and reduce the threshold needed for the actual traffic lights to turn red. The timer ones - like the really annoying ones outside my local mall :) - are on a fixed cycle, and I try to run for that one when I know it's about to turn green, or just accept it and twiddle my fingers while I wait if I miss it.

        It'd be nice if elevator simulators were satisfying to write. Although, actually... I've just realized that with Unity you could have _quite_ a lot of fun :D... (hmm, I've had a GPU on my wishlist for about 15 years now, and now I really want one)

    • croo 1988 days ago
      There is one fundamental difference between software engineering and every other type of engineering: in the physical world, feedback from the rules that govern the outcome is instant and received by multiple human senses.

      If you hit something with a hammer the result and response is instant. You can feel the nail went deeper before you look, you can hear you hit the head correctly before you think about it. You can feel the vibration and hear and feel and see the wood crack before you could read a sentence about it.

      You don't have to compile the hammer and run the hit and read the result from the screen and if you logged the right things you will see something about the result based on what you logged but not quite everything because that would be an unintelligible mess. If you compiled the right version of hammer that is.

      You don't even know for sure if you are holding a hammer. Of course it's not called a hammer but some unique tool you downloaded because they said it is the new best tool of the year - and there are several famous new tools every year. You cannot be sure the tool helps with your job until you try to use it; you don't know if you're holding a hammer or an excavation machine, and both can be suitable for the task, one being slightly bigger though.

    • yarg 1988 days ago
      I don't think it's any of the reasons that you listed. To a large extent, I blame management - they push sub-optimal solutions because that's generally the fastest way to resolve the immediate problem. The result is poor solutions piled on top of each other, becoming progressively harder to maintain.

      This despite the fact that if additional time were allocated to come up with a better solution, tomorrow's software would be easier to integrate on top of today's, and with a higher quality of code.

      But the reason that managers behave like this is because they can - and this I think has more to do with the ethereal and timeless nature of software than anything else.

      In the real world it simply is not possible to put together a physical artifact with the sort of compromised constructs that appear in software.

      There are costs of material to consider; costs of production. There's degradation over time.

      All of these aspects of physical constructs provide a strong motivation to produce a quality product - not out of beneficence, but because (quality engineers aside) it's cheaper and it's really the only feasible option.

    • 52-6F-62 1989 days ago
      >I wonder if engineers who designed my elevator or airplane had "ok it's very surprising that it's working, let's not touch this" moments.

      I honestly sometimes do wonder. Judging by the elevators in my building some things even barely work at times. They go down often and require parts replacements that take weeks to arrive. They're barely 3 years old.

      • krallja 1989 days ago
        > They go down often

        Isn’t that one of the two main features of the product?

        • gnulinux 1989 days ago
          "to go down" means "to be in a non-functional state" as in "Facebook went down". Compare to "to go up" like "servers are up" i.e. servers are functioning.
          • humbledrone 1988 days ago
            I think maybe you didn't get the joke
          • cuddlecake 1988 days ago
            This is the biggest whoosh I have ever witnessed.
    • abledon 1989 days ago
      Your 2nd-to-last one is right, I think. Other forms of engineering are constrained by physical-world rules. In software we have to recreate reality and redefine and model how the real-world system works. Mech eng / chem eng etc. don't have to recreate reality; physics just works and will always work.
    • timdellinger 1988 days ago
      Chemical synthesis of medicine is highly scrutinized and fairly well understood, but there are a lot of other products that large chemical companies make in great quantity that are basically made using Black Magic. Tinker with it until it works, then don't mess with it!
    • forgottenpass 1988 days ago
      We have plenty of ideas on how to write code well. The catch is that they all make development slower and more expensive (at least up front, maybe the technical debt will come back to bite, maybe not).
    • z3t4 1988 days ago
      The larger a code base grows, the worse it will get. Once in a while you get a problem that can only be solved by a "hack". And once in a while you need to make performance optimizations. Then time goes by, things are forgotten, the language changes, the OS get replaced, the machine gets replaced, etc.
    • packetpirate 1988 days ago
      One reason is that there's still a divide in understanding of the requirements to actually solve a problem between management and the engineering team. Unrealistic deadlines are given, and as a result, shit is produced.
    • sonnyblarney 1989 days ago
      Brilliance is no indication of an ability to write clean code.

      In my experience, code needs to be re-factored 3 times before it starts to look coherent and clean.

      But 99% of projects can't justify that kind of expense.

      But good code is very feasible, it just takes time.

      • james_s_tayler 1988 days ago
        I'll expand this further by saying yes to all of the above with the addition that to get it perfect in my experience requires about 7 iterations.
        • sonnyblarney 1988 days ago
          Yup. Diminishing returns on that though. I'm happy with 3. Unless it's API or public code used by a wide audience then it needs a lot of eyes and has to be very clean.
  • mb_72 1988 days ago
    At one stage a company I worked for was considering licensing the code for a school time-tabling application, rather than paying the company to do the (fairly minor) changes we required to meet (non-USA) state requirements. The company was started by a couple of teachers, the same people who wrote their product. It was tens of thousands of lines of Pascal code, but with not a single variable name or function name that made any sense; everything was A, AA, AA1, X, XX, XX2, etc. I spent a few days looking at it, then recommended we keep paying the somewhat steep cost for the modifications. Then at least if anything broke it was on them to fix it.

    Incidentally we had a small falling out with this company, and they were refusing to update their executable until this issue was resolved. This looked like it would affect some hundreds of schools and their timetables. I did some checking, and it turned out their 'non-updated' executable was doing a simple date check on the local PC; if it was past a certain date, the executable refused to run. So I did a quick hack in our application that involved:

    - setting the local PC date to prior to the 'cutoff date'

    - running their executable with the required parameters, and grabbing the results

    - setting the local PC date back correctly

    This led to interesting negotiations, as they were puzzled why their 'gun to our heads' no longer appeared to be working, and things were resolved to the benefit of both parties soon after.

    • meshugga 1987 days ago
      Code obfuscation + blackmail negotiations ... awesome company to do business with.
  • LanceH 1988 days ago
    I took over a Perl project where every SQL call was an exec to a java program which would make the query.

    The largest madness was a J2EE mess where persistence was achieved by taking your current object, sending a message bean to the server, it would touch the database and return the result which was being polled for (making it synchronous). The amazing thing is that the client and the server were the same J2EE instance. So Class A could have just called Class B. Instead it was A -> turn it into a message bean -> send it to the "server" (same machine) -> unwrap it into A again -> transform it into B -> message bean it back to client -> unwrap into B.

    Literally three months of 8 people ripping all of that out and replacing it with A.transform() // returns B
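
    The round-trip described above can be sketched as a toy in JavaScript (all names here are hypothetical; the real system used J2EE message beans). The "server" handler lives in the same process, which is exactly why the whole detour collapses into a direct call:

    ```javascript
    // Toy sketch of an in-process "messaging" round-trip (hypothetical names).
    const queue = [];

    function transform(a) {
      return { b: a.value * 2 }; // stand-in for the A -> B transformation
    }

    // The "server": same process, so this could have been a plain function call.
    function serverLoop() {
      while (queue.length > 0) {
        const msg = queue.shift();           // unwrap the "bean"
        msg.resolve(transform(msg.payload)); // transform and "send back"
      }
    }

    // The convoluted path: wrap A in a message, enqueue it, wait for the reply.
    function transformViaMessaging(a) {
      let result;
      queue.push({ payload: a, resolve: (r) => { result = r; } });
      serverLoop(); // the real system polled for the reply to fake synchrony
      return result;
    }

    // The three-month fix amounted to replacing all of the above with a
    // direct call: transform(a).
    ```

    Both paths produce the same value, which is why ripping the messaging out was safe.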

    Oh, and at the same time, none of this was in source control. It was being worked on by 12 people up until that point. They didn't know who had the current files that were in production. So my first job was taking everyone's files and seeing which ones would compile into the classes being used in production, then checking those into source control. Up until then, they had just kept mental track of who was working on which files.

    • not_kurt_godel 1988 days ago
      Reading this gave me a thought, which is that I would like to see a series of artistic renderings of physical analogies for such sorts of Rube-Goldberg-Machine-esque software monstrosities. This one sounds like it'd be an assembly line where in each step of the process, the piece-in-progress would be packaged in a shipping box and sent through the mail C/O the person manning the next station 15 feet away.
    • C1sc0cat 1988 days ago
      Was there a reason they did not use DBI to call the code ?
      • LanceH 1988 days ago
        I had been told he bragged about it being "job security" since nobody could understand what he had written. I never spoke to him directly as I was his replacement...

        The java mess was a pattern copied from within the company where it was used correctly. The main product of the company was built around some very complex scheduling software. It was a black box and communication was entirely through inter-server beans. This was copied to intra-server, which makes no sense at all, and it was used everywhere.

        • mattkerle 1987 days ago
          "job security" coupled with "I was his replacement" says that his plan worked about as well as his code...
          • LanceH 1987 days ago
            Indeed. It was quite the entry to the company for me as well. "What are you working on?" "Fixing Dave's code." "Good luck."

            But my most auspicious entry to a company was where the guy I was replacing had been arrested for stealing data from a competitor. My first two days were spent recovering logs, downloads and other traces of what he had done and handing it over to the lawyers. (the circumstances were not mentioned in the hiring process)

    • Avalaxy 1988 days ago
      IMHO sometimes it's fine to use messaging within an application that runs on 1 device, purely for decoupling purposes.
    • geekbird 1987 days ago
      "I took over a Perl project where every SQL call was an exec to a java program which would make the query."

      and

      "Oh, and at the same time, none of this was in source control."

      Owwww, that's painful! 12-person spaghetti Perl that did its DB lookups via Java, and none of it in source control?

      Owwww, owwww, owwww....

  • mattzito 1989 days ago
    My first job was sysadmin of a third tier ISP back in the dialup days. The account management and provisioning system that ran EVERYTHING was probably close to 100k lines of csh. Everything was done via a UI that the shell generated as a sort of curses-style interface.

    What was horrible about it was that it controlled everything from who got a website, active domains, what POPs users could dial into, metered billing, you name it. And it did all of this by manipulating flat files of pipe-delimited data on a central server, then rcp’ing those files to the various machines, then rsh’ing to the various machines and kicking off THEIR scripts, which parsed the source files and generated their own files, which called another set of scripts that parsed THOSE files and generated the software config files.

    This included doing things like updating init scripts so that new IPs got added to interfaces, and tracking which email server each user was provisioned on, so it had to generate new exim configs with routing rules.

    All this to say that it all worked, but I dreaded having to go in to manipulate anything. Adding a server at least had a dedicated procedure so that was fine, but anything else was a nightmare.

    Case in point - as part of a gradual plan to remove this nightmare, I swapped out the radius server that they were using for one that could support a database backend, and modified the local config generator script to make a new config for the new software as a stopgap until I could get it into a database.

    The config file had a series of fields that just had numbers in them, and after much digging, it seemed like that controlled whether a terminal dial-in user was presented with a menu of options, and which options. I had to reimplement that logic for the new software, made a mistake, and accidentally removed the option for UUCP for the 10 customers that were still using UUCP. One of them was on an ISDN line and their mailer decided to continuously redial looking for the UUCP, racking up thousands of dollars in carrier-rate charges over the weekend it took anyone to notice something was broken.

    • sytringy05 1989 days ago
      I had a similar job to this when I was a grad.

      I got given an IDE that was written in korn shell to maintain. Not as mission critical as this sounds, but was the only way to edit, compile, link and deploy around 6000 COBOL programs that made up a very large and expensive financial services platform. It also integrated with the SCM (unix RCS!), did checkout, checkin, merging, branching and all manner of amazing things.

      There were probably 30 devs who used it, all running on an HPUX server.

      It was very powerful, but a total nightmare to look after.

    • abledon 1989 days ago
      Wow, this is legendary. I'd love to direct a short film or TV series that revolves around an IT/software team using a massive csh codebase like this. I'd love to shoot some training-montage / diagram sequences of the system being built by the characters, and maybe make some cool Blender / Adobe Premiere overlay screen splits of the high-level architecture as the team references certain aspects of the system.
      • deevolution 1989 days ago
        Make it like the IT crowd, but for a software team. That would be golden.
        • MrDOS 1988 days ago
          I'd take something less fictionally dramatic and more along the lines of reality TV (à la home makeovers/Kitchen Nightmares/Bar Rescue): a team of crack engineers untangling the mess and laying out the best practices for future development. The concept is even ripe for booze sponsorship.
    • ebcase 1989 days ago
      This is incredible. Terrifying, but incredible.
  • Rebelgecko 1989 days ago
    The project was around 10 million lines of code. Some of it was written in a very specific version of Fortran that was a PITA to compile. One fun experience was opening a file and seeing it was created on my birthday. Not just in the sense of matching the day and month of my birthday. The code was literally written on the day I was born.
    • jedimastert 1989 days ago
      I recall rewriting a several-thousand-line program written in a home-rolled preprocessing language that compiled to Fortran. Some random guy who was so far ahead of the game at the time had basically written his own extension of Fortran out of all the macros he'd made, rolled into one thing.

      The rewrite was 100 lines long.

    • dotdi 1988 days ago
      > The code was literally written on the day I was born.

      That made me LOL.

  • pengo 1989 days ago
    Ten years ago I was called in to remediate a new web application which had been subcontracted to an Indian development company. The PHP developers who'd put it together evidently didn't know about classes, and each page in the application was hundreds, sometimes thousands, of lines of spaghetti code, most containing the same duplicated (but subtly changed) blocks providing database connectivity etc. Security had not been a concern either; passwords and credit card details were stored unencrypted in database text fields.

    I was called in because, while most of the application worked, some of the requested features were not yet complete. When I made my initial recommendation (scrap the whole thing and start again) I was told the client's board would not agree to that because of the money already invested and the fact that the board had seen a demonstration proving that "most of it worked".

    It took two developers eighteen months to beat this sorry mess into a maintainable state while ensuring it remained "usable". It would have taken one third of that time to rewrite it from scratch.

    • joeax 1989 days ago
      > most containing the same duplicated (but subtly changed) blocks

      I had a similar experience with an offshore company. This was during the early-to-mid 2000s, at the height of the offshoring era, when $10/hour programmers in India were plentiful and everyone felt their job was soon to be outsourced. Turns out, no joke, they were being paid per line of code.

      Despite having to maintain the heap of crap, I was amazed at the brilliance of their maniacal dark-art ability to implement as little functionality in as many lines of code as possible. It was like code golf in reverse.

      • exikyut 1988 days ago
        What sorts of techniques did they use? I'm very curious to see/hear examples.
        • brianpgordon 1988 days ago

              function isTrue(v) {
                var result;
                result = v;
                if (!isFalse(result == true)) {
                  return result;
                } else {
                  return isFalse(result);
                }
              }
          
              function isFalse(v) {
                var result;
                result = v;
                if (!isTrue(result == true)) {
                  return isFalse(result); // Tail recursion so this is fine.
                } else {
                  return result;
                }
              }
          
              if (isTrue(myBool) == true) { // TODO: Could this be refactored to isTrue(isTrue(myBool))? 
                return true;
              } else if (isTrue(isFalse(myBool))) {
                return false;
              } else if (isFalse(isTrue(myBool))) {
                return false;
              } else {
                return isFalse(isFalse(myBool));
              }
          • strikelaserclaw 1988 days ago
            i hope to god that this is just code you made up to be facetious.
            • brianpgordon 1988 days ago
              Haha, yes I made it up. The infinite recursion should have been a dead giveaway :P

              There's also a bug on line 7 that I didn't notice - it should be isTrue rather than isFalse. I can't edit the comment anymore to fix it.

          • tomazio 1988 days ago
            This is literally insane.
    • aussieguy1234 1988 days ago
      10 years ago PHP didn't have classes
      • cpburns2009 1988 days ago
        PHP 5 was released 14 years ago in July, 2004. One of its new features was OOP. Classes were supported at that time. In fact, a comment from 14 years ago shows an example class [1].

        [1]: http://php.net/manual/en/language.oop5.php#46290

      • rpeden 1988 days ago
        Sure it did. Classes in PHP showed up in version 4.0 18 years ago.
    • abledon 1988 days ago
      Why didn’t you rewrite it from scratch and then just replace the codebase in 1 giant commit ? Seems doable if you have the balls
    • edoo 1988 days ago
      I had a very very similar experience. I saw the craziest code in my entire career while debugging performance issues. Even though they were using a PHP MVC framework, they were pulling every record from a db table to iterate over to find the record using PHP string compare functions. I still can't believe it. The dev shop I worked for back then was even in the habit of hiring multiple teams for the same project in the hopes one actually completed it and it was still cheaper.
      • Rjevski 1988 days ago
        > pulling every record from a db table to iterate over to find the record using PHP string compare functions

        As horrible as this sounds, this is actually good from a refactoring point of view because it should be straightforward to rewrite it to use actual queries.
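
        A minimal sketch of that refactoring (hypothetical table and field names; an in-memory array stands in for the database table):

        ```javascript
        // Hypothetical rows standing in for a database table.
        const users = [
          { id: 1, email: "a@example.com" },
          { id: 2, email: "b@example.com" },
        ];

        // The anti-pattern: fetch every row ("SELECT * FROM users"), then
        // compare strings in application code.
        function findUserSlow(email) {
          const allRows = users.slice(); // full-table fetch
          for (const row of allRows) {
            if (row.email === email) return row;
          }
          return null;
        }

        // The refactor: push the predicate into the query layer -- with a
        // real DB this becomes "SELECT * FROM users WHERE email = ?".
        function findUserFast(email) {
          return users.find((row) => row.email === email) ?? null;
        }
        ```

        Both return the same rows for the same inputs, which is exactly why the rewrite is mechanical.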

        • edoo 1988 days ago
          Perhaps technically true, but the entire point of the MVC is you can do simple record retrievals with a single line of code using auto generated active record models. One line vs a monstrous dirty function isn't acceptable.
        • exikyut 1988 days ago
          D':
  • dustinmoorenet 1989 days ago
    I used to work on hospital lab software.

    * It was over 20 years old by the time I started

    * It was written in Fortran

    * Variable names were single and double digits

    * Each fortran program would run in isolation but had a shared memory process

    * It was formerly a terminal program, but a weird Java frontend was created so everything looked like a Windows GUI

    * All program names were four letter acronyms

    * All data was stored in fixed width binary "flat" files

    * It was previously under CVS version control, but each install slowly drifted apart, so each site had its own unique features and bugs.

    * I once had to move a significant feature from one install to another using only patch files generated from the work done on the original install.

  • ohthehugemanate 1988 days ago
    The worst for me was a rescue project: a site for a US tech sector Public-private partnership. Nothing too complicated: recurring donations, paid events, a small members area. They had sent it to an Indian firm to build in Drupal 7 - not a lightweight system to begin with.

    I would like to say "cue the stereotypes for Indian developers" and we could all have a good laugh. But no. This is more like Heart of Darkness. They must have traveled to the darkest corners of the subcontinent to find a mind capable of the eldritch horrors we found there. We started keeping a wiki of design patterns, to save us WTF time. Here are a few.

    * Memory management. What's that? This site required 16 GIGABYTES as a PHP memory limit in order to load the front page for an anonymous user.

    * Security. What's that? Part of the reason it required so much memory is that it would include a complete debug log of the site's entire transaction history, including PII and credit card numbers, in every session. Meaning any vulnerability was a critical vulnerability.

    * They would arbitrarily include the entirety of Zend framework to access a single simple function. This happened several places in the codebase, with different versions of Zend committed.

    * Can't reach the ERP to get the price for an event? Let's set it to $999,999.00 and proceed with the transaction.

    * Invoice numbers were random numbers between 1-1000. Clashing numbers meant a fatal exception that would fail to store the invoice or payment record... But not until after payment had been processed. Birthday paradox means this happened a lot.

    * The developers used arcane bits of Drupal API to do totally mundane things. Like, if you know about hook_page_alter, you know there's a setting in the UI for the frontpage. But we'll just use hook_page_alter instead.

    * Write queries using Drupal Views, rewrite the query in code, override the views query with their custom (identical) version using an unusual API hook, just to add a sort.

    I could go on, but I think you get the picture. Eldritch horror of a codebase.
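
    For the invoice-number bullet, the birthday-paradox arithmetic really is that bad. A quick check (a generic sketch, not their code):

    ```javascript
    // Probability that k invoices with random IDs in 1..N collide at least once.
    function collisionProbability(k, N) {
      let pAllDistinct = 1;
      for (let i = 0; i < k; i++) {
        pAllDistinct *= (N - i) / N; // each new invoice must dodge all earlier IDs
      }
      return 1 - pAllDistinct;
    }

    console.log(collisionProbability(38, 1000));  // > 0.5: even odds by ~38 invoices
    console.log(collisionProbability(100, 1000)); // > 0.99: near-certain by 100
    ```

    So with IDs drawn from 1..1000, a failed payment record is a coin flip well before the 40th invoice.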

    • Avalaxy 1987 days ago
      Holy crap. You already earned yourself an upvote for this sentence:

      > * Memory management. What's that? This site required 16 GIGABYTES as a PHP memory limit in order to load the front page for an anonymous user.

      • andyhasit 1985 days ago
        I'm tempted to log out and create 5 new accounts to upvote each bullet point :-D
  • lucozade 1988 days ago
    I used to own a C++ application that was a morass of abstractions and indirections so it was impossible to reason about. It took a number of hours to compile.

    On one infamous occasion we were making a relatively small patch release. The debug version worked fine but the release version crashed systematically. Even when we backed out all the changes we saw the same behaviour. We were screwed.

    Until one of the team had a bright idea. She stripped strings from the debug build and tested it. To our surprise it not only worked, it was only slightly bigger than the previous release version and it was also slightly faster! We shipped.

    This experience was the trigger to make me go all-in on a full re-write that I had been contemplating. One of only a couple of times in my career that I've made that decision on a major piece of software.

    The re-write was a huge success. It was also about 10% of the original in terms of LoC. The day our testing finished, we held a ceremony where we deleted all the old code from the current version.

    This caused a slightly different issue. At the time, code metrics were starting to get fashionable but LoC wasn't yet the pariah it became.

    So, a couple of days later I got a concerned call from the metrics guys. Apparently, we had deleted more code than all the other teams combined had added in the previous measurement period. This caused their metric calculation to barf. Their solution? We should add all the code back in! This led to a somewhat heated argument that ended up with me persuading them that deleting code was good and they should, at least, abs(LoC) it. It didn't make the metrics any more useful but meant that we had an application we could reason about. Happy days.

    • shoo 1988 days ago
      > Apparently, we had deleted more code than all the other teams combined had added in the previous measurement period. This caused their metric calculation to barf. Their solution? We should add all the code back in!

      ah yes, when bad metrics become targets

  • zhengyi13 1989 days ago
    From 10+ years ago when I worked at PayPay, webscr (the frontend at the time, where you'd log in) was a total of 2GB of C++ that would get compiled into a 1GB CGI executable, deployed over some ~700+ web servers.

    Debug versions would never get compiled, as I'm told the resulting file was too large for the filesystem to handle.

    Apparently a great deal of the code was actually inline XML.

    They knew this was a bad pile of technical debt, too: at one point, a senior/staff engineer gave a company presentation where they brought a fanfold printout of the main header file for this monstrosity, and literally unrolled it across the entire stage.

    • cgijoe 1989 days ago
      I think you meant to say PayPal?
      • zhengyi13 1988 days ago
        Oh whoops, thank you, yes!
      • heroic 1988 days ago
        PayPay is a Japanese payment company
    • busterarm 1989 days ago
      Job security.
  • thepp1983 1989 days ago
    I've worked on a CMS that was partially done in .NET and IronPython, and used an XSLT templating system to generate HTML for the front end.

    The architecture looked like something from the early Java days.

    The system used IronPython / C# in the following way:

    1. A web request would hit the CMS.

    2. There was a massive switch statement to work out how the query would be rewritten.

    3. If the url was prefixed with processor, it would attempt to find the processor in the db.

    4. The code would then find the python script associated with the processor.

    5. The processor would then spin up a command line instance in a hidden command window on windows server.

    6. The processor would have to return XML that had to be built up using strings (no element tree for you).

    7. This would return the XML to the C#, which would then try to render it into the XSL Transform.

    If at any time this failed: silent failure. There was no way to debug easily (there was a magic set of flags that had to be set in Visual Studio, otherwise you couldn't debug the python scripts).
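
    The string-built XML from step 6 is a classic source of exactly those silent failures. A tiny illustration (hypothetical field names, with JS standing in for the IronPython):

    ```javascript
    // Building XML by string concatenation: nothing complains at build time,
    // and the downstream XSLT just fails silently on malformed output.
    function itemXmlByHand(title) {
      return "<item><title>" + title + "</title></item>"; // "&" or "<" in title breaks this
    }

    // Even minimal escaping removes that whole class of silent bug.
    function escapeXml(s) {
      return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
    }

    function itemXmlEscaped(title) {
      return "<item><title>" + escapeXml(title) + "</title></item>";
    }

    console.log(itemXmlByHand("Tom & Jerry"));  // not well-formed XML
    console.log(itemXmlEscaped("Tom & Jerry")); // <item><title>Tom &amp; Jerry</title></item>
    ```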

    To get the software to build on a new machine, it took a contractor 4 months to reverse engineer an XP machine. None of it was documented anywhere.

    It used ImageMagick to generate thumbnails on the fly, which doesn't work too well with Windows Server.

    The lead engineer was an alcoholic. He used to go to the local pub for 4 hours in the middle of the day and come back smelling like a brewery.

    • goostavos 1988 days ago
      I actually laughed while reading this. I've encountered something eerily similar, but it was a Ruby app. Almost exactly your steps 1-3, however, what was returned from the db were ruby method names, which were then invoking other pieces of the code through some deep, twisted ruby magic.

      Uncovering what the hell was actually happening was like peering into the mind of a psychopath.

      Once myself and 3 other engineers spent 4.5 hours trying to figure out how to send an email from this treacherous app (a feature which had stopped working months before we showed up). After those 4.5 hours, none of us even came close.

      To top the whole thing off, page load times were in the minutes. The standard joke from the users was that they "get to take lots of coffee breaks."

      Much to the (new) manager's credit (and my sanity), we got buy in to just build a new version and let the old one die. I left the team shortly after the 2.0 beta launch, but God I wish I would have stuck around a little longer so I could have seen the official end of life for that calamity of an app.

      • thepp1983 1988 days ago
        In a weird way I kinda like these totally mental systems. It really ups your skill level for debugging if nothing else.

        I've worked with lots of proprietary CMS systems and I kinda got into a groove with working with them. I knew exactly how to manipulate the system within the parameters of said system.

        I kinda found it challenging. When a system is well built, I find it boring.

    • antoineMoPa 1989 days ago
      When you are just above the Ballmer peak.
      • thepp1983 1988 days ago
        We had a developer meeting a few weeks before he left the company and he said "I designed the system in the pub" ...

        I like my beers but this guy was on another level.

  • TheRealDunkirk 1988 days ago
    I wanted to tackle a rewrite of a legacy LISP program, used by AutoCAD, to take in a list of 3D coordinate points, and output a DWG of a 2D projection (from which to create a jig on which to assemble a prototype). Cool stuff, right? My intent was to create a "proper" script in AutoCAD to actually create 3D objects from the input data, and let the program do the projection to a 2D drawing. (I'm still not sure if that would have been possible; I never got that far.)

    I had never written in LISP before, so I bought a book!

    I was having trouble reading the program on the computer, so I printed out the roughly 15,000 lines. Not a lot, I know, but it was about an inch and a half thick stack of paper. I started going through it.

    It consisted of LOTS of subroutines. Thousands. Each one neatly formed; no more than a couple hundred lines. It gave me hope.

    It read in the text file, created a blob of a string, and then passed that blob to the first subroutine. Then it passed to the next subroutine. And then the next, and so on. As far as I could tell, it never called a subroutine a second time, and it never returned to the starting method. Given the strangeness of LISP, I couldn't figure out what it was doing, or why.

    The guy who wrote it had retired, and we didn't get along anyway, so I didn't try to chase down what his thinking was.

    I gave up.

    To my knowledge, they're still using that program 17 years later.

  • le-mark 1989 days ago
    Bad is not a good measure. But taking 'bad' to mean large, convoluted, zero documentation; I was once at a financial services company that had a 25 million+ LOC mainframe COBOL application that had been under active development since 1969. This was a batch and CICS system. It was spaghetti on every level; database (db2, vsam, isam), the screens of the app, the batch jobs, the COBOL. It was truly astounding. It was also the source of about $500 million in revenue. They were doing software as a service in the 1970's. It's still going today. Customers in that space don't have many options.
    • pmarreck 1989 days ago
      Sounds like a market opportunity. What vertical market is this, more specifically?
      • AlexCoventry 1989 days ago
        The only homeless former software developer I ever met had once been a cobol specialist.
      • le-mark 1987 days ago
        The domain is mutual fund accounting. The system in question had grown organically over decades to encompass everything related to mutual funds; account record keeping, shareholder statements, broker dealer recording keeping, commission payments, cost accounting, you name it, they do it.

        I agree it is an opportunity, but the barrier to entry is very high. Reaching feature parity is a multi-year project with a large team and domain experts.

      • vaylian 1988 days ago
        Market opportunity? Sounds more like the best way to throw away your sanity. Some things are just beyond repair.
        • pmarreck 1988 days ago
          Since when is a greenfield app on a new product addressing a market with only old entrenched players “throwing away your sanity?”
          • vaylian 1987 days ago
            Sorry, I did not realize you meant to create a new software from scratch. I thought you were talking about supporting/extending/refactoring this old pile of madness. Thanks for the clarification!

            I suppose you were referring to the statement "Customers in that space don't have many options". Your statement is reasonable and there might be market opportunities to create new software. But I somehow have the feeling that it would take a tremendous amount of time to recreate something that checks the same boxes as the old system. But I do not work in that sector so my judgement might be totally off.

            • pmarreck 1987 days ago
              Thing is, a competing product does not (at launch) have to match the entrenched one feature-for-feature. It just has to tick the MOST IMPORTANT boxes while being advantageous in other ways (such as being built on FAR newer tech, more reliable etc.)
              • vmchale 1985 days ago
                Being built on newer tech isn't an advantage from a customer POV. It's only an advantage when the newer tech is better, and tbh I have a hard time believing that you'd match the performance and security of a mainframe that easily.
                • le-mark 1982 days ago
                  Being an order of magnitude cheaper would be compelling to these same companies though.
      • vmchale 1985 days ago
        It's only a market opportunity if the software is bad from a customer POV.

        There's a lot of engineering that goes into/went into mainframes.

    • cntlzw 1989 days ago
      What are the chances of rewriting this castle of shit? As a developer I would hate working on that pile of garbage, but judging from management's perspective? Well, if it works it works. Being pragmatic ain't that bad.
      • wirrbel 1989 days ago
        With that kind of stuff, clients usually also depend on defects and accidental behavior, and they are not happy touching their own client code integrating the service with their systems, because it is of the same quality as the service.
    • pier25 1989 days ago
      Sooo... what space is that?
  • Softly 1988 days ago
    In my first year of university, I wrote a Java Swing game which had classes for literally every part of the GUI (I was young, carefree and unwilling to yield on the principles of OO I'd been taught earlier in the year). I think it had somewhere in the region of 100 classes, of which only about 3 had the real logic in them.

    Now, to put it in perspective: I went on to an intern position between years 2 and 3 of uni. I was handed a lovely piece of code which had:

    - Around 300 classes

    - 3 or 4 layers of nested generics

    - Factories, factory factories, generator factory factories

    - 90% of the parameters were passed in from the build engine running the code, so it was impossible to run locally, ever.

    - 0 tests

    - Some 100 pages of documentation, which had been lost until I was about halfway through my placement (and mostly documented how to set up and run it, not how to maintain it)

    Seriously, this thing was designed to the extreme, made to be generic to every single scenario in existence.

    So what did it do? It took items from a customer facing system, transferred it onto the internal work tracking system. Then when they were updated in the modern system, mirrored the relevant updates back to the legacy.

    The best part? Every time the internal work tracking system updated (once every 6 months), this thing broke horribly and it was practically impossible to fix. Even if you managed to set up stuff so you could work in a development environment, it still connected to the customer facing system, so you had to be incredibly careful what you did during testing.

    It wasn't the biggest in terms of LOC, but it astounded me just how much effort (apparently the guy who wrote it squirreled away for a year to write it, then moved to Canada, and was famous in the department for having one too many beers during an outing to a local Indian) went into designing this behemoth.

    I still have the occasional nightmare about it!

  • Rjevski 1988 days ago
    I once had to look at a client’s code to determine if/how we’d go about taking over their application. Their only developer threatened to quit and this is when they realised it would be best to outsource this and reduce the bus factor.

    It was a huge folder (not repo - and there were zip files of different “versions” of the code in there). The main monster was a huge Visual Studio solution with hundreds of targets, one would be an application for entering some data, the other was for entering data from a hardware device (a scale if I remember right), etc.

    The main source of truth was an MSSQL database to which all these apps would connect as root. There was no backend as such to ensure access control & consistency, and any misbehaving app could essentially trash the entire DB.

    Database credentials were hardcoded in every app’s main entrypoint, with earlier “versions” of the credentials commented out below.

    I thought that surely these must be either staging DBs or at the very least there would be network-level access control meaning the DB wasn’t accessible from outside... but no - I managed to connect to their production DB as root from a random, untrusted location. I do not know if MSSQL uses encryption by default but I would bet good money there was none and they were essentially connecting to their DB as root, over plaintext, from hundreds of different locations across the country without any kind of VPN.

    In terms of code you obviously have your standard & expected “spaghetti monster” with UI & business logic scattered everywhere. What struck me the most was an empty exception handler around the main entrypoint.

    In the same folder there was also source for an iOS app. Didn’t look at it but I don’t see any valid reason why this should be in the same place as the Windows apps.

    Thankfully I no longer work there, and even if I still did, I have no major C# experience (which gives me a very convenient excuse not to touch this mess).

    • HeyLaughingBoy 1988 days ago
      Ha!

      Are you me? :-)

      Had almost the same experience, minus the database. Friend of the owner wanted to buy a company and asked us to evaluate their code to see if it was maintainable enough to add new features. I got a zip of hundreds of firmware projects each representing a different version. They were all on the same basic platform but with different hardware features #ifdef'd, or customized for a particular customer. The code itself wasn't that bad (not that good either!), but their developer clearly had no idea what Version Control meant.

      In the end I gave the thumbs up and he bought the company, then ended up having to redesign the product from scratch since much of the originally designed-in components were no longer available. He did his Due Diligence for the software, but ignored the hardware side!

    • delta1 1988 days ago
      > In the same folder there was also source for an iOS app

      A true monorepo

  • 8fingerlouie 1988 days ago
    I worked for a company that makes sortation devices (conveyors etc), and I inherited an old product from a previous colleague.

    The software was largely a standard base with modifications done for the individual customer to suit their business needs.

    In this particular project, it was a batch sortation, meaning we received a large batch file from their mainframe, which would then be parsed and executed by the controller software.

    Everybody feared this particular project, and estimates on new functionality were sky high, but I didn't think much of it until I had to modify the batch parsing code. I was met with 22,000 lines of C code in a single function. This single function was modified multiple times each year, usually adding more code in the process.

    It took me the better part of a month to refactor it into "manageable" chunks, and in the end I was left with a 1700-line function that was still "too big", but nobody really understood how it worked, and we couldn't test it, so I just left it at that.

    After my refactor we could implement new functionality somewhat faster, but in the end it was still a very complex algorithm, so despite being split into smaller functions, you still had to be very careful when modifying it.

  • badjobthrowaway 1988 days ago
    (I am a regular of HN. I don't want this attributed to my acct.)

    Previous job. Was sysad. This code runs most of the academic US internet.

    Everything was Perl5. There were over 150 different major applications written in it, ranging from 40 lines to 500k lines. The older the most recent commit, the worse it was. Touching any of this would cause errors, either in itself OR in associated applications! You'd be working on thing A, get it working well, and 4 weeks later thing B would fail horrendously.

    The worst was a tie. The first contender was a variable holding a 6-line SQL query, packed to the brim with function calls that ended up expanding the query to something like 50 lines.

    The other contender was a gem in code that hadn't been touched for 7 years. This wasn't at the top of the file, or even in a config file. It was hardcoded in the middle of the perl5 program...

         $db_server = '(servername)';
         $db_user = 'root';
         $db_password = '(password)';
    
    Other dishonorable mentions are as follows:

    1. No primary keys for the main database....

    2. Goober had the idea of storing pictures in said MySQL database. 70 GB of pics...

    3. Redhat 4 still in production, along with RH 5.

    4. It's everyone for themselves. The goal is to hobble it along just enough for the next on-call. Let them get hit with it.

    5. Running iron from 10 years ago. Contracts pull in $$$, but you're dealing with paleo-datacenter crap

    6. Just retired LTO3 tapes. Now they have "shiny" LTO5....

    • zimpenfish 1988 days ago
      You know what's terrible? This could almost be any one of about three of the Perl contracts I've had in the last decade (apart from the US part, obvs.)
      • badjobthrowaway 1988 days ago
        Yeah, the more I dig around how perl shops work, the more this seems endemic.

        Now, I work in a heterogenous Windows/Linux shop with non-paleo hardware. We're even deploying on AWS in a limited fashion.

        This place has its warts too, but everywhere has something. But that previous place... I'm surprised it still hasn't crashed and burned. Their networking was solid tho. Just anything with Linux was a tire-fire.

        And as one more gem, there was a program that terrified the fuck out of me. A fellow engineer showed me this update tool that would update a router remotely. Great. Well, if you add the -n (customer) flag with the customer number, it would update all the routers for that customer!

        It was spectacular, and terrifying at the same time. I asked them for their testing procedure, and it was a -t (customer-testing). If they forgot the -testing, well.....

    • hjek 1988 days ago
      > 2. Goober had the idea of storing pictures in said MySQL database. 70 GB of pics...

      Is that really a bad idea? The pics would not be any smaller outside the database.

  • binalpatel 1989 days ago
    At a former employer, someone created a framework meant to be a generic way to process and load data (i.e. ETL). It was written with the best of intentions, but was terrible.

    This framework was given to an offshore team, and they were told to use it to farm out hundreds of requests. The framework was inflexible enough that they each started adding snippets here and there to make it work for those projects, with no code review.

    When I joined there were well over a hundred different projects, all using this framework at the core, most having little bespoke tweaks to make the code work. Every failure was essentially a new failure in a new way.

    It was a useful experience in that it taught me by negative example: this was the worst case I've seen of "roll your own" resulting in incredible technical debt.

    • CoolGuySteve 1988 days ago
      Had something similar happen to me. I gave a new guy a task of 'figure out luigi or airflow to help us process nightly data better than crontab'.

      He came back with a custom python/c++ framework where literally every function was called handle(), and in c++ it used type inference to figure out which handle() to call.
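      In Python terms, the anti-pattern reads roughly like functools.singledispatch gone feral: every operation is named handle(), and only the argument's type tells you which body runs. A minimal sketch (all names below are hypothetical, not from the actual framework):

```python
from functools import singledispatch

# Every entry point is called handle(); the argument type alone
# decides which implementation runs (hypothetical job types).

@singledispatch
def handle(job):
    raise TypeError(f"no handle() for {type(job).__name__}")

class NightlyLoad:
    pass

class Backfill:
    pass

@handle.register
def _(job: NightlyLoad):
    return "ran nightly load"

@handle.register
def _(job: Backfill):
    return "ran backfill"

# At the call site, nothing tells the reader what actually happens:
print(handle(NightlyLoad()))
print(handle(Backfill()))
```

The dispatch works, but every call site looks identical, which is exactly why this style is so hard to trace.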

      Apparently this is what he wrote for his last firm.

      We quickly found something else for him to do but he didn't last much longer after that anyways.

  • andai 1988 days ago
    My own! For a university assignment we had to make a simple Excel clone as a group.

    We divided the work and I ended up working on the formula parser. I spent a week thinking about it and couldn't figure it out (I wanted to work it out from scratch). Eventually I had a flash of insight: I know how to parse simple formulas, so I can use string replacement to recursively rewrite a formula until I can parse it.

    By the time I had written all of it, I didn't understand how it worked anymore, but it did work!

    FormulaParser ended up being longer than the rest of the codebase combined, and I eventually learned the other groups did it with a regex and ~50 LoC...
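    For comparison, here is my reconstruction of the small regex approach (a guess at what the other groups did, not their actual code): collapse the innermost parenthesized subexpression first, then fold the remaining flat arithmetic left to right by repeated string rewriting.

```python
import re

NUM = r'-?\d+(?:\.\d+)?'

def _fold_flat(expr):
    # Evaluate a parenthesis-free expression: * and / before + and -,
    # each pass folding left to right via repeated rewriting.
    for ops in ('*/', '+-'):
        pat = re.compile(rf'({NUM})\s*([{re.escape(ops)}])\s*({NUM})')
        while (m := pat.search(expr)):
            a, op, b = float(m.group(1)), m.group(2), float(m.group(3))
            val = {'*': a * b, '/': a / b, '+': a + b, '-': a - b}[op]
            expr = expr[:m.start()] + repr(val) + expr[m.end():]
    return expr

def evaluate(formula):
    expr = formula.lstrip('=')        # allow spreadsheet-style "=1+2*3"
    paren = re.compile(r'\(([^()]*)\)')
    while (m := paren.search(expr)):  # collapse innermost parens first
        expr = expr[:m.start()] + _fold_flat(m.group(1)) + expr[m.end():]
    return float(_fold_flat(expr))
```

Plain arithmetic with parentheses stays tiny like this; cell references and built-in functions are what make a real spreadsheet parser balloon.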

  • jacquesm 1989 days ago
    Most large enterprise systems that are more than a few years old tend to be pretty bad, and almost all of them work just fine, as long as you respect the 'dragons be here' signs and you don't attempt to fix that which isn't broken.

    Millions of lines of code (or even tens of millions) are really not all that exceptional. The original programmers are typically in some home or are pushing up the daisies.

  • letientai299 1988 days ago
    I came here about to complain about some projects at my previous companies. But reading this thread makes me feel ashamed for getting angry at them, compared to how much more pain you guys went through.

    Thanks for all the sharing. Young devs (like me) should really read and appreciate this thread.

  • logfromblammo 1989 days ago
    My previous job. More than a million lines of code for a glorified CRUD app, with more than a $7 million annual budget.

    Accruing technical debt was a process feature. More bad code that everyone is afraid to touch means more budget for terrified developers and testers, and insane networked database design means more budget for servers and sysops. The fear leads to meetings, the meetings lead to suffering, and suffering leads to the dark side. It still works according to spec, and is human-fixed quickly whenever it doesn't, but the poor quality of the codebase is likely costing at least $2 million per year.

    • abledon 1989 days ago
      Read that last paragraph in the emperor’s voice from star wars episode 6.
  • farhanhubble 1989 days ago
    More than a million lines with a single file containing more than 100,000 lines of spaghetti code consisting of macros and badly indented C code in a single function! This powered the GUI for an entire generation of low cost phones (pre Android era).
  • overcast 1989 days ago
    I'm guessing by bad, you mean ugly. I'm fairly convinced there is no "good" code, or that it's such a minuscule subset that you'll never likely encounter it in your career. Every year I look back on last years stuff I've written, and I can find ten ways to clean things up.

    In my opinion, if it works, and provides the utility it was designed to bring, then it doesn't matter. If it makes money, then who really cares!

    • nathan_long 1989 days ago
      > In my opinion, if it works, and provides the utility it was designed to bring, then it doesn't matter. If it makes money, then who really cares!

      This makes sense. However, if the code 1) is hard to understand (according to the developers available) and change and 2) needs to be changed, it costs money.

      Eg, read about the expense banks are now incurring trying to maintain COBOL systems. Whether the code is "bad" is debatable. But the fact is that they have a hard time finding people who can work on it.

      • overcast 1989 days ago
        Sure, but you're acting under the premise that "if we just did it 'right' the first time, we wouldn't have this mess". What I'm saying is that only under very few circumstances does it ever work out that way. Particularly with long-standing systems and their software. It just builds up over years; there's nothing you can really do about it.
        • nathan_long 1989 days ago
          > Sure, but you're acting under the premise that "if we just did it 'right' the first time, we wouldn't have this mess".

          I think you're right that every long-lived code base will have warts. And I don't think that means that the original builders were wrong-headed.

          But if you've got a decades-old system that nobody understands anymore, you've got a huge liability. You can't ship features to compete, you can't fix bugs, you can't comply with new regulations. You can't even rewrite confidently because you don't know what the old system does.

          There must be things you can do as a code base ages to keep it maintainable, allow incremental rewrites, etc.

          • ilovetux 1988 days ago
            You may not want to reinvent the wheel, but you have to change the tires.
    • koboll 1989 days ago
      >I'm guessing by bad, you mean ugly. ... In my opinion, if it works, and provides the utility it was designed to bring, then it doesn't matter. If it makes money, then who really cares!

      "Ugly" is subjective; I'd define "bad code" as "difficult to reason about". If you're introducing someone new to the project, how long do they have to stare at it until they can grok it? Good code is code that makes sense, where things are documented and clearly named and encapsulated and have a flow that makes them understandable.

      This doesn't necessarily mean it isn't ugly. Often, it's uglier than "clever code" which does something succinctly but less obviously.

    • sooheon 1989 days ago
      > If it makes money, then who really cares!

      If it makes money now, you might care about continuing to make money in the future. Being able to reason about and change your product should help with that, but so will more money to throw at it, as this thread shows.

  • itsreallyme 1989 days ago
    300,000 lines of Python…

    The company had bought Ellington, a Django-based CMS, but the team basically rewrote the entire thing using multi-table inheritance (unintentionally), so everything in the database had two copies, and we had over 70 tables, hundreds of gigabytes, disasters every week and tons of bugs... I discovered this more than a year later and nobody was even aware. Even the DBA wasn't aware the tables were duplicates.

  • dfinninger 1988 days ago
    Not the largest, but the most insane.

    At an old employer there were 15,000 lines of batch script across 14 .bat files on a Windows laptop. Old director of IT used it to onboard new customers. It basically copied a DB and turned "CHANGE ME" in some columns to the client's name.

    It had it all: 5k lines of date validation, 3k lines of "UI", 400 goto statements, hard-coded passwords, versioning by incrementing the file names (leading to a bunch of code that was never called), and to top it off, a static IP granted to the laptop that was used as part of authentication.

    Took me two weeks to unravel it and replace with ~20 lines of Ruby.
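    The actual rewrite was ~20 lines of Ruby; a comparable Python sketch looks like this (the table and column names are made up, and sqlite3 stands in for the real database): copy the template DB, then fill in the client's name wherever the template says "CHANGE ME".

```python
import shutil
import sqlite3

def onboard_client(template_db, client_name):
    # "Copied a DB": duplicate the template database file per client.
    new_db = f"{client_name}.db"
    shutil.copyfile(template_db, new_db)
    # Then substitute the client's name into the placeholder rows
    # (hypothetical 'settings' table with a 'value' column).
    conn = sqlite3.connect(new_db)
    with conn:  # commits on success
        conn.execute(
            "UPDATE settings SET value = ? WHERE value = 'CHANGE ME'",
            (client_name,),
        )
    conn.close()
    return new_db
```

The point of the anecdote stands either way: the whole job is a file copy plus one parameterized UPDATE.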

    Later, all of my complaining on Facebook led an old professor to invite me back to give a talk on the importance of code quality!

  • neya 1988 days ago
    A while ago, I was tasked with maintaining production code written in R by an enthusiastic junior developer. He loved R so much that it blinded his ability to use the right tool for the job.

    Instead, he wrote web applications in R rather than Python or Ruby, in which my company had many developers with expertise, and eventually handed it all over to me. He even persuaded our bosses to invest in RStudio Server and had an instance installed on one of our machines. It's not only the choice of programming language that made me furious, it's also the quality of the code. He mixed snake case and camel case variables all over the code. In addition, the same name would refer to different things, eg. `abc` and `Abc` and `a_bc` would mean totally different things. And stuff that could have been written as a simple Sinatra or Flask application was written in R Shiny.

    As a non-R person, I quickly learnt the language (while mentally cursing it all the way for its bad design choices and its terrible, inexplicable syntax), but getting used to this bad code was quite a challenge. We had several top-tier clients whose reports were critical and reliant on this R code, and it would frequently, randomly fail while maxing out on memory, no matter how much you threw at it. Debugging was another issue, and I struggled with this codebase for 8 months while the junior developer moved on to other technologies.

    Eventually, my main role almost switched to devops, which I hated, because I enjoyed writing web applications and good code that doesn't require much maintenance or devops. In the end, I realized I couldn't take responsibility for this anymore, as it would cost me my reputation, and I really didn't like the way the company handled the situation either. They were quite supportive of the junior dev, encouraging him to move on to newer technologies while he half-assed everything and threw it onto the heads of other people who already had their own responsibilities. They did this so that they could show off at meetups ("We use the latest tech stack... blah blah") while adding 0 value for clients.

    So, I quit the company, along with a dozen others and never looked back. But, I did learn quite a lot..my my.

    • hjek 1988 days ago
      > In addition, the same name would refer to different things, eg. `abc` and `Abc` and `a_bc` would mean totally different things.

      That's not too unusual. In Java `Camel` would usually be a class and `camel` an object, and in Prolog `Camel` is a variable whereas `camel` is an atom. Not sure about R though..?

      > And stuff that could be written in a simple Sinatra or Flask application were written in R Shiny.

      I don't know R Shiny, but the examples look neat and simple.[0] Are you sure this is not just a case of "I don't like X" rather than the code being bad?

      [0]: https://shiny.rstudio.com/articles/basics.html

      • neya 1987 days ago
        Well, that nomenclature works well for classes and objects, but if you mix up cases within a single object, I think we can all agree it's really bad code?

        As for R Shiny, those examples all look fine, but that wasn't the case with my code.

  • pmarreck 1989 days ago
    Here's a metaquestion:

    Is it possible for any codebase to NOT eventually (given enough time) become a crufty pile of garbage?

    I suspect (but have no real evidence... yet) that SOME of this spaghetti garbage is due to the traits of procedural, OOP, mutable languages. But this would then imply that things like functional-language codebases have much longer lifespans... and I don't have evidence for that... but I'm hoping someone can chime in

    • wirrbel 1988 days ago
      I think it's not a natural law, but several things contribute to it. Changing maintainers and committers, for example: when the original ideas are lost, so is the structure of the code.

      Also, the quality of the test suite makes a huge difference, in combination with the willingness of developers to refactor. No test suite automatically means no refactoring. With a test suite, the question remains whether the organization penalizes or encourages larger code changes (sometimes, with old code bases, managers demand pinhole surgery only).

      And then there is the funny thing that perfectly acceptable code bases age without any change to their code. Idiomatic C++ code from the '90s is, from our perspective, not clean, even if the person who wrote it was a dedicated and smart programmer following the best practices they could get hold of.

      With functional code bases you might see similar patterns. A Haskell person of today may look at an older common lisp code base with similar reservations as a java programmer to the visual basic 3 app.

      I just sat down and wondered how a tech stack potentially used by a startup could age. In the future I think we will see many more dependency problems. When you get to maintain a Django/Node.js/Ruby on Rails application, you always take the PyPI/RubyGems servers for granted (or your local mirror on Artifactory). Think about the time in 40 years when the dependencies are not available. Or the small languages we sometimes see and can still find tutorials for: how will it feel to take over a Lua codebase in 40 years? I hope that enough docs stick around, but already when I browse the web for Smalltalk material, most links are broken, because information really can disappear from the web.

      Or database technology. How will that NoSQL db appear to a maintainer in 30 years?

      So while today we are unhappy having to maintain software that was written without version control, it may be that future generations will have it even worse, because they won't even have a monolithic code base in front of them, but something with dependencies they cannot install anew anymore.

    • deanmoriarty 1989 days ago
      Linux kernel is massive and has been around for decades now and it’s probably millions of lines.

      It certainly has a lot of cognitive overhead since you need to always keep in mind the context in which the code you are writing will be running (to reason about concurrency etc.), but it’s relatively easy to understand and well written.

      • yjftsjthsd-h 1988 days ago
        Yeah, I think some FOSS projects qualify; I'd say OpenBSD.
        • brohee 1988 days ago
          OpenBSD has a very sane culture in which deleting code, or reformatting code without changing the compiled object, are good things.

          Most organizations don't see the value in that.

    • perlgeek 1988 days ago
      It's mostly a function of the effort put into the quality of the code base.

      Currently, I work (among other things) on a 25 year old code base that acts as an interactive, terminal-based interface for a CMDB.

      It has its problems (like mostly not using the exception mechanism of the programming language it's implemented in; exceptions probably weren't very reliable back then, or maybe didn't exist), but all in all it's OK to work with. Most changes touch only 1 to 3 files, most of which are pretty short (<100 lines of code, typically).

      There are "here be dragon" areas, but not too many.

    • RSZC 1989 days ago
      This site/paper isn't actually (entirely) in jest and tries to explore that exact question: http://www.laputan.org/mud/
    • wvenable 1988 days ago
      > I suspect (but have no real evidence... yet) that SOME of this spaghetti garbage is due to the traits of procedural, OOP, mutable languages.

      It's comfortable to blame the tools but in my experience it's a people problem, not a tool problem. I've looked back on my own code from years ago and wondered what I was thinking! I'm currently rewriting one of my own projects because it was done in a hurry with the requirements half-specified and my heart was not in it.

      But I've seen things from other people, insane twists of logic that can hardly be imagined.

      One of the projects I don't work on originates from the 80's. There is tons of actual code from the 80's in this product. It has a Windows GUI. It has a web interface. It started out on Unix at some point but now runs on Windows. It's written in a language that doesn't exist anymore. It costs millions of dollars. OOP vs. Functional is not really the question.

    • ilovetux 1988 days ago
      https://en.wikipedia.org/wiki/Software_entropy

      Edit: I actually like the following article much better, found it in the See Also section of the link above. https://en.wikipedia.org/wiki/Software_rot

    • mruts 1988 days ago
      I worked on an approximately million-line Scala codebase a couple of years ago. It started its life in 2007. The code wasn't the greatest, but because all developers throughout its lifetime had been militant about functional programming, refactoring it wasn't too difficult. When everything is referentially transparent, you can fix problems pretty easily.
  • api 1988 days ago
    The worst I've personally seen was a C++ Qt GUI app of many tens of millions of lines. I won't mention the name, to protect the guilty, but it's something some people here might recognize.

    It didn't really need to be that big. A lot of its size was the result of pathological over-engineering. Apparently a previous engineer (of course) who built much of the original app thought boost::bind and functional binding were awesome. He also absolutely loved template metaprogramming.

    The code base was full of templates that contained templates that contained templates that contained... layers upon layers upon layers of templates, generics, bind, and so on. The horror. The compile times were awesome too. Local builds on an 8-core machine took 15-20 minutes.

    I once made a commit to that code base where I replaced two entire subdirectories of files and tens of thousands of lines of code with one function. It took me weeks to understand and then finally to realize that none of it was necessary at all. It was all over-engineering. The function contained a case statement that did the job of layers of boost::bind and other cruft.

    I definitely had a net negative line count at that job. The experience helped to solidify my loathing of unnecessary complexity.

    It also made me respect languages like Go that purposely do not give the programmer tools like complex templating systems, dynamic language syntax, etc. It's not that these features have no uses, but honestly their uses are few. I've used Go for a while now and have found maybe two instances in tens of thousands of lines where I missed generics. I imagine I'd miss operator overloading in heavy math code, but that's about it. The problem is that these features are dangerous in the hands of "insufficiently lazy" programmers that love to over-engineer. I'd rather not have them and have to kludge just a little than to deal with code bases like the one I described above ever again.

  • gargravarr 1988 days ago
    I never actually saw the code that makes this work (thank Christ) but at the previous place I worked, we did everything in SQL. Believing SQL Agent to be 'too limited', my boss had taken it upon himself to reimplement it... in SQL...

    Just let that sink in for a moment.

    There were 20+ tables all modelled after how SQL Agent does its own scheduling, 30+ stored procedures for interacting with it, and it was all intended for use with a GUI that was (naturally) never written. A relationship diagram of the tables sat on my personal Wall of Shame board for most of the time I was there. And yes, I had to use it from time to time. It was installed in every database for 200+ clients, in production, UAT, development, you name it.

    The most delicious irony is, the only way such a thing could run automatically... was to use a SQL Agent job to poll it once a minute.

    • gargravarr 1988 days ago
      And yes, I'm aware you can programmatically create SQL Agent jobs in SQL, so he couldn't even use that excuse. The only thing it had over SQL Agent was that it could be 'event driven'. All that ultimately meant was that the job would only run if a condition was met.

      Yeah...

  • poulsbohemian 1989 days ago
    As many experienced commenters below have already noted, nearly any piece of substantial, revenue-generating, long-lived code will ascend to a place full of dragons and mysticism. The more interesting debates revolve around:

    1) How did it get that way, and what can we learn from that?

    2) Is it inevitable that all software will end up like that?

    3) How can an organization ever successfully sunset or move to a more maintainable system? Or should they even aspire to that?

    • spc476 1988 days ago
      Code that is easy to change is changed until it's no longer easy to change.

      Also: Well-designed components are easy to replace. Eventually, they will be replaced by ones that are not so easy to replace. -- Sustrik's Law

      Edit: Added Sustrik's Law

    • nathan_long 1989 days ago
      https://www.reuters.com/article/us-usa-banks-cobol/banks-scr... describes banks struggling to maintain ancient COBOL systems as the developers who understand them die off.

      I don't know exactly what they should have done and when, but it seems like rewrites are going to be necessary, and it sure would be nice to start rewriting a system while you still have someone who can explain what it does.

      • poulsbohemian 1989 days ago
        In the post-Y2K years, I heard a lot about how the next order of business was going to be to replace all those systems in health care, banking, etc., especially as a generation went into retirement. But here it is 2018 and there are still articles like this. Food for thought.
  • bogdanu 1989 days ago
    My worst experience was with a PHP app (no framework), 60K LOC per file, 6 dimension arrays, variables and comments in 3 different languages, elseif statements spread over 2-3k LOC, no kind of separation of concerns (HTML mixed with php, db calls, etc).

    It was a 3 months nightmare with an arrogant as fudge client/developer.

  • bayindirh 1988 days ago
    I have seen an entire project like this. It was maintained by one person at a time: the former developer left and transferred his work to the next.

    It was a C project, but the second developer was coming from a Java background. So his half is all written with Java notation and naming conventions.

    The thing was maths-heavy and had no comments. Instead, they consulted a documentation book which also listed file names and approximate line numbers.

    The error tracing was done with function call stacks, so every function pushed some data into a global stack before a critical operation and, popped the same data if everything went without any problems.
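
    A hedged sketch of that style of hand-rolled error tracing, transliterated to Python for brevity (the original was C; all names here are made up):

```python
# Global trace stack: push context before a critical operation, pop it
# only on success, so a crash leaves the failure path on the stack.
trace_stack = []

def risky_divide(a, b):
    trace_stack.append(f"risky_divide({a}, {b})")
    result = a / b            # the "critical operation"; may raise
    trace_stack.pop()         # reached only if nothing went wrong
    return result

def run(a, b):
    trace_stack.append(f"run({a}, {b})")
    out = risky_divide(a, b)
    trace_stack.pop()
    return out

try:
    run(1, 0)
except ZeroDivisionError:
    # The stack now records exactly where the failure happened.
    print(" -> ".join(trace_stack))  # run(1, 0) -> risky_divide(1, 0)
```
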

    Some libraries they were using were ad-hoc patched to taste, and grafted into the code tree. So, debugging was 5x harder and longer.

    The functions were not divided into headers in any logical way; the split was ad hoc. So finding something depended on a source indexer or a grep sprint.

    Last but not least, the developer was so arrogant that bugs were only resolved by threatening to clone his code and send it out to another developer to debug and clean. Otherwise he pretended the bugs weren't present, because everything ran on his test bed.

    ... and yes. It was in production.

  • bungie4 1988 days ago
    A nested if statement approximately 75 levels deep, all because the author didn't understand that an ID can be unique. So he manually checked the value (which meant it could never be changed without a code change).

    He didn't understand the concept of a join. So he'd nest queries in VBScript, with the join key supplied from the outer query to the inner. Row by row. Essentially, a manual cursor.
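
    The "manual cursor" pattern described above, next to the join that should have replaced it; a sketch using Python's sqlite3 with made-up tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, item TEXT);
    INSERT INTO users VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO orders VALUES (10, 1, 'book'), (11, 2, 'pen'), (12, 1, 'mug');
""")

# The anti-pattern: one inner query per outer row -- a join done by hand.
manual = []
for user_id, name in con.execute("SELECT id, name FROM users"):
    for (item,) in con.execute(
            "SELECT item FROM orders WHERE user_id = ?", (user_id,)):
        manual.append((name, item))

# The same result with a single join, letting the database do the work.
joined = list(con.execute(
    "SELECT u.name, o.item FROM users u JOIN orders o ON o.user_id = u.id"))

assert sorted(manual) == sorted(joined)
```
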

    Same programmer wrote an ASP portal app. Its login got most of its security from the fact that they didn't know how to iterate over a returned dataset. The same code would set a cookie for access IN THE PRESENCE of a password. It could be wrong; you would still get access. Worse, the logout function didn't delete the access cookie, it just redirected you to the login page. Meaning you could impersonate anybody if you knew their username. Including admin.

    I once corrected a bug by using a view. I sent him the view. He had no concept of what a view was: "That's like a stored procedure, right?" I'm shocked he knew what a stored procedure was.

    He's still in business and the software is deployed worldwide. He refuses to fix it. He's a multi-millionaire.

  • mixmastamyk 1988 days ago
    Worked on a file-sync system written in Python 2.6/7 by B-players in a hurry.

    Lots of microservices, before it was cool, that was fine. Shitty code everywhere, commented code, dead code, 5 blank lines here and there. Many lines over 100 chars, lots pushing 150 or 200+. Didn't understand how to use argparse or logging but tried. Crazy mixed-case WTFExtremelyLongSillyNamesEverwhereSendThis at the command-line interface, instead of verbs like send.

    Had a custom ORM that didn't want to look like one and took 5 times the code to do similar things. Little handling of exceptions, things like wrong permissions or IO like tar file creation might cause a 6 hour job to crash.

    Daemons couldn't be shut down gracefully; you had to tail their logs until they paused for a moment, cross your fingers, and kill the process, often with kill -9. The old daemontools made it more difficult. Bad timing could mean you were in for 3 to 6 hours of manual job cleanup work. It would happen a few times a week anyway, cutting into dev time. Still, you could count on it to work about 90% of the time.

    Token test suite and docs. Embarrassing web interface that would have looked amateurish in the '90s. The original developer made us do a standup every day at 10:30am, just when we were getting into the zone. The standups felt worthless for a while, and then it dawned on me why: we were all working on different projects.

    The punchline: spent three months of 60 hour weeks taking out the trash, writing tests, paying down debt. Spent the next month or two with another dev designing/writing a vastly improved V2 with graceful shutdown, Django-style ORM, and quality as headline features.

    A few weeks before we're about to knock it out of the park and deliver, the old author of V1 comes back in a panic, says we need to finish at the end of the month as a huge project is wrapping up. Doesn't seem to make sense; big changes at the end are a bad idea. He takes over control of the project, designs/implements V1.1, pushing aside our improvements, and whips it up in a few weeks while I sit there with nothing to do. Six months' work of 60-hour weeks flushed down the toilet.

    After picking jaw off floor and offering a few choice words I left the job by mutual agreement a few weeks later and didn't look back. Good times.

    • dcow 1988 days ago
      My only code style rule is don't write lines more than 100 chars, which usually means "one statement per line". It floors me how people think it's acceptable to write code that you have to scroll along not one but two axes to comprehend. Reading code should be like reading a book. And forcing yourself to write code that grows vertically instead of horizontally usually solves all the problems (and more) that people dumping countless hours into auto-formatters are trying to grapple with.
      • mixmastamyk 1988 days ago
        He'd code fullscreen with a 16:9 monitor, so felt "it's not a problem."
  • walrus01 1989 days ago
    Not exactly code but people would be scared if they found out how much of the Internet, at OSI layers 1, 2 and 3, is held together by the metaphorical equivalent of duct tape and twine. At OSI layer 1, sometimes almost literally. Major ISP outages that have been caused by somebody tripping over the $5 power strip that was powering the authoritative nameserver.
  • mikekchar 1989 days ago
    I worked on the DMS switch at Nortel (if PRI broke for you, I'm sorry, I tried my best!). 31 million lines of absolutely horrific code. You'd think somewhere in that mess there would be some redeeming code, but if there was I never found it.

    To give you some examples, I originally came on as a contractor because they had some refactoring they wanted done. The entire system was home built (including the programming language) and there was a file size limit of 32,767 lines. They had many functions that were approaching this limit and they didn't know what to do, so they hired me. Probably you can imagine what I did.

    One time I went to a code review. They were writing a lot of data into some pointers. I asked, "Where do you allocate the memory"? The response was, "We don't have to allocate memory. We ran it in the lab and it didn't crash, proving that allocating memory is a waste of time". No matter how much I tried reasoning with them, I couldn't convince them. The code shipped like that.

    One of my more amusing anecdotes is that when I worked there the release life cycle was five years long. The developers would work on features for 3 years. The developers were responsible for testing that their own code worked. There was no QA. After 3 years, we would ship the code to the telcos (telephone companies) and they would test it for acceptance for 2 years. We would fix the bugs that they found.

    I started working there at the end of a release cycle, so people were only fixing bugs. I got an interesting bug in that I couldn't find any code that implemented the feature. The feature had apparently been implemented at the beginning of the cycle (so around 4 years before), by someone who was now my C level manager. I started looking at the other features that person had implemented. There was no code. It seems that this enterprising person had started work and realised that nobody would check his code for 3 whole years. He just checked off all his work as done without actually doing anything. Since he was an order of magnitude faster than everybody else, he was instantly promoted into management. When I reported my findings to my manager, he made it clear I wasn't to tell anybody else ;-)

    Such a messed up place. But the switch worked! It had an audit process that went around in the background fixing up the state of all the processes that ended up in weird states. In fact, when I worked there, nobody I worked with knew how to programmatically hang up a call. If you were using a feature like 3-way calling, they would just leave one side up. Within 3 minutes, the audit process would come by and hang up the phone. Tons of features "worked" that way: by putting the call into weird states and waiting for the audit process to put it back again. You could often hang up after a 3-way call, pick up the phone, and still be connected to the call.

    Most people don't know it, but because of some strangeness with some of the protocols, telcos used to "ring" their main switches with Nortel DMS switches. This would essentially fix the protocols so that everything could talk to everything. So, if you ever made a long distance telephone call 20 or 30 years ago, it almost certainly went through a DMS switch. The damn thing worked. Somehow. I have no idea how, though ;-)

    • nobody271 1989 days ago
      This one might take the cake. Thanks for sharing.
      • mooreds 1988 days ago
        I agree. It's like a buffet. Multiple stories, each frightening in their own way.
    • Aloha 1988 days ago
      If you've made a long distance call at any point in the last 20 years (including today) you have decent odds that the call went thru a DMS.
      • mikekchar 1988 days ago
        It's been decades since I worked on telephones. That scares me a bit :-D
  • stabbles 1989 days ago
    Nobody mentioned Wordpress yet? Last time I checked it was a terrible pile of procedural code
    • Endy 1988 days ago
      Without a doubt. I've run into so many little "glitches" in WP that it really soured me to CMS backed websites. The next site I run is going to be 100% hand-coded.

      ... and it'll probably load a lot faster than the PHP+CSS nightmare that is WP.

  • agentultra 1989 days ago
    A web application written in C++ before there was the STL or many libraries written for C++... in 2007 or so. It was probably 200k LOC or so of CGI application code, libraries to render PDFs, manage a filesystem of XML files, an XML parser... none of it documented or tested.

    The developer who was leaving the company dropped a copy of Michael Feathers' Working Effectively with Legacy Code. There was a small amount of Python to wrap the API to the C++ code using Boost against which a small suite of unit and functional tests were being developed. I learned a lot on that project.

    I never fully understood how the C++ code all worked but having that API interface in Python helped to grok parts of it (and eventually replace it with a few hundred lines of Python code at a time).

  • billwear 1989 days ago
    Actually, if you want to see clean code that works well, try the traditional ex/vi source. It's something like 20K lines, but very neat functional programming all the way down.
    • jackalo 1989 days ago
      I just want you to know that through rabbit holing 'traditional ex/vi source', the following happened...

      1) Read a Wikipedia page for vi.

      2) Noticed that Bill Joy used a Lear Siegler ADM-3A terminal.

      3) Discovered that Lear Siegler was the result of a merger between Siegler Corporation and Lear, Inc.

      4) I know that Lear, Inc. was founded by William 'Bill' Lear.

      So, not only do we have Bill Lear to thank in a way for vi, but your username is oddly familiar...

  • pacoWebConsult 1989 days ago
    12k line code-behind for an ASP.NET page that duplicates PDF files, driven off of a spreadsheet, and adds some additional information at the top of each page. This has generated the majority of a major state's ballots for at least the past 4 elections.
  • andyhasit 1985 days ago
    I might have a slightly different record, if you count lines in a single source file...

    One client wanted to hire me to continue developing his AutoCAD plugin written in AutoLisp (a cut down version of lisp for writing macros in AutoCAD). He had all his code in a single file which was around 63,000 lines long. (I calculated this as 504 meters of screen/paper top to bottom).

    This was for a product that was in production, used commercially, and he'd been going for 15 years, adding bits as he went. There were loops hitting 300 lines, well over a hundred global variables, no consistency, and duplication like you wouldn't believe because he hadn't grasped the idea of moving code out to reusable subroutines.

    I asked him what he did when his clients found a bug. The answer was "knuckle down for a few weeks".

    I just couldn't believe that this was his daily work for 15 years, and he never thought to learn anything about programming other than what was immediately required to solve the problem in front of him...

    Another unbelievable part of this is that AutoCAD has an absolutely superb IDE built into it (i.e. it's fricking free!) with features like the ability to select & run code in the current scope with minimal clicks, which makes for the fastest development environment I have ever played with; as in, you have no idea how much functionality you can churn out in a day. But he didn't like it, so he edited his code in... wait for it... WordPad! (That's the Windows built-in rich text editor, where double-clicking on an underscored_word only selects up to the underscore because it's not made for code, and that on its own makes it impossible to work with.)

    I tried to modify his code for several hours but had to stop myself because it was madness. So I told him the only way forward would be to rewrite everything from scratch after which point I could make it do anything he wanted. I reckoned it would only take 4 weeks to rebuild his 15 years of mess - partly because the IDE makes development so damn quick, and was even willing to do it on a fixed price, but he declined, which I think was utter madness.

    The thing is, he had 15 high paying clients, and two other part time employees helping with other aspects of the business, and drove a brand new Audi A3, which means he probably holds a world record for highest ratio of money earned to quality of code written, at least in the CAD subcategory :-D

  • cat-turner 1988 days ago
    In my past life I was an oceanographer at a consulting company. Essentially we wrote Matlab code that processed data we got from ocean and meteorological models. Many of these models, used by NOAA, are written in Fortran. While most were well documented, one thing you could not get around is the use of parameters in these models (think of global variables used everywhere) and a never-ending game of GOTO statements. I had to figure out how one flood model worked... literally had to read 5,000 lines of Fortran. This is what we use to determine potential flood impact from hurricanes, today. Software written for oceanography is a whole other beast.
  • gassiss 1988 days ago
    Three years ago I was hired as a Business Analyst, with zero background in anything related to IT. Our team was responsible for maintaining a legacy code base in COBOL.

    As time went by, everything started to make sense and I was able to grasp almost everything about the environment and the tech stack. Even perform a little bit of system analysis in COBOL to try and identify some gaps in the code base.

    But what always intrigued me is that even the senior developers on the team with 25+ years of COBOL experience wouldn't ever touch this one program. The program responsible for 90% of the logic of the company's product. Every now and then, this program would ABEND for whatever reason, and sometimes no one could figure out why. They respected (a cute way of saying they were afraid of) this program so much that, instead of refactoring it, they would just throw in an if statement and let it run.

    This program was built in the 60s and had I don't know how many hundreds of thousands of lines of code. It is still running to this day.

    Now I don't have the expertise to say whether that was bad code, and even if I had, I didn't deal with this program enough to say so anyway. But I was very intrigued as to why this particular program would ABEND out of nowhere, and why none of these super experienced developers would have the guts to touch it.

  • dotdi 1988 days ago
    I was given the opportunity to work with iOS at one of my former employers and I quickly agreed (I was a backend developer back then). It was a rather popular application at a rather large Austrian media company, with around 50k unique monthly users.

    Boy did I regret that decision!

    - 100KLOC

    - Initial development outsourced to India. Comments and variable names in an Indian language.

    - Subsequent development outsourced to Belarus. Added comments and variable names in Russian.

    - "Why use ObjC OO features when we can write buggy and incoherent C"?

    - Global, implicit state everywhere. Tapped on something? Hope it didn't mess up the state you are relying on.

    - Obviously no tests, and no testers.

    - Inheritance chains of up to 20 classes.

    - iOS kindly forces MVC, but you can obviously write empty controllers and all spaghetti-logic into your views. Needless to say, that was what they did.

    - Complete lack of proper structure. Several views iterated up their parent controllers to the desired one, grabbed its views, iterated over them until the right view was found and something was done to that view.

    - Building and running the application was controlled by 10+ env variables (the other dev was fired after he pushed a dev build to the App store, which mysteriously passed review. "Whoopsie, forgot to set one of the env variables correctly").

    - about 80% of the logic was copy-pasted for the iPad build instead of reusing anything. It was not a different target, it was a separate project.

    • charlesdm 1988 days ago
      You were "given the opportunity" huh? :)

      How long did you work on that before you threw in the towel?

      • dotdi 1988 days ago
        About six months. I managed to actually get a new feature into that hot mess while fixing bugs that would take days to weeks to track and handling compatibility with a new iOS major version, but then I got an offer to work in the space industry and quickly left.

        A while later I heard they threw everything away and started an in-house rewrite in Swift. Some people do seem to learn from their mistakes.

        EDIT: clarification on the opening statement.

  • wbsun 1988 days ago
    Two cases:

    1. Millions of lines of messy Java code with annotations and dependency injection forming a multi-shard distributed job with tens of distributed downstream backends, some of which provide fake always-success synchronous RPCs, hiding the fact that the underlying operations are actually asynchronous and may fail very often. What makes the code worse to work on is that it was built around a single transactional database, but due to reliability issues with that database, the data is now spread across at least three different transactional databases. This causes endless race conditions and concurrency issues everywhere. The production release of this distributed job used to happen once every week; now multiple months is normal, and a rollback spanning a quarter is not a surprise to people.

    2. Almost a million lines of messy C++ code, with several .cc/.cpp files containing tens of thousands of lines each; some class implementations are spread across multiple .cc/.cpp files. I have always been scared to touch some of these giant files. People who have been working on the code for years can still easily make ignorant mistakes when adding or modifying a small feature (with so-called full unit-test coverage, of course). There are multiple millions of lines of testing code, almost 10x the code being tested, but most of it is bogus and tests almost nothing, hence silly mistakes are everywhere, every day. Even the original author of the code base needed 5 follow-up fixes to make a 10-line behavior change work.

    For both code bases and the jobs running these code, people are now talking about breaking them into microservices, by converting function calls in the existing code bases into RPCs. I can foresee a tremendous number of service outages are coming...

  • a-dub 1989 days ago
    NaN kloc.

    Nobody could even describe definitively where all the code could be found in version control. The best part was that nobody trusted version control either, so even if you did find something that looked relevant, there was a good chance it was dead and only served the purpose of confusing you or forcing you to memorize unstructured structure just to be able to get around.

    It was madness inducing.

  • shadoxx 1988 days ago
    Here's a story I haven't shared in awhile.

    One of my first gigs actually getting paid to code was getting hired on as a last-ditch effort to save what was (unknown to me at the time) a failing business. It was a company that basically relisted real-estate auctions on their own site, coded entirely in PHP by a single developer who had read "How to Code PHP in 24 Hours". They had one client that was basically keeping them afloat. It was so disorganized that at one point I was tasked with making a quick YouTube commercial in Adobe Premiere for a client, showing off the features of our whitelabel product with their logos, edited from a stock template. I do not know Adobe Premiere. I digress.

    The main PHP file (yes) was 20,000 lines of code. Want to add a new feature? Copy that file into a new file and save it as newfeature.php. Database operations weren't transactional, there was no change management, and for about a week we were using production systems to code until development environments were made for us.

    There was other shady stuff going on too, like using over a hundred proxy accounts to scrape content from other listing sites. I refused to touch or even look at that logic in the codebase, and it was always talked about in kind of a hushed way. I was young, didn't know any better, would nope the eff out if a similar opportunity came along at this stage in my career.

    They folded shortly after laying off pretty much everyone but the CEO and the lone coder. Dumpsterfire would be an understatement, but my coworkers were chill and helped make the best out of a bad situation.

    EDIT: Oh yeah! I forgot to mention the hardcoded password that was site-wide, which we used as a sort of "impersonation" feature. You could type in any user account and this password, and it would log you in no problem. No, we did not have auditing controls.

  • rageagainst20 1988 days ago
    When I took over my current job, an outsourcing company that employed developers based in Islamabad (Pakistan) had written most of the code. It's C#; they didn't understand the concept of null, or of by-ref vs. by-value.

    It was interesting. The database was Mongo, and they used a horrible Entity Framework port that they had forked on GitHub; it wasn't maintained and was, at the time in 2015, 3 years old.

    The data layer had business logic. Say they were getting a user record from the database but didn't want to include the first and last name; they'd write a method called

    GetUserWithoutFirstAndLastName();

    That would be in the User repository. Another requirement would come up to get the user but not include the user's language for instance and they'd create another method

    GetUserWithoutFirstAndLastNameAndLanguage();

    Ended up with about 70 or so methods which basically gave different levels of hydration for the User object.
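
    The explosion described above can usually be collapsed into one parameterized projection. A hedged sketch in Python rather than the original C# (all names are illustrative):

```python
# Instead of GetUserWithoutFirstAndLastName(),
# GetUserWithoutFirstAndLastNameAndLanguage(), and ~70 other variants,
# a single method takes the fields to exclude as a parameter.

USER_ROW = {"id": 7, "first_name": "Ada", "last_name": "L", "language": "en"}

def get_user(user_id, exclude=()):
    record = dict(USER_ROW)  # stand-in for the actual database fetch
    return {k: v for k, v in record.items() if k not in exclude}

u = get_user(7, exclude=("first_name", "last_name"))
assert "first_name" not in u and u["language"] == "en"
```
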

    The frontend was written in Extjs which took about 25 seconds to load.

    They had no tests.

    • abledon 1988 days ago
      C# 8 just announced nullable reference types as a feature
  • jancsika 1988 days ago
    If you search youtube for something like "pure data dance music" you'll find some excellent spaghetti diagrams that produce music/video.

    Since the environment provides realtime feedback, it's a drag to try to refactor the current diagram into a reusable abstraction. Instead, many users optimize their creative time and just keep adding functionality to the diagram in a single graphical window. By the time they are done there is text overlapping other text and a bunch of lines obscuring most of the diagram.

    Even with the ones that have a simple set of controls for a sequencer or whatever, there is usually a "guts" module that hides all the spaghetti.

    Also, for any running program you can instantly make it twice as spaghetti by doing "select all" and "duplicate." :)

  • jtolmar 1988 days ago
    All code is bad and Google puts almost everything in one repository, so I'm going to say that ;-).

    For more traditionally defined bad code, I worked with a team that rewrote an existing service from scratch with 40,000 lines of spaghetti. The service it replaced was 80,000 lines, and was replaced despite working perfectly fine because it was a total clusterfuck, with dependencies interwoven across every service anyone had ever heard of. This was a payments system for a major online retailer. All of this code has since been removed.

    I also used to participate in Java4k, a competition to make games fit in 4 kilobyte jar files. Writing almost everything inside a while loop in a single function is basically table stakes for that.

    • geezerjay 1988 days ago
      > I worked with a team that rewrote an existing service from scratch with 40,000 lines of spaghetti. The service it replaced was 80,000 lines, and was replaced despite working perfectly fine because it was a total clusterfuck, with dependencies interwoven across every service anyone had ever heard of.

      Your post should be at the top of the thread. Rewriting projects from scratch tends to do more harm than good, and the only reason this problem isn't addressed very often is that the people invested in reinventing the wheel don't admit to having caused more problems than they solved, and the ones who developed the old systems have moved on and thus are unable to say anything in their defense.

  • ryanmarsh 1989 days ago
    Any large company. I’m being serious. As a development coach I see a lot of code and I must say the DEFAULT is “how does any of this even work?”
    • TideAd 1989 days ago
      I've been working on fixing some old C++ code at my company so that it compiles with the most recent version of the compiler. To my surprise, the errors that the new compiler gives me are very reasonable. But I always ask myself, "how on earth did this ever compile in the first place?"

      The crazy part is going into the commit logs and seeing that a lot of the people who wrote this have been very successful. They've become vice presidents, retired rich, and written influential papers.

      • flukus 1988 days ago
        I've been doing the same, porting (getting to compile and seemingly run) an old C and C++ codebase from 32-bit Solaris to 64-bit Linux. For a while I was spending my days alternating between "how did this ever compile" and "how did this ever run" while debugging. It took a lot of modifications to get it to compile on a now-6-year-old version of GCC, then I got to debug a lot of issues with type sizes, endianness, and POSIX differences. The team responsible for it is still in favor of keeping it 32-bit, unfortunately, so we can't even do a desperately needed Valgrind run. I wish I could look at the history, but I don't have access to the ClearCase repository I think they're still using.

        We've got another codebase of similar vintage that is preventing us from deploying spectre patches...

  • baron816 1989 days ago
    I’d like to hear people describe why the code they’ve seen is bad.
    • mikekchar 1989 days ago
      In my experience (despite what you might imagine) the biggest factor has been lack of experience for the developers. You can get some pretty weird ideas when you first start out. Those weird ideas can gain traction in a group and then it becomes "the way" to do it. Over time, the system gains cruft and there is no practical way of addressing the problems. It takes really experienced people who specialise in legacy code to systematically improve large systems -- and it takes a donkey's age. It's just not a sustainable enterprise.

      We tend to think that harsh deadlines and unthinking managers are to blame, but writing good code over the long term is incredibly difficult. Group dynamics are hard to deal with and as you add (or replace) people on the team, you are bound to eventually go off the rails even if you started off well. Which is not to say that you shouldn't try, but our industry is really immature. Most developers are quite young and even when you have a couple of older people on the team, they may or may not have the skills for long term design evolution. Maybe 50 years from now it will be the norm to write good code, but I think we'll still be writing legacy messes for a while.

      I should point out that even the worst code I've seen written this decade is at least an order of magnitude better than the average code I saw when I started my career. As an industry we are improving!

    • natalyarostova 1989 days ago
      The core tech lead on our team switched projects, new engineers were assigned to our projects and given tight deadlines, as well as being responsible for giving each other CRs. Our once well maintained code base began to fall apart, and I realized keeping it clean wasn't worth burning myself out. This is also my first real job working on/with software, so I suppose this is normal?
  • JoeAltmaier 1988 days ago
    A 10,000-line C++ module for transmitting hundreds of different packets from the driver to a radio module, all cut-and-pasted copies of the same code with different structure names. Bug-ridden, too (bad copy-paste-edit bugs).

    I replaced it with an 11-line template.
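    For flavor, here's a rough Python sketch of the same idea: one shared routine plus a table of formats, instead of one hand-copied function per packet. (Formats and names are invented for illustration; the original used a C++ template over the packet structs.)

```python
import struct

# Each packet type is one data entry instead of one hand-copied function.
PACKET_FORMATS = {
    "ping":   "<BH",    # type byte, sequence number
    "status": "<BHI",   # type byte, sequence number, status flags
}

def encode_packet(kind, *fields):
    """Single shared routine replacing hundreds of cut-and-paste copies."""
    return struct.pack(PACKET_FORMATS[kind], *fields)

print(encode_packet("ping", 1, 42).hex())
```

    Adding a new packet type becomes a one-line table entry, so the copy-paste-edit class of bug disappears.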

  • clausok 1988 days ago
    I worked on an Excel/VBA swat team at an investment bank. Whenever someone in the various business units got themselves into more trouble than they could handle with an Excel model, we'd try to set things right. We saw a lot of wonderfully creative VBA code from industrious programming beginners (or, as we liked to call them, "Macro Recording Cowboys"). The most amusing I saw was a model where the VBA code used a Greek mythology variable naming convention:

    Dim Hermes As String, Artemis As Long, Odin As Range

  • apnkv 1988 days ago
    A few weeks ago I had to reproduce one of the recent papers on deep learning theory. It had ~3000 lines of code across around 10 files, but when I simply removed stale and unused calls and spaghetti (much as static analysis would), it shrank to ~1000 lines with identical functionality.

    The original code also computed the most time-consuming routine two to three times per run and had many, many typos, e.g. "calculte_infomration". Imagine that, but in every second function/variable name.

    However, it worked.

  • YeGoblynQueenne 1988 days ago
    There should be an award. Like the Turing award, but not quite.

    The trophy should be a plate of spaghetti, in gold.

    • kieckerjan 1988 days ago
      Hell yeah! Like the Golden Raspberry Awards or the Bad Sex Awards, but, you know, for code!
  • osrec 1988 days ago
    I worked in investment banking as a quant. Once had the misfortune of working with a regulator-approved custom built market risk system from the 90s. It had code in every language you can imagine, from cryptic Perl, to Java, to C. The thing worked, but development cycles were a year long, primarily to allow for regression testing. It was a monster to debug, as very few people have the brain space to simultaneously comprehend that many moving parts.
  • wernsey 1988 days ago
    My first job out of university was to program IVR systems for a telecom company using an in-house developed framework.

    An IVR application would typically play a recording to the user, then wait for the DTMF signal, then maybe take the user to a sub-menu, prompt the user to choose another option and do a database query and then play another recording and so on.

    The IVRs had to repeat a menu if the user made an invalid selection, and "press 'star' to return to the main menu" and so on.

    So most of the applications I maintained were thousand line C++ functions that looked something like this (paraphrased):

      void ivr_main(int ch) {
      main_menu:
        play("mainmenu.vox");
        d = get_dtmf(ch);
        if(user_hung_up(ch)) return;
        switch(d) {
        case '1' : goto menu_1;
        case '2' : goto menu_2;
        case '3' : goto menu_3;
        case '4' : goto menu_4; 
        }
        goto main_menu;
      menu_1:
        play("menu1.vox");
        d = get_dtmf(ch);
        if(user_hung_up(ch)) return;
        switch(d) {
        case '1': goto menu_1_1;
        case '2': goto menu_1_2;
        case '*': goto main_menu;
        }
        goto menu_1;
      // etc...
      }
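    One common cure for goto-ladders like this is a table-driven menu: the call flow becomes data, and a single loop interprets it. A hypothetical Python sketch (menu names taken from the example above, everything else invented):

```python
# Each menu is data: a prompt plus a mapping from DTMF digit to the next
# menu. "Press star to return to the main menu" is just another entry.
MENUS = {
    "main":   {"prompt": "mainmenu.vox", "keys": {"1": "menu_1"}},
    "menu_1": {"prompt": "menu1.vox",    "keys": {"*": "main"}},
}

def next_menu(current, digit):
    """Invalid input repeats the current menu, mirroring the goto loops."""
    return MENUS[current]["keys"].get(digit, current)

print(next_menu("main", "1"))   # a valid choice moves to the sub-menu
print(next_menu("main", "9"))   # an invalid one repeats the prompt
```

    The thousand-line function collapses into one interpreter loop over the table, and adding a menu no longer means adding labels and gotos.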
  • Latteland 1989 days ago
    After many years as a software engineer, I can only say that every large old application is a pile of messy junk. Whether it's database software at Microsoft, visualization software at (another company), a database at a third company, another database at a 4th company, another database at a 5th company, they are all filled with junk and crap.
  • joddystreet 1988 days ago
    The one I am currently working with:

    - Multiple calls to the database (often in a loop)

    - Rather than passing a reference or variable, the DB id is passed as a parameter to functions, so there are multiple calls to the database for a single object per API call

    - The database design was normalized initially; for "optimization", most of it has since been denormalized

    - Queries without indexes; if we were to create an index, we'd probably have to create one on every column

    - One function has more than 36 if-elses (including nested ones)

    - Does not work with a partitioned database or multiple databases

    - OLAP-like queries running wild on OLTP systems, just because we can

    - Fastest API call: 1 second. Slowest: <i dare not say>

    It started as an outsourced project and the same codebase is being dragged on, with features added every day and no coherent plan.
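    The id-passing pattern is the classic N+1 query problem. A toy Python sketch of why passing ids instead of loaded objects multiplies round trips (all names hypothetical):

```python
# Toy "database" that counts round trips (all names hypothetical).
DB = {1: {"id": 1, "name": "widget"}}
QUERY_COUNT = 0

def fetch(obj_id):
    global QUERY_COUNT
    QUERY_COUNT += 1
    return DB[obj_id]

def describe_by_id(obj_id):
    # Anti-pattern: every helper takes an id and re-fetches the row.
    return fetch(obj_id)["name"]

def describe(obj):
    # Fix: fetch once, pass the loaded object around.
    return obj["name"]

obj = fetch(1)                 # 1 query
for _ in range(3):
    describe_by_id(obj["id"])  # 3 more queries for the same row
print(QUERY_COUNT)             # calling describe(obj) instead stays at 1
```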
  • boomlinde 1988 days ago
    I've been patching a configuration user interface web application that uses the same codebase for a bunch of different product categories. Some 40 files totaling 15000 lines of HTML/js combining filepp preprocessing (a C-like build time pre-processor) and SSI (server-side includes and conditional rendering with directives snuck into HTML comments) split over several repositories that are all tightly interdependent. Adding to that, obviously no one wants to claim ownership over this codebase and changes are frequently made by people that understandably don't know exactly what they're doing and reviewed by other people shooting from their hips. Any change to it has to be considered for every product that uses it, which I've never seen a full list of.

    Not too many LOC but the badness/LOC ratio is terribly high.

  • cgh 1989 days ago
    Microsoft Biztalk. By "work", I mean it didn't crash. It also didn't do anything useful.
    • arethuza 1989 days ago
      If SAP is how Lucifer interacts with our world then Biztalk is how Cthulhu seeks to enter our world by contaminating the minds of innocents with its eldritch horrors.

      Mind you, it may have got better than my experiences with it, which as you can probably guess weren't very positive.

  • dapreja 1988 days ago
    ADP Retirement Services' first generation of PES (Plan Entry System). It was/is an internal tool with all the horror stories you hear about internal tools: the people who made it wrote obscure logic with cryptic variable naming, with the main purpose of job security; it's a core tool in their business model but is treated like gutter trash; it was built with Visual Basic, with embedded Angular code bolted on to make it look more modern; the supposed rewrite talk went on for 8 years; and, to put icing on top, it was QA'd/maintained by underqualified offshore old-school contractors (you can't even call this group devs/engineers).
  • joecool1029 1989 days ago
    The leaked Windows source some years back was pretty damn ugly.

    And it was only a small portion of the OS.

  • btbuildem 1988 days ago
    Microsoft Word back in '99 -- can't really estimate the amount of code, I didn't have the experience to handle something like that. What I saw of it and what I worked on, that alone made me reconsider my career path.
  • andrewf 1989 days ago
    If Foo1.php, Foo2.php and Foo3.php are all copies of the 1000-line Foo.php, each with a different smattering of bug fixes, each included by a different 10 - 20% of the overall codebase... am I allowed to count all 4 separately?
  • rocfreddy 1988 days ago
    I worked for a cell phone company. They had a large piece of software that is used by all customer-facing employees to manage accounts (billing, network, provisioning). Over its life, the software had been rewritten in many languages. Prior to my joining the team, someone thought to take a .NET application and rebuild it as a web application (Spring MVC and ExtJS4).

    The first day on the team, before even looking at the code base, I ask our development lead what unit testing framework we are using for the Java code and what we are using for the front end code. He gives me this funny look and tells me to speak to the guy onboarding me. I of course go and ask the lead onboarding me and he gives me an answer that turns my world upside down.

    "Automated testing is a waste of time. You will spend more time writing tests than developing code and delivering stories to the business."

    I need to point out that this guy eventually goes on to an executive-level position, proving that it is not the quality of work you deliver, but the optics of delivering quality software, that counts in large corporations. I digress.

    I receive my SVN credentials later that day and come to the realization that I have made a very poor career choice. The front end code alone is over 5k JavaScript files, with functions spanning thousands of lines, all full of hundreds of nested asynchronous callbacks. Not only is the code crap, but the tools that we had were not used correctly. For example, comments.

    //01/01/2018 - Fix bug - Begin
    //01/01/2018 - Fix bug - End

    The code base was riddled with these. What story was this for? It would have been useful to pull up the Jira story and see what you were trying to do. I guess I'll spend the day going through your 5k-line function that does everything, but nothing. Or, better yet, let me check the SVN commit log. Maybe they committed some good notes for me there.

    01/01/2018 - Fix bug

    Damnit!

    I did my year, which is the minimum you can stay in a job before posting out, and I left without a second thought. When I left, the business was complaining because the testing team was larger than the development team. I wonder why?

  • INTPenis 1988 days ago
    Bad code is a relative term. It might look awful but the design might be decent and of course it works or it wouldn't qualify.

    I'm maintaining two semi-large applications right now that I wish I had the time to fix.

    The bad stuff is basically no documentation, no attempt at PEP 8 compliance, code written before decorators existed in Python, and a heavy dependence on Python 2 syntax.

    But it works, and it's actually a very good distributed monitoring system that was ahead of its time. It's closed source, but it would remind people of Prometheus if it were released, and it predated that product by several years.

  • geekbird 1987 days ago
    The ugliest, most incomprehensible code I've ever seen was when I had to mess with RedHat's version of Anaconda (for building distros) back in the early 2000s. It was worse than the multi-thousand-line Tcl screen-scraper that I had to occasionally fix in 1999. It was the first code I'd ever seen written in Python. You couldn't really trace anything; it was all so nested and idiomatic. It made the Perl code I inherited from a long-time Perl guru downright comprehensible. I've looked askance at Python since.
  • blobmarley 1988 days ago
    Had to maintain web code from the pre-JSP era, i.e., servlets with the UI written in Java classes (think writing HTML in s.o.p). The beauty of it all was that there were only two classes for the entire application: one was the servlet, and the other held all the UI code for all the pages. Requests bounced between the two of them, held together and dictated by a series of "if" conditions checking for string equality in a method with 50 variables, all strings, named s, s1, s2 ... s49. And no comments anywhere.
  • zie1ony 1989 days ago
    Magento.
    • ttoinou 1989 days ago
      Now I understand why Adobe bought them :D
    • zhengyi13 1989 days ago
      ... which was theoretically better than OSCommerce :/
      • leesec 1989 days ago
        ...or Websphere
  • h4y44 1988 days ago
    I was on an outsourcing team working on a Panasonic door-phone system. The source code was big, but what my team did was develop some additional functionality, including video calling. It was about ~20 KLOC written in C for the UI and related parts (all we were contracted to do, since the layer beneath was not disclosed to us). The source code was a mess, since it was written by inexperienced programmers and even intern students. Later I quit that job :)
  • thom 1988 days ago
    I once had to wade into a PHP app for a government procurement website that was just horrifyingly messy. I used to maintain records for the highest line number on which the opening <html> tag appeared in a template (well into the thousands) and the highest number of opening/closing PHP tags (hundreds in a single file). There was no structure beyond bunging stuff in the session and hoping it all hung together.

    The app ran on Mac servers.

  • tluyben2 1988 days ago
    A million+ lines of Java code for a product we (at a previous company) had; it started out nicely, with design patterns etc., but after 5 years with ~200 different coders adding and changing things and no time to refactor, it became bad enough to trigger a phase-out of the product and its replacement by another one. I actually moved roles to figure out how not to make the same mistakes; those lessons serve them to this day.
  • Walkman 1988 days ago
    Two projects of 300,000 lines of Python code each, at a small company handling physical access control systems with smartcards and the like. Once I looked at the most critical part, the logic that decides whether a door should be opened, and it was implemented on the assumption that Python's bitwise operators work the same way as C's (they don't).
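    One concrete difference is precedence: in C, `==` binds tighter than `&`, while in Python `&` binds tighter than `==`, so a line-for-line port of a C-style flag check can silently change meaning. A small demonstration (flag names invented):

```python
# In C, 'flags & READ == READ' parses as 'flags & (READ == READ)' because
# '==' binds tighter than '&'. In Python it parses as '(flags & READ) == READ'.
READ = 0b001
flags = 0b011

python_reading = flags & READ == READ   # (flags & READ) == READ -> True
c_style = flags & (READ == READ)        # what C's grouping computes: 3 & 1 -> 1

print(python_reading, c_style)
```

    The same source line checks "is the READ bit set" in Python but "mask with 1" in C, which is exactly the kind of bug you don't want guarding a door.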
  • Tade0 1989 days ago
    An Angular app with ~50k LOC.

    Barely any tests and several hand-rolled components, some of which were merged only because the author managed to implore somebody to give them an R+ after a few sprints of the branch just sitting there.

    A total of 21 people were involved in creating this system and for some insane reason we still had daily standups with almost full attendance.

    • dasmoth 1989 days ago
      What's the problem with "hand-rolled components"? Isn't that pretty much the essence of building non-trivial frontend stuff? Or does this have some special meaning in the Angular world?
      • Tade0 1989 days ago
        Not in the case of components which could easily be replaced with a well-tested and proven solution.

        Notable among the ones in this project was the date picker - there's a decent number of those available and yet somebody made the decision to hand-roll it.

        The result was a mess that for some reason had a 300 LOC service as part of it. Needless to say, it had minimal tests.

        It was a waste of man-days in a project that was already over budget.

        • rhinoceraptor 1988 days ago
          There are a lot of date pickers; however, if you have a specific UI and UX to implement, it might not be worth trying to wrangle one of them into doing what you need it to do.

          There are so many integration points (business logic, i18n/l10n, styling, keyboard navigation, accessibility, etc.) that I think it's often easier to write your own. Hopefully now that most browsers support <input type="date" />, we can just use that most of the time.

  • DaveSapien 1988 days ago
    My own codebase, the one I'm working on right now. It went from a proof-of-concept demo, to a small mini game, to a bigger mini game, to a full game, to a SaaS product, with no time (budget) to rewrite it up until delivery to a customer. It's pretty frustrating, as it's my fault, but budget and tight deadlines forced my hand.
  • bigredhdl 1989 days ago
    So, since we are on this subject: does anyone have advice for wrangling spaghetti Qt apps into submission? We have two of them here, and of course their creators aren't here anymore, and we find it very difficult for developers who weren't involved in their development to grok all the shared state.
  • icedchai 1988 days ago
    Server code written in C++ that called into a lower level C library. There was a 4000 line case statement that handled message replay. I am not sure why it wasn't broken out into functions, but it was "sensitive" code and refactoring it was considered a no go.
  • chad_strategic 1989 days ago
    Drupal 8
    • pmarreck 1989 days ago
      I did one project in Drupal. What a heaping pile of shit unfit for any actual developer and not someone's wife who moonlights as a small-business homepage developer. Never again.
    • chad_strategic 1989 days ago
      I never worked in Drupal 7, but I can't imagine it was much better.
      • lowry 1989 days ago
        Drupal 6 was actually a pretty consistent hooks-based system that exploited the fact that PHP has a fast and feature-rich implementation of the function lookup table.

        Drupal 7 started the migration to object-oriented code and was halfway through the messy rewrite when it was released. Drupal 8 finished where Drupal 7 left. That's where the majority of the developers left as well (pun intended).
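        A minimal Python analogue of that hooks style, where modules "implement" a hook just by defining a function with the right name and the core finds it by lookup (Drupal's real API differs; this is only the shape of the idea):

```python
# Modules "implement" a hook by defining a function named <module>_<hook>;
# the core discovers implementations by name lookup, here via globals().
def blog_node_view(node):
    return f"blog renders {node}"

def forum_node_view(node):
    return f"forum renders {node}"

MODULES = ("blog", "forum", "poll")   # 'poll' implements nothing

def invoke_all(hook, *args):
    results = []
    for module in MODULES:
        impl = globals().get(f"{module}_{hook}")
        if impl:                      # modules without the hook are skipped
            results.append(impl(*args))
    return results

print(invoke_all("node_view", "page-1"))
```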

  • UziTech 1986 days ago
    My first PHP app was an invoicing application. It is ~3500 lines of PHP in a single file, most of which is a few heredocs of JavaScript to be inlined on the client side. I haven't touched the code in 10 years. I still use it for my freelance invoicing.
  • voldacar 1988 days ago
    As a budding programmer (hobbyist), what can I do not to fall into the mistakes others talk about in the comments? Is there any particular set of mental habits that could lessen the chance of spaghetti occurring, or is it just a matter of when rather than if?
    • sundarurfriend 1988 days ago
      In practice, it's more a matter of mentality and willingness than of any specific knowledge about code maintenance. All you need is a couple of articles or a single book about writing clean, maintainable code - after that, the majority of the benefit comes from simply maintaining that long-term thinking and not slipping into a "let's just get this frigging done and go for a beer" mentality. Yes, there's always more you can learn to better structure your code, but simply being aware and _wanting_ to write readable code consistently takes you above 75% of the code out there.
    • ijuhoor 1988 days ago
      All these comments are good. Also, ALWAYS watch out for the 'quick win': when you do something quick and dirty, mark it for refactoring (i.e., tech debt). When your list of quick-and-dirty hacks builds up, go and clean it out. Do not accumulate tech debt; try to keep it low, or you will pay full price later (see the examples up here).

      Another great tip that saved me from tons of refactoring: "The wrong abstraction is worse than code duplication." Meaning that sometimes duplicated code is better than trying to build the wrong architecture (as long as you mark it in your quick-and-dirty list).

      • superhuzza 1988 days ago
        You don't just pay full price on technical debt, you pay it back with interest!
    • perlgeek 1988 days ago
      The most important thing is to be on the lookout for spaghetti code (or other big chunks of hard-to-maintain code), and to do something about it.

      There's lots of material on how to refactor code (improving its structure without changing functionality), including blogs, books and video lectures.

    • adrianN 1988 days ago
      The best way to learn how not to write spaghetti code is working on spaghetti somebody else wrote. Every big project has some corner with sub-par code quality, so you could volunteer to clean something up, e.g. in Libreoffice or Firefox.
    • letientai299 1988 days ago
      Here are some rules I use to keep my code maintainable (to me):

      - Lines should not be too long (120 chars max)

      - A function should fit into one page of the screen's viewport, even when your console and debugger take up the bottom third of the screen.

      - Variable names should be pronounceable and longer than 3 letters (to prevent names like `i`, `x`, `s`)

      - Files should not be too long (1000 lines max).

      - Return/Exit/Throw as early as possible.

      - Comment:

          - If an `if` takes more than 2 conditions, it's worth commenting.
      
          - If a function is longer than 10 lines, it's worth commenting.
      
          - All files are worth commenting.
      
          - If you ask yourself "should I add comment", add comment.
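      The "return/exit/throw as early as possible" rule is the one that most visibly flattens code. A before/after sketch in Python (example invented):

```python
def ship_nested(order):
    # Each new validation adds another level of nesting.
    if order:
        if order.get("paid"):
            if order.get("in_stock"):
                return "shipped"
    return "rejected"

def ship_early(order):
    # Guard clauses: reject and return as early as possible,
    # keeping the happy path flat at the bottom.
    if not order:
        return "rejected"
    if not order.get("paid"):
        return "rejected"
    if not order.get("in_stock"):
        return "rejected"
    return "shipped"

order = {"paid": True, "in_stock": True}
print(ship_nested(order), ship_early(order))   # same answers, flatter code
```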
    • burning_hamster 1988 days ago
      I think the best starting point is

      Robert Martin, Clean Code.

      Most people who have read the book (and it is a classic, so many have) swear by it. I most certainly do.

  • En_gr_Student 1989 days ago
    Windows Vista. It is practically a virus, and ran ~5x slower than XP on the same hardware.
    • arethuza 1989 days ago
      It wasn't even consistent - a colleague had weird networking errors on Vista and I didn't, even though we had identical laptops.

      Microsoft support was basically "yes it does that sometimes".

  • hnruss 1988 days ago
    Largest amount? 13k lines in a single Java file, none of which was boilerplate. Who knows how many lines were in the whole project... a million seems about right.
  • atilaneves 1988 days ago
    C functions that span over 10k lines with 46 parameters.
  • jedimastert 1989 days ago
    Not as big as some of the examples here, but still kind of interesting. I worked for a city government, and the entire backend code for our work-bids site was a few hundred files of PHP written with some sort of Dreamweaver framework that was deprecated years ago. Old enough that the entire site had to be run on PHP 5.7. I ended up rewriting the entire thing with Laravel (the framework of choice for the other webdev that was in the IT dept). It really didn't take that long; he just had too much on his plate to worry about it before I showed up.
  • spsrich 1989 days ago
    Has to be iTunes. What a bloated pile of crap that is.
    • samfriedman 1989 days ago
      any entertaining examples from your time working on it?
  • brianpgordon 1988 days ago
    Once upon a time, there was a search product and one of the data sources that it could search was a Solr/Lucene database. This should be no problem, since search is what Solr does. It should be as simple as passing the user's query through to Solr and then reading the response. The problem was, it was important to know exactly which parts of any matched records were relevant to the search.

    The Guy Before Me™ decided that the best way to implement this would be to split the user's search into individual words, perform a separate search query through Solr's HTTP API for each individual word, and then do a bunch of very clever and complex post-processing on the result sets to combine them into a single set of results.

    This led to endless headaches due to horrible performance. Imagine if you wanted to implement web search this way. How would you synthesize the results for the search "boston plumbers" given the search results for "boston" and the search results for "plumbers?" You would need tens of thousands of results for each search term to find even one match that applies to both terms. Now scale this to getting hundreds of results to present to the user. Now scale this to n search terms.
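    The per-term approach can be sketched in a few lines of Python. The sketch uses a toy in-memory corpus (the real system went through Solr's HTTP API), but it shows where the client-side intersection work comes from:

```python
# Toy corpus standing in for the Solr index (hypothetical data).
DOCS = {
    1: "boston plumbers directory",
    2: "boston restaurants",
    3: "plumbers of chicago",
}

def search_one_term(term):
    """One separate 'query' per term, returning that term's full result set."""
    return {doc_id for doc_id, text in DOCS.items() if term in text}

def search_naive(query):
    """N independent searches, then client-side intersection of the sets.

    Against a real index, each per-term set can be enormous, and most of it
    must be shipped over before the intersection yields even one match.
    """
    per_term = [search_one_term(term) for term in query.split()]
    return set.intersection(*per_term)

print(search_naive("boston plumbers"))
```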

    I was tasked with making this take less than 8,000ms for a simple query. I spent a while getting to understand how this code worked and building out performance tests so that we could determine how it would behave under load (we didn't have any users yet). The results were pretty grim. I presented two possible options for moving forward:

    1. Move this crazy result-set-intersection logic closer to the data. I could build a custom Solr plugin to do this stuff inside the Solr server so that we didn't need to copy gigantic result sets across the network from Solr to the application server for every query.

    2. Delete ALL of this nonsense because literally exactly what this whole mess of code was meant to accomplish is already implemented in Solr. They call it highlighting. It's one of the marquee features of the program. I can't stress enough that this is precisely, perfectly, unequivocally, the exact thing that all of this complexity was meant to accomplish.

    My manager thought it would be a shame to throw away all of that very expensive code and lose the flexibility of an in-house solution. So we went with option one. I spent the next month writing a Solr plugin that reproduced the original logic. It was still slow as mud so I sharded the data across multiple Lucene servers and distributed the algorithm across them with a map/reduce sort of scheme.

    In the end, it all worked great. It was fully ten times slower than the solution already built into Solr, but it worked.

    The startup later ran out of runway trying to build a big-data-sized in-memory distributed database from scratch to speed up search. The founder (also the lead developer while I was there) insisted that everyone use raw C-style arrays and a custom in-house hash table implementation because he thought STL was too slow. Basically, "not invented here" was in the DNA of that company. I'm surprised we even used commodity hardware and didn't design some kind of in-house search coprocessor that would do everything in silicon.

    • mooreds 1988 days ago
      ... it all "worked great". Wow, what a mess.
  • bayindirh 1988 days ago
    Obligatory XKCD: https://xkcd.com/303/
  • shapiro92 1988 days ago
    Huge ecommerce company. Code written from the 90s in .NET was ported into PHP using an auto-library... nothing else to say
  • sigi45 1989 days ago
    ah my godness that brings back bad memories... :|

    I was on a project where every new developer said "I have never seen such bad code". Seriously! Three new people said it, and so did I.

    - Bad testability

    - No tests

    - Three nasty hidden bugs surfaced in just one year

    - ...

    I had very little trust in that code, and it was not much fun. On the upside: cleaning it up was a great feeling.

  • waleedsaud 1986 days ago
    Thank you for the question it gave us so much knowledge in the replies.

    Wish you all the best

  • zerr 1988 days ago
    A million lines of mostly 90s style C++ codebase - works like a charm :)
  • sonaltr 1989 days ago
    One of the startups I worked at had 3 home-grown PHP frameworks - all messed up in all the wrong ways - and that was just the backend. We had a homegrown framework written in jQuery for the frontend as well. Never again.
    • nickthemagicman 1989 days ago
      And probably making a ton of money so biz didn't want to replace it.
  • Jaruzel 1988 days ago
    Every line of code I've written over the past 30 years.
  • lumost 1989 days ago
    100k lines of semi-generated, semi-hand-rolled Java/C
  • admin1234 1988 days ago
    I made scones with the wrong flour once. (¿)
  • cybervegan 1988 days ago
    Amavisd. About 11,000 lines of crufty Perl.
  • sandwell 1988 days ago
    The most insidious cases are those where bad processes are developed around bad software, leading to a vicious cycle of co-dependency.

    I once worked for a reasonably large business that ran all of its invoicing and stock management through an MS Access project. The whole thing was wacky. Half the business logic was implemented in VBA. The other half was stored procedures, but there was no obvious pattern (predictably, anything that involved a task the DB was optimised for was written in VBA). "Deployment" consisted of saving to a network drive and waiting until everyone opened it again in the morning. Data integrity seemed optional and inconsistent. The symbol naming convention could be described as cryptographic. The only documentation was a comment over each function stating the original developer's name and a timestamp. It had a proud splash screen stating "Developed by Dave Davidson" that was shown for 5 seconds on startup - this was of course completely simulated and there was no reason to have a splash screen. I could never quite fathom the magnitude of this guy's delusions of grandeur, or why he would want to put his name to it.

    The worst part about it all was that, for the most part, it worked. The parts that didn't work were well known to the people using it and worked around. So processes were developed around it, and over time these became so deeply ingrained in the teams using it that they couldn't imagine working any other way. Most of this consisted of taking telephone orders, printing off the invoices that were generated in Access, and then rekeying that information into an accounting system (for efficiency purposes, of course, these printed copies were passed to another team member with notes scribbled on the original document; mistakes were commonplace and accepted as a cost of business).

    Part of my role was to implement a web-based ordering system. We did a reasonably good job, but of course had plenty of our own WTFs. The biggest pain was integrating it with one particular team. They could not imagine a process that did not involve printing off orders and rekeying. After a while we realised that the reason we got so much pushback was that, once fully integrated, our system would make half of the team members redundant.

    With support from management we went ahead, and over time the wrongs were righted. When I left there was still a deep level of distrust in the new system. Mistakes that were daily occurrences in the old world were "proof that the new system won't work". Orders were still printed "just to make sure I don't lose it". New bugs were treated as if the sky were falling. I would spend more time managing expectations than writing code. But we got there in the end.

    Buggy software can be fixed. Buggy humans are an entirely different kettle of fish.

  • bitwize 1989 days ago
    Windows.
  • ivthreadp110 1988 days ago
    SAS Viya. Enough said.
  • bigiain 1989 days ago
    git ls-files | xargs wc -l

    <puts face in palms, starts weeping gently>

    • k7f 1988 days ago
      cloc .
  • weatherlight 1989 days ago
    The Internet.
  • jbigelow76 1989 days ago
    While not related to LOC, warranted nonetheless: https://xkcd.com/2030/
  • aerovistae 1989 days ago
    The entire Athenahealth stack.
  • superdex 1988 days ago
    if it works, it's not bad code
    • k_ 1988 days ago
      I don't think I want to work with you, ever :)
  • PavlovsCat 1989 days ago
    My own CMS. Here's the function signature of the heart of it, which depending on circumstance may call itself recursively.

        function a_nodes_list_trees(
        	$site_url,
        	$base_url,
        	$mode,
        	$site_id,
        	$max_subtree_depth,
        	$nlevels,
        	$node_id = 0,
        	$tree_id = 0,
        	$nleft = 0,
        	$nright = 0,
        	$nlevel = 0,
        	$current_subtree_depth = 0,
        	$flags = 0,
        	$order = 'tree',
        	$inline = false, /* needed when loading stuff via ajax*/
        	$skipped_types = [],
        	$items_per_page = 10,
        	$min_tag_count = 2,
        	$min_author_count = 2,
        	$is_site_root = false,
        	$skip_pagination = false,
        	$ignore_404 = false,
        	$skip_display_options_and_batch_ops = false,
        	$display_bottom_pagination = true)
    
    As you can tell from the default values, it started out having 6 arguments. And it has things like this in it:

        	$mysql_the_rest = 'FROM
        			'.DBTP.'node n
        				LEFT JOIN
        					'.DBTP.'node n2
        						ON
        							n.tree_id = n2.id
        						AND
        							n2.perm_view & ' . A_PERMS . '
        			'.$mysql_join.'
        		WHERE
        			'.$mysql_where2.'
        			'.$mysql_where.'
        			n.perm_view & ' . A_PERMS . '
        		AND
        			n.site_id = ' . $site_id . '
        		';
        	$total_count = $A->db->fetch_one_column('c', ' SELECT COUNT(*) c '.$mysql_the_rest);
    
    ... you know? The whole CMS is 20k lines of PHP, with HTML, PHP, and MySQL all happily living together in the same files (it's not that I don't have templates, I just have plenty of HTML in the PHP, too).

    Yet it works like a charm; PHP updates even made it faster, and I can use it for everything I've needed so far, and use its output in a variety of ways. I still want to rewrite it, but it seems like a lot of work just to shave off a few ms and have nicer code, with the same result for the visitor, and I'd also have to write something that migrates the content. I suspect with enough content it will slow down, and then I'll think about the next iteration. But it's still a mixture of pride, plain happiness to have it, and groaning whenever I fix a bug or add a feature.

    • d0m 1989 days ago
      An easy refactor for your function is to use a "Builder" for your params, i.e.:

      options = NodeBuilder .author({ min: 2 }) .flag({ .. }) .pagination({ .. })

      function a_nodes_list_trees(options) { ... }

      Because in the end, if your function is fast and working correctly, it's fine even though it's a little bit messy. The problem is more all the code calling this function and having to pass dozens of params in the right order, and then it's a pain to start adding/changing parameters. Also, when calling this function, you probably need a bunch of temporary variables to "build" all the params; all those temporary variables could instead live inside that builder object.
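      As a minimal sketch of this builder idea in PHP (all names here — `NodeOptions`, the default values — are invented for illustration, not taken from the actual CMS):

```php
<?php
// Hypothetical builder for the ~24 positional parameters: callers set
// only what they need, defaults cover the rest, and the god function
// receives a single options array.
class NodeOptions
{
    private $opts;

    public function __construct(array $defaults = [])
    {
        // Array union: keys in $defaults win over the built-in fallbacks.
        $this->opts = $defaults + [
            'min_author_count' => 2,
            'items_per_page'   => 10,
            'skip_pagination'  => false,
        ];
    }

    public function author(array $a)
    {
        $this->opts['min_author_count'] = $a['min'] ?? 2;
        return $this; // fluent interface: each setter returns $this
    }

    public function pagination(array $p)
    {
        $this->opts = array_merge($this->opts, $p);
        return $this;
    }

    public function build(): array
    {
        return $this->opts;
    }
}

// Stand-in for the real function, which would do the tree listing.
function a_nodes_list_trees(array $options): string
{
    return "listing {$options['items_per_page']} items per page";
}

$options = (new NodeOptions())
    ->author(['min' => 2])
    ->pagination(['items_per_page' => 25])
    ->build();

echo a_nodes_list_trees($options), "\n"; // listing 25 items per page
```

      The point is that call sites no longer need to count positions or pass placeholder defaults just to reach the one argument they care about.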

      • PavlovsCat 1988 days ago
        I think I'll just use arrays: one for the caller, and one global array that holds the default values. Fortunately that function isn't called in that many places.
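        The arrays-with-defaults approach described here can be sketched like this (the key names and defaults are illustrative guesses based on the signature above, not the real CMS code):

```php
<?php
// One global array of defaults; callers pass only the keys they
// want to override.
$A_NODE_DEFAULTS = [
    'node_id'         => 0,
    'order'           => 'tree',
    'items_per_page'  => 10,
    'min_tag_count'   => 2,
    'skip_pagination' => false,
];

function a_nodes_list_trees(array $opts)
{
    global $A_NODE_DEFAULTS;
    // PHP's array union: keys on the left (the caller's) win,
    // everything else falls back to the defaults.
    $opts = $opts + $A_NODE_DEFAULTS;
    // ... the actual tree-listing work would go here ...
    return $opts;
}

$resolved = a_nodes_list_trees(['items_per_page' => 25, 'order' => 'date']);
// items_per_page => 25, order => 'date', min_tag_count falls back to 2
```

        Compared to the builder, this is less typed but needs no extra class, and adding a new option is just one new key in the defaults array.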

        Making and then really using the CMS helped me figure out what I would want in my next CMS, and just keeping the whole in mind from the start would probably make it a lot better by itself. My thinking is that the longer I put that off, the more languages will have improved and the more I'll hopefully have learned in the meantime ^^

    • pmarreck 1989 days ago
        You're benefiting from an accurate mental model of this code (since you wrote it and work on it), and as soon as someone else has to work on this thing they are going to curse you.

        So it might be job security, but it's also vulnerable to the hit-by-a-bus problem.

      • PavlovsCat 1988 days ago
        When I started out, I wanted to open-source it; it had an installer and everything... but then I realized it had become kind of a little monster and wisely cancelled that plan. I'm not that irresponsible or mean :)
    • sonnyblarney 1989 days ago
      My god man that is beautiful. I'm going to put that on a poster.
    • abledon 1989 days ago
      I feel your account's bio perfectly matches the spirit of your post's content here haha!
      • PavlovsCat 1989 days ago
        The quotes in my bio are by people who thought things through, while the CMS exploded in scope while I made it, so I'm not quite sure what you mean. Can you elaborate?
        • abledon 1989 days ago
          The verbosity and wall of text
          • PavlovsCat 1989 days ago
            Maybe, but at least it's more or less clear what I mean right away :P A million words conveying one unit of meaning are still more efficient than ten words conveying no meaning.

            I just thought about that when I saw a youtube comment saying "stolen" in response to a funny joke. I wondered, do they mean they intend to "steal it" because it's a good joke, or did they mean the person who posted the joke stole it from somewhere? Nobody will ever know, at least I for one won't sign in just to ask.

            I notice that since mobile devices, a lot of "communication" on the web these days is kinda like the "small talk" from Kevin from The Office US.

            https://www.youtube.com/watch?v=_K-L9uhsBLM

            If I hadn't asked what you meant, I and anyone else who read your comment would have had their own interpretation of it. I see a lot of comments like that on HN, where you would have to ask "what do you mean?" because it's totally unclear. A variation is stating something that is factually true but doesn't really refute anything; the commenter clearly seems to mean something by stating that triviality, but doesn't say what it is. It's like a dog whistle, except not one for others to hear: only the posters themselves know what they mean. Count me out.

            • abledon 1989 days ago
              "Maybe, but at least it's more or less clear what I mean right away :P A million words conveying one unit of meaning are still more efficient than ten words conveying no meaning."

              No need for 'at least'. I'm not criticizing the length; it's funny because it's another form of life and way of expression that others use. It brings me joy when I see patterns in life expressed in many forms, one being someone who is verbose writing a hilarious god function, then inadvertently backing up that 'digital persona' posting with a god-function-esque bio lol!

  • hanselot 1988 days ago
    How much code is in Windows?
  • excalibur 1989 days ago
    Windows