Ask HN: Better tools for the software requirements / scoping phase?

Many software projects seem to fail or go over budget because of poorly defined or changing specifications. It seems we have excellent tools to manage the delivery of software, but less so at the design/scoping phase. What are your thoughts on using better tools that leverage, say, Domain Driven Design or BDD (Behaviour Driven Development) that would engage end-users early on?

181 points | by castdoctor 2042 days ago

38 comments

  • mjul 2041 days ago
    The underlying assumption with requirements is often not stated explicitly: that people _can know_ everything in detail, in advance.

    If that is the case, surely we can find better ways to uncover the requirements, and better tooling will help solve the problem.

    Experience tells me that people don’t know everything beforehand. Thus the key assumption is not valid.

    Then the question we should be asking is: how do we most efficiently bring people to where they discover and understand the requirements?

    Experience tells me people are much better at giving concrete, specific feedback to a running system than to an abstract requirements document.

    Hence iterative development.

    In essence requirements are not a document by a process.

    • beat 2041 days ago
      This is largely wrong.

      No, people cannot know "everything in detail, in advance". That doesn't mean that they don't know anything. They know a lot. Nobody with any actual experience in requirements-gathering expects 100% perfection. So the underlying assumption about the underlying assumption is wrong.

      After 20+ years in this industry, I'm long past believing the conventional wisdom that running systems are the best way to gather better requirements. It's not agile. Think about it. A key part of agile is to push everything to the left as much as possible - to catch problems as early as possible in the cycle. What's earlier than before you write the code at all? Writing code to find out what's wrong with it from a requirements perspective is really inefficient.

      This isn't to say we shouldn't get working code out there as quickly as possible, or that feedback from working systems has no value. But this idea that it's the only way to get meaningful requirements, that's just BS.

      Requirements aren't a document, or a process - they are a system.

      • gilbetron 2041 days ago
        The OP's statement isn't wrong, or even largely wrong; it is largely right. They weren't suggesting skipping all requirements gathering, just skipping the idea that you can do one round of requirements gathering and have everything you need to develop the entire system.

        Your pushback toward waterfall development is driving me crazy - we already tried that for decades, and you can only get it to (kinda) work with a ridiculous investment that only makes sense for incredibly important systems, like launching a billion-dollar rocket. And even then, you need iteration, just a more careful, sandboxed type of iteration.

        • beat 2041 days ago
          So the OP was fighting a strawman. Like I said, nobody out in the real world believes in pure waterfall anymore. Everyone knows that, realistically, a completely up-front requirements process doesn't do enough.

          But the quote agile unquote response is every bit as reactionary, and does happen out in the real world... "You guys start writing code, I'll go get the requirements". Writing code is expensive, even in an agile process. Just because you're doing two-week iterations or continuous delivery doesn't mean you no longer waste time and effort on dead ends. You're just dying by a thousand cuts.

          Turning to user reactions to working code as the only requirements-gathering mechanism is stupid. Stupid. It ignores a ton of requirements issues that are not only complex, but dangerous to screw up - financial behavior, SOX and HIPAA compliance and other regulatory issues, and more. A mistake in initial implementation can cost millions of dollars, company reputation, and worse.

          And again, what the OP is proposing here is not agile. Just because you're tossing code over the wall in short sprints doesn't mean you're agile. Agile means catching potential problems as early as possible in the process. Catching problems with requirements is almost always going to be cheaper than catching them by writing code and finding out that the code is wrong.

          Agile requirements gathering is a thing, yo.

          • dolessdrugs 2040 days ago
            "nobody out in the real world believes in pure waterfall anymore"

            I'd allow that this might be true within large software organizations, but this is definitely not the case where most software is written: in non-software organizations.

            • beat 2040 days ago
              I'm reminded of something a certain high-end ops director (responsible for a DevOps push at a Fortune 50) would tell his CxOs... "No matter what business you think you're in, you're in IT now".

              I work mostly in big enterprise companies. Whatever business they are in, they are "large software organizations", and they have decades of experience creating and evolving processes to suit the times and available tech. You don't need to be Google to be an IT company. Any insurance company, any big-box retailer is an IT company. They know how to do this stuff, believe it or not.

              footnote: Don't judge big enterprise companies by what they were doing 20, 30 years ago. They were state of the art then, and they're often state of the art now.

              • dolessdrugs 2039 days ago
                It's a question of support though, in a non-software-selling org, as a dev, you are a cost center, not a profit center, so getting the tools or other things you need is not a business priority; in fact, any additional costs in the cost centers are only losses on the balance sheet. In a company that sells software (primarily), you are the profit center, so anything that can be done to facilitate your work is supported, as it drives the bottom line.

                footnote: just because they produce lots of software doesn't mean they've ever learned how to do it right. Ford is still a car company, Chase is still a financial company, Schlumberger is still an oilfield service company, despite all of them producing more software than some Software Companies.

                • beat 2038 days ago
                  Do you actually work in these environments, or are you making assumptions?

                  Resource contention is a problem in pure software companies, too. I used to work for a small pure software company in rapid growth. What did we have? Legacy code nightmares that were as bad as or worse than anything I've seen in the Fortune 500 (like building the core product on antique Borland C++ where there were only 9 licenses in the company and new licenses were no longer for sale and hadn't been for years, while the UI was written in Java Swing with a table kit from an out-of-business vendor). And almost all growth money went to expanding sales staff... engineering got screwed. They sold (and sell) terrible quality software, and they make a fortune at it.

                  Meanwhile, I'm at a massive health care company, and they hired me because they're committed to radical improvement in how the already-okay software is built and deployed. We're working hard on a serious continuous integration pipeline, and I expect us to be as good as anyone in a year - our reference points for "Why can't we do this?" are companies like Netflix. We're after that level of smoothness in the process, and we'll get there, or at least get close.

                  Don't let conventional wisdom tell you who is and isn't good at software.

                  edit: I'm reminded of going to a meetup about selling to the enterprise in Silicon Valley some years ago, and the twenty-something Stanford crowd were convinced that because these big companies have big failures, that they must suck. I pointed out that if you worked at a startup with $50M revenue, they'd be pretty successful, right? I've worked on several projects with annual development budgets larger than that. It's expensive and risky because they're operating at scales that most of the HN crowd can't even comprehend.

      • adrianN 2041 days ago
        I've found that writing (pseudo-)code is absolutely necessary to find problems in the requirements. Often enough the requirements are self-contradictory or just contain too many unnecessary corner cases. I've seen requirements that sounded really simple in the requirement doc, but turned out to be extremely hard to test because they implicitly defined a state machine with dozens of transitions.
        • beat 2041 days ago
          Yes, definitely. This applies a lot to infrastructure issues, too. But pseudocode or extremely simple test case code can do this a lot better than tossing something into production to find out if it sucks.

          I suspect a lot of the HN hostility to proper requirements analysis is coming from writing trivial systems.

        • carlmr 2041 days ago
          Especially because English is often a terrible language to express requirements.
          • adrianN 2040 days ago
            Especially when it's written by people who aren't native speakers but work in a "we're a modern company now" environment.
      • ako 2040 days ago
        Core to agile is small incremental releases. Most technological innovation is done agile: in small releasable increments. For example, we've been releasing small improvements for cars and planes for over 100 years. Every year a new model, with small improvements.

        Humans are really bad at designing and building large improvements from paper requirements. Small improvements mean most of the requirements are already known and tested, and only small parts are uncertain.

        The real problem is that testing requirements is really hard. You need to build the product to test the requirement. That's why most industries have an intermediate between requirements and product that is testable: this could be small scale prototypes, but more and more it's a virtual model that can be tested through software algorithms.

        If we want to make real progress in the software industry, we need to move beyond word documents with requirements that are by definition not testable, to testable software models that don't require a full implementation. Low-code, model driven development is an example where this is happening.

    • ealexhudson 2041 days ago
      The point of gathering requirements is not to "know everything". It's often taken that way because people like to blame the requirements: "We didn't build that because no-one gave us a requirement". You can have three reactions to that:

      1. accept the blame - beef up the requirements gathering process, attempt to gather ever more

      2. reject the blame - move to an agile process where everything is learned on-the-hoof

      3. reject the premise.

      People tend to either land in 1) or 2) above, but I think 3) is the correct place. Gathering requirements is about figuring out how much we know, identifying what we don't know, and working the risks. On some projects the risk is that we don't know enough about what customers really need (= agile engagement required). On others, the risk is literally all about delivery.

      Iterative development is great at addressing some risks. It doesn't address other risks at all; it's not well-suited in many instances where the information known up-front is substantial, or where it's difficult to engage users.

      The key is to recognise what problems you need to manage, and choose a suitable methodology to do it.

    • tootie 2041 days ago
      This is well-known and is the entire reason that agile exists. A lot of teams will write stories and run sprints and think they're doing it right. The actual definition is the ability to flex on scope and timing to meet changing requirements and priorities. Long-term estimation is just never going to be accurate, so setting a date and fixed scope is just automatically doomed.

      The strategy I use is to scope out as much as you can up front. A list of high-level user stories. Give these a rough prioritization (MoSCoW works) and some rough estimates on each. Now estimate your velocity with a few possible team configurations. Also, assume your backlog will grow about 10% as you go when new stuff is uncovered.

      Now if you need to schedule a launch or set a budget, set it deep into the non-mandatory features. If everything goes off the rails, you have cushion to avert failure. If everything goes ok, you will deliver a richer product. You'll also be able to track very accurately as you go how close you are to the plan week by week.
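
      A minimal sketch of that arithmetic in Python, with made-up stories and numbers (the MoSCoW buckets, the velocity guess, and the ~10% backlog growth are the assumptions described above):

        # Rough schedule math: prioritized stories, an estimated velocity,
        # and a ~10% allowance for backlog growth. All numbers are illustrative.
        stories = [  # (name, MoSCoW priority, estimate in points)
            ("User login",         "must",    8),
            ("Import legacy data", "must",   13),
            ("Basic reporting",    "should",  8),
            ("Custom dashboards",  "could",  13),
        ]

        velocity_per_sprint = 10   # points per two-week sprint (estimated)
        backlog_growth = 1.10      # assume ~10% of the scope is still undiscovered

        mandatory = sum(p for _, prio, p in stories if prio == "must")
        stretch = sum(p for _, prio, p in stories if prio in ("should", "could"))
        sprints_needed = (mandatory + stretch) * backlog_growth / velocity_per_sprint

        print(f"Mandatory: {mandatory} pts, stretch: {stretch} pts")
        print(f"Plan for ~{sprints_needed:.1f} sprints and set the launch date")
        print("deep into the non-mandatory features so there is cushion.")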

    • anotheryou 2041 days ago
      It really depends on what you do. If you have a well-defined problem that is complex enough to require quite some work, then I would say: do as little as possible to be able to imagine as much of the workflow as possible, while also staying as flexible as possible.

      If a problem is less complex or can be released iteratively, then that's the lean way to do it, and you also get good learning from it. But often, to solve the problem even just a bit, you already need a load of stuff to be taken care of.

      Key to me is to stay in text or cheap click-dummies for long enough. Depending on the complexity I go through several stages:

      generally:

      - Always probe for details if you can imagine some already; you are trying to know as much as possible as soon as possible. File it all into a "to take care of later" list at least; better yet, sort it in properly right away.

      - write down everything (maybe others can remember everything, fine too :) )

      - change and refine whenever something new has to be taken care of. It will always be easier to do now than in the next step.

      1. gather high level requirements with the stakeholders

      2. sketch a rough workflow. I usually do a nested list.

      3. write down a complete workflow

      4. now you might know what you need, so define a rough UI, technology, interfaces

      5. still in text: write your concept so someone else understands it

      6. talk everyone involved through the concept (first stakeholders, then devs)

      7. double-check if you can't simplify or leave out anything, at least for a first version

      8. if necessary: do mockups, designs, schemas

      9. only now start to program (for difficult stuff a prototype)

      - On top it might be helpful to have a small checklist depending on your needs with entries like "reporting?, testing?, support?"

    • mjdease 2041 days ago
      Agreed, I've been on many projects where a client only had vague requirements and useful clarification only came in response to seeing the app.

      This is reasonable, it's human, but does anyone have a good approach to applying an iterative development approach on fixed price contracts?

      I've been on many fixed price projects that are "agile" in name only. General issues I've observed:

      - Iteration on requirements becomes confrontational (pay for change request) making it difficult to build a good product as we all learn what does/doesn't work for users throughout development.

      - Upfront estimate is inaccurate causing time pressure on development resulting in rushed work which negatively impacts code quality and team learning.

      The traditional answer is to have the client commit to specific requirements and hold them to it.

      But what I'd really like to figure out is a way to acknowledge evolution of requirements will happen so we can work _with_ clients to build great products.

      I struggle because this seems incompatible with fixed price and large companies seem to only want to do fixed price.

      • repeek 2041 days ago
        In a past life as a project manager for a custom software consultancy, we had a rule of thumb based on experience that a functional prototype[0] takes about 25% of a total project's budget.

        Whether the contract was time & materials (preferred) or fixed bid, that 25% rule worked well as an early indicator that the project was likely to go over budget. It allowed us to have early conversations with the client about cutting scope or expanding the budget to cover the unknown complexity.

        We'd also dramatically increase our rate to reduce our risk on fixed price projects.

        [0] Barebones, ugly, but functional from end-to-end.
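
        A toy check of that rule of thumb as an early-warning signal (the 25% threshold is the heuristic above; the budget figures are invented):

          # If the end-to-end functional prototype has consumed notably more than
          # ~25% of the budget, the project is trending over. Figures are made up.
          total_budget = 200_000        # contracted project budget
          spent_on_prototype = 62_000   # spend at "barebones but end-to-end"

          share = spent_on_prototype / total_budget
          projected_total = spent_on_prototype / 0.25  # extrapolate via the 25% rule

          print(f"Prototype consumed {share:.0%} of budget,")
          print(f"projecting a total spend of ~{projected_total:,.0f}.")
          if share > 0.25:
              print("Early warning: discuss cutting scope or expanding the budget.")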

    • crdoconnor 2041 days ago
      >If that is the case, surely we can find better ways to uncover the requirements, and better tooling will help solve the problem. Experience tells me that people don’t know everything beforehand.

      This is highly context dependent. In some domains the business basically just needs to throw random things at the wall and see what sticks because nobody can know what they really "need" until it's tested in front of a customer. In other domains they have an incredibly detailed view of the behavior that they need.

      In some businesses they're in a weird situation of just not being very good at figuring out what they need and an improved process would save tons of time and money.

      In others nobody even thinks about any of this because their requirements are so simple and obvious nobody needs to.

      Iterative development is the answer a lot of the time, but it's not a panacea, and it's not a replacement for fixing a broken requirements process when that's what's needed.

    • pytyper2 2041 days ago
      Sounds like you don't like to be measured.
    • chrisweekly 2041 days ago
      >"requirements are not a document by a process"

      by -> but

      • chrisweekly 2041 days ago
        FTR this wasn't a grammar nit; these two different words in this context have opposite meaning! So, as a former teacher of English as a Second Language, I offered the substitution in order to help make the meaning clear.

        Misguided downvote, imho. (shrug)

        • blackbrokkoli 2041 days ago
          How do they have opposite meaning? Can I use "by" to declare A and B as opposites in such a context? (Trying to learn, I never heard the usage of "by" like this)
          • chrisweekly 2041 days ago
            "not a (document [created] by a process)"

            vs

            "not a document but (rather) a process"

            In the former (as OP typed it), it's grammatically suspect but also seems to imply a missing "created" like I inserted. In that case it'd be ambiguous whether the OP feels requirements are not documents, or perhaps they are documents, just not ones "by a process".

            In the latter, which I took to be the intended meaning, OP is saying "requirements are a process, not a document."

            The "not X but Y" is grammatical and clear, equivalent to (boolean pseudocode) "Y && !X".

    • Pamar 2041 days ago
      Sad upvote... :(
  • hyperpallium 2041 days ago
    This was the inspiration for "Extreme Programming":

    You make a minimal cost mockup ASAP, for the client to try out, to see if it's what they wanted. Clients can't appreciate requirements, so (like, actual) architects make a scale model first.

    The alternative, of doing requirements as a separate phase, was ridiculed as the "waterfall model". In reality, there's interactions between the so-called phases of req, spec, design, code, test, maintain etc.

    The truth is that understanding the problem is most of the work - not just for the programming problem, but for the business problem. It's just difficult. And when the world changes, you just have to change with it. If you try to anticipate what's next, you'll invariably get it wrong.

    Because the world is changing faster, software development has gone from beautifully engineered software that just keeps working, to slapped-together solutions, and a fulltime team that runs alongside, slapping patches on patches, continuously.

    What's the point of spending the time and money on beautiful engineering, if it's going to be scrapped tomorrow anyway (or even before it's finished)?

    The only hope is for tools, libraries, frameworks and languages, that address lower levels of abstraction, that change less frequently. This isn't all good, but the JVM is one example.

    • Tade0 2041 days ago
      I wonder how effective it would be to first make the programmers carry out all the business processes manually, so that out of frustration they would start automating things?

      A so-called FDD - frustration driven development.

      • thedancollins 2041 days ago
        I like it! Some of my most effective code was written that way but the end result of such development tends to be users that feel "cut out" of the process, as if we did not value their input. And with the wrong users in that dynamic you could be turning water to wine and they will still find things to complain about.
        • Tade0 2041 days ago
          The other day I was tasked with extracting over 500 field names and associated labels from an Excel file (it was sort of a form template).

          Obviously not something I graduated from University for, so I wrote a script that did it for me.

          FDD all the way.

    • mariopt 2041 days ago
      > What's the point of spending the time and money on beautiful engineering, if it's going to be scrapped tomorrow anyway (or even before it's finished)?

      Good luck maintaining that code. Don't forget about the team: a messy codebase and/or poor requirements will, eventually, break a team's morale, until the day new devs come along and demand a refactor.

      I'm starting to love the idea of a waterfall model for software. People can still iterate on the problem with wireframes (even high-fidelity ones), by speaking with customers, doing market research, etc.

      There is a lot of value, time and money saved when you have a decent level of accuracy in the specification.

      • namdnay 2041 days ago
        It really depends on what you're building. There's a reason civil engineering isn't "Agile" - this also applies to major IT systems. A tiny difference in requirements can have massive impacts.

        Honestly, in these cases, there's nothing better than waterfall, partly to save development time, but mostly for contractual protection: If a small change can cost millions more (and from experience, they can), you need to know who is responsible for paying those millions...

        • geezerjay 2041 days ago
          > There's a reason civil engineering isn't "Agile"

          There are plenty of reasons why engineering isn't "agile" and why agile is only usable in software development. One of the main reasons is that software development projects manage a single resource: man-hours. Building the wrong or inadequate solution has its cost, but rebuilding something from scratch does not require additional resources to be allocated to the project: just keep the same team working on it and results will pop up.

          Engineering projects are very different from software development projects. Materials and components are the driving cost of a project, and there is plenty of stuff that must be done right from the very start. It's unthinkable to scrap a machine or building or tunnel midway through, and it's inconceivable that some disaster happens at all. Engineering either gets it right at every single stage of a project or there are serious consequences to deal with, which in some cases might even be criminal charges. If for some reason a prototype crashes during development then the project might be forced to shut down.

          • zolthrowaway 2041 days ago
            I 100% agree with you. Software is "soft". Making changes to a code base is very cheap. Iteration just makes sense in software even for critical systems. If you build a suboptimal bridge, you have to live with it. If you write suboptimal software, you can test the actual product and fix it before you even ship. You can't really test a bridge outside of simulations. GP's comparison is apples to oranges in a lot of ways.
            • geezerjay 2041 days ago
              > You can't really test a bridge outside of simulations.

              Just to pick a few nits, actually bridges are indeed tested during and after construction. It used to be standard practice to do test runs with near limit loads to inaugurate bridges, consisting of getting a fleet of military vehicles or water tankers to cross the bridge while surveyors monitored the bridge's response.

              Nowadays non-destructive testing techniques are favoured for a number of reasons, including the fact that sensor rigs can also be used throughout the structure's lifetime to help determine its fatigue life.

        • ovi256 2041 days ago
          >you need to know who is responsible for paying those millions

          In the macro view from 10000 meters high, it's always the client that pays. Even if he gets away without paying once, he'll pay it back (and more!) in stuffing on other projects. Because without that, the service provider goes under.

          Of course, occasionally, inexperienced service providers that do not stuff their projects do go under. This puts evolutionary pressure on the ecosystem so that the surviving service providers are selected over many such trials to know to stuff projects and invoices.

      • groestl 2041 days ago
        Software that is used, changes, because successful software influences the world around it, which in turn changes its requirements. So typically, even if a team gets it right the first time (which by itself requires enormous effort), the once perfectly specified requirements will have changed shortly after release.

        Edit: sibling is right as well, it depends on what you're building. Sometimes there is nothing better than waterfall.

        • edejong 2041 days ago
          > Software that is used, changes,

          Yes, but... A good model can support larger changes than a bad model. For example, a well designed relational model can support iterative change better than a slapped together system using CSV. So does a system that supports a consistent mental model for the end-user.

          This is the fundamental skill: abstraction. Finding the right abstractions, sustaining simplicity while opening up the ability to change, is an extremely difficult and hard-won skill.

          Unfortunately, due to a constant influx of new developers, its value is underappreciated. Requirements for this skill are (non-exhaustive): excellent communicative abilities, combined with a predilection for logical reasoning, good technical understanding, some psychological understanding, and perseverance for when a model proves unsuccessful.

          From my experience, the best systems we have designed and implemented started with long sessions at the white-board, often followed by some tech 'spikes' [1].

          [1] https://en.wikipedia.org/wiki/Spike_(software_development)

          • thedancollins 2041 days ago
            Survivorship bias/fooled by randomness. If a system can be abstracted to a model that can account for the bizarre stuff that the business throws at it, then the domain you are operating in is either Simple or slightly Complicated (as defined by the Cynefin framework). The real problem lies in Complex/Chaotic.
        • gt2 2041 days ago
          Beautifully said. Anyone can understand this when it's put this way.
      • thedancollins 2041 days ago
        Users want plausible deniability. And some need it. With iteration-based development you have the potential to avoid the "yeah, it meets the spec, but it is not what we wanted" problem.
      • crdoconnor 2041 days ago
        This was the exact attitude I held for the first 5 years of my career until I realized that:

        * There's a ridiculous amount of code that either gets tossed or goes almost totally unused, and time spent on beautiful engineering for that is 100% waste. This is, I think, why languages with incredibly strict type systems that try to 'force' good design up front never end up being widely deployed. They make prototyping way too expensive.

        * Only juniors demand to rewrite or refactor anything. Being a really good developer means being able to figure out creative ways to fix the most enormous technical debt incrementally and knowing that you shouldn't ever have to 'demand' to do that because you can just do it.

        * Pretty much every project I've ever seen that does anything useful starts out with nasty code because the programmers who worry about making it beautiful before it's useful never make it useful to begin with.

        * People who dream up architecture before writing the code dream up the worst architectures. The best architectures, on the other hand, happen as a result of a lot of time spent tediously refactoring code.

  • duncanawoods 2041 days ago
    You might be interested in https://thorny.io - an interactive notebook for decision-making.

    Its purpose is to capture design rationale and bring it to life so that as you refine your reasoning, the changes ripple through your decisions. It can help communicate complicated design decisions that can be awkward to capture in prose.

    It's intended to be very low-friction, so more like markdown than old requirements management or decision support systems. My dream is that it can help us tackle decisions so complicated we usually give up, e.g. when multiple decisions impact each other.

    I am currently in beta and I'd love to talk to anyone interested in the topic. Please drop me a line at duncan at thorny.io.

    • SyneRyder 2041 days ago
      That looks intriguing, though I'd be more interested if it was a desktop app rather than a website. It reminds me a bit of Soulver, though Soulver is focused more on numbers & calculations: https://www.acqualia.com/soulver/
      • duncanawoods 2041 days ago
        Soulver is very cool! We have lots of tools for calculations; it's funny we have so few for reasoning.

        If beta-users find the web-app useful then I can release native apps, but I must resist until it's proven!

        • SyneRyder 2040 days ago
          Ahhh, excellent! I definitely agree with proving the demand for an app before spending time expanding it. Great strategy!
    • chrisweekly 2041 days ago
      Looks interesting -- and actually answers the OP's question! Thanks!
    • Pamar 2041 days ago
      I second the preference for a desktop app - if I decide to use it in my job, I can either ask the company to pay for a licence, or even just buy it myself if the price is not too high... but getting approval for having company data on someone else's systems would be an uphill battle.
      • duncanawoods 2041 days ago
        Yep, I understand. It's the plan.

        The stage I'm at is that it is experimental and I need to iterate features/usability to make it useful before going native. It has been written offline-first with tech suitable for shipping cross-platform native apps.

  • mienski 2041 days ago
    As someone that spends most of their days at the design/scoping phase, then watching the product go into development where it encounters constant misunderstandings and gotchas that the customer never told us about or never realised themselves, I completely agree that there is a huge disconnect between the scoping and requirements phase and the build phase.

    I almost feel guilty that my design and scoping work is effectively useless to a developer: all my mockup layouts have to be built from the ground up, and my requirements aren't actionable in any way unless the devs feel like reading them (I try to keep them as succinct as possible, but the nature of working for clients also means I have to be somewhat specific so that people know when they should actually pay us). I've looked into things like Cucumber (https://cucumber.io/) so that my requirements can actually be compiled as tests, but adoption is slow and arduous, and all I'm really doing is adding more work to a dev.

    My latest line of thinking is that I need a way to show the user interface, and then the data flow and logic all the way back through the system (usually a back-end DB or a customer legacy system). It's vital that these are presented together, hence my current process is interactive mock-ups built in Sketch (https://sketchapp.com/) and hosted on Invision (https://www.invisionapp.com/) which allows the customer and developer to click around and see it on a mobile screen so they really get a feel for it. Finally I couple that with a BPMN diagram which has swim lanes not just for the traditional system swim lanes, but also for a user (i.e. User taps Submit) and for a user interface (i.e. shows the mockup screen that is displayed), and then the logic flows down through the diagram. (e.g. User, User Interface, Mobile App Logic, Server Logic, Server DB, etc.)

    • beaker52 2041 days ago
      Can I suggest you involve your entire team in the discovery phase, meaning talking to the customer and interacting with the problem from the very start?

      Shared understanding (passing on what you've learnt on behalf of the rest of the team doesn't count) can help everyone (including the customer!) understand the whys, whats and hows of their own problem space. Then as a whole team you can come up with and vet the solution - qualified by the deeper understanding everyone in the team has.

      This deeper understanding will help you arrive at a better solution, wasting less time building the wrong thing and reduce the friction in "handing-off" to development (because there isn't a hand off).

      • roel_v 2041 days ago
        This here. Having someone who doesn't have the social skills of a house plant talk to the customers, and then having the neckbeards in the back room code it up, is not the way to go, and nowadays I refuse to work with partners who work like this (or with subcontractors who want to force this workflow on me, and this isn't limited to software dev either - I no longer work with e.g. building contractors who work like this). And yes, I understand the point of specialization and division of labor, and yes, in the past I too would have much preferred to just be the guy being handed perfect specs and never having to talk to anyone from the outside, and then when things go wrong there is always 'the spec' to blame. But it just doesn't work that way. In the past, being a 'business programmer' was called being an 'analyst-programmer'. I never really understood that, until I got to the point where I realized that the actual 'programming' (i.e., 'coding') is the easy part; it's the 'analysis' of the problem (well, and the formulation of a solution to the problem that comes out of that analysis) that is the key to delivering value. But still, the relationship between the problem understanding, the solution and the implementation of that solution is so close that you just cannot completely separate them.

        I interviewed a bunch of firms for building a website last year; nothing particularly fancy. Several of them (at least the big firms) sent in a guy who would always start off explaining their 'process' (all fancy sounding), that process essentially being 'you tell me what your problem is, then we will together design a solution, and then I'll hand you off to our project manager back at the office who will just have the programmers implement it; you'll never even have to see these guys face to face!'. Uh sure, probably to 95% of your customers, naive and gullible because of lack of experience, that sounds great and like it's an advantage, but no way I'm going to get caught with my pants down 6 months from now because there was some aspect we didn't cover in the 'design' but the programmers coded it up like that anyway because hey, it says so in the spec, right?

    • pbowyer 2041 days ago
      > My latest line of thinking is that I need a way to show the user interface, and then the data flow and logic all the way back through the system (usually a back-end DB or a customer legacy system).

      Yes, this. It's what I've wanted as a software engineer and as a product manager. A unified overview.

      I find the project materials become siloed, and cross-referencing is time-consuming, error prone, and not done enough. There is often a written spec, and a separate set of wireframes, with clickable areas to show interactions. Those two need to be merged together. Then a bunch of JIRA tickets, again separate. These need to be associated with the wireframes and the spec.

      I have yet to find any system that makes this work.

  • ian0 2041 days ago
    Not a software tool, but I was recently introduced to the design-sprint[1] methodology from Google and found it helped a lot with the requirements-gathering and speccing phases. It was also light and easily implementable - they have some resources there too.

    [1] https://designsprintkit.withgoogle.com/

  • andymoe 2041 days ago
    We do this thing we call Discovery & Framing at the kickoff of a new project. It usually lasts 3-6 weeks and involves design, product, engineering and someone with the power to make decisions on the spot. We call this role a product owner.

    It’s a good way to get stuff out of folks’ heads, validate the problem and solution with users, and end up with some stuff for everyone to execute on by the end of the process. You can google the term for more detailed descriptions.

    As for tools, we’ve been having success with RealtimeBoard and Pivotal Tracker (maybe gives away where I work) and of course a ton of sticky notes.

    • andymoe 2041 days ago
      Since this got some upvotes: if you want a taste of a process like this condensed into an hour, you can visit:

      https://pivotal.io/office-hours

      (Free, staffed by a balanced team over a lunch hour)

  • fbonawiede 2041 days ago
    I'm a co-founder of a Swedish startup, and we have built a tool targeting product owners and product managers. It aims to eliminate the endless email threads and disconnected workflows that are common in product development. It sort of replaces Word, Google Docs, and Confluence, and integrates with Jira and Slack.

    I would be happy to give you a quick demo!

    https://www.delibr.com/demo

    • dotancohen 2041 days ago
      I am seriously interested, however your website has almost no information and I'll not book a 30-minute appointment at a later time to see how it works. The only informative part of the whole website seemed to be this screenshot: https://www.delibr.com/img/design/features/section2/xstep1.p...

      Add more screenshots and less "integrates with slack".

      • fbonawiede 2041 days ago
        Thanks for the feedback! We have avoided putting too many screenshots on the webpage since we were making changes all the time initially. However, we are now ready to add screenshots.

        "I'll not book..."? I guess you meant "I'll book..."? =)

        • ken 2041 days ago
          No, nobody wants to spend 30 minutes to see if a vague description of some software might work for them. There's not enough hours in the day. There's about 20 products mentioned here already. That's more than a whole day of doing nothing but having someone try to sell me something which probably doesn't do what I'm looking for.
          • fbonawiede 2040 days ago
            How about 10 minutes and you’ll get a Delibr T-shirt sent afterwards?
        • dotancohen 2041 days ago
          Actually it is "I'll not book". The time to make an impression on the user is while he is visiting your site.
          • fbonawiede 2040 days ago
            Will you change it to “I’ll book” if I send you a Delibr T-shirt?
            • dotancohen 2040 days ago
              I'll change it to "I'll book" because I see that you are seriously dedicated to the product!

              I don't need the T-Shirt but let's respect each other's time and try to do it in ten minutes. You personally are invited to contact me at "fbonawiede at dotancohen dot com" but please do not add me to any mailing lists. Thank you.

    • pronouncedjerry 2041 days ago
      I was introduced to this last week and I am going to try it on my next project. I see it as superior to Google Docs for this use case.

      edit: looks like you have an admirer https://thorny.io/

      • a_dantorp 2041 days ago
        Google docs on steroids! ;-)
  • contingencies 2041 days ago
    Requirements are almost never perfectly specified. One of the most substantial and hard-learned parts of good systems design is automatically pre-empting, as far as reasonably possible, the probable range and depth of requirements scope shifts over the project lifetime. (Critically, that includes the 90% of a project's lifetime which is post-development maintenance.)

    In short: if you have good people it shouldn't matter! Experienced people will extend the assumed scope beyond the stated requirements without being asked, and either do so efficiently within the resources available to the project or bring an appropriate level of attention to the limitation before it becomes an issue.

    The somewhat less PC and far snarkier explanation, in the immortal words of twitter: Don't pick the right tool for the job, pick the right framework which secures you extra work for being a tool. - @iamdevloper (via http://github.com/globalcitizen/taoup)

  • mysterydip 2041 days ago
    I recently took a model-based systems engineering course that opened my eyes to some helpful methods.

    Use UML, SysML, etc. to diagram out your requirements (starting at a high level, getting more granular as the system matures). Now build a high-level model of your system design (software and hardware if required). Match parts of your system to the requirements. This lets you see gaps in two directions: requirements you haven't addressed (because they aren't connected as "satisfied by" anywhere on your model), and questions to ask for more detail on a given requirement that could drive the design.
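
    That two-direction gap check can be mechanized even without a dedicated SysML tool; a minimal Python sketch (the requirement IDs, component names, and "satisfied by" links below are invented):

      # Cross-check requirements against design elements in both directions.
      # IDs, component names, and links are illustrative only.
      requirements = {"REQ-001": "User can reset password",
                      "REQ-002": "Audit log retained for 90 days",
                      "REQ-003": "Export report as PDF"}

      # component -> requirement IDs it claims to satisfy ("satisfied by" links)
      satisfied_by = {"AuthService":   ["REQ-001"],
                      "ReportBuilder": [],
                      "AuditStore":    ["REQ-002"]}

      covered = {req for reqs in satisfied_by.values() for req in reqs}
      unaddressed = [r for r in requirements if r not in covered]
      unjustified = [c for c, reqs in satisfied_by.items() if not reqs]

      print("Requirements no element satisfies:", unaddressed)  # design gap
      print("Elements tied to no requirement:", unjustified)    # probe for missing detail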

    As others have said, there will always be late requirements that change things, so it won't resolve those. It can, however, show stakeholders how much changing a requirement can cost in terms of rework/time by showing how the model has to change to accommodate.

    The caveat with this is: the model needs to reflect your actual design, and you need to keep it up to date. The times I've used it so far it has been a useful exercise.

    • Pamar 2041 days ago
      UML? Really? Most Business Users want to discuss the UI and (if you are lucky) the data structures, two things that UML is not really very concerned with. In my experience, UML has been a really bad choice to work on requirements for corporate sw.
      • mysterydip 2041 days ago
        Agreed, it's not what business users want to (or should) see. This is an aid for what your team does internally, instead of just lists and documents.
  • BjoernKW 2042 days ago
    I agree that what you describe is a huge problem. I'm not so sure though if that problem can be alleviated by additional tooling. More often than not the cause of this is defective processes and assumptions rather than deficient tools.

    DDD, ubiquitous language and bounded contexts in particular, can be enormously helpful with defining better requirements.

    I'm not so sure about BDD though in this context. While the notion of the customer / product owner writing specifications in this format, in a way that lets developers use those specifications to test their code, sounds great at face value, I have yet to see a project where this is done consistently and continuously.

    Moreover, some types of requirements can be better explained by using diagrams or UI mockups, which doesn't really fit the BDD paradigm.
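
    For anyone who hasn't seen the format in question, a minimal sketch of what such an executable specification tends to look like, here using the Python behave library (behave isn't mentioned in the thread; the scenario wording and the Account class are invented for illustration):

      # features/steps/withdraw_steps.py -- step definitions for a scenario like:
      #
      #   Scenario: Withdraw within balance
      #     Given the account balance is 100
      #     When the customer withdraws 30
      #     Then the remaining balance is 70
      #
      from behave import given, when, then

      class Account:
          def __init__(self, balance):
              self.balance = balance

          def withdraw(self, amount):
              self.balance -= amount

      @given("the account balance is {balance:d}")
      def step_given_balance(context, balance):
          context.account = Account(balance)

      @when("the customer withdraws {amount:d}")
      def step_when_withdraw(context, amount):
          context.account.withdraw(amount)

      @then("the remaining balance is {expected:d}")
      def step_then_balance(context, expected):
          assert context.account.balance == expected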

    • mping 2041 days ago
      Agreed, it's not about the tools, but the process. I've worked with different agile coaches - most of them were crap, but one of them really struck a chord with how he would find flaws in the organisation that would be reflected on the software side. He was really good at devising strategies to overcome this.

      To address the specific question, I think most of the time there's a problem because there is no one that takes care of making sure everyone has the same vision and understanding of what needs to be done. And because sometimes words fail us, I think mockups are a great way of uncovering and sharing requirements.

  • beat 2041 days ago
    It's not free, and it's not simple, but Aha (https://www.aha.io) can be very helpful in figuring out product.

    Engaging end users is a problem from a couple of directions. First, the very idea of "end user" - you need to engage customers, not end users. (Take Facebook, for example... you're not the customer, you're the product. If you build something for Facebook, your customer is Facebook, not Facebook's users.) This situation isn't exactly unusual. In this case, it's the customer's responsibility to gather feedback from end users - and put you in the feedback loop, as needed.

  • jdswain 2041 days ago
    There is a class of tools for this called Requirements Management. They tend to be expensive and clunky tools in my experience, but I haven't used one for quite a few years. Ones I have used are Rational Requisite Pro (IBM) and Doors, which is now apparently an IBM Rational product too. I've worked on a project that used Doors not just for requirements management but also for requirements traceability, a somewhat tedious task that would let us start from a requirement and identify all development documentation that implemented that requirement, and eventually trace down to code if required.

    From a logical point of view I think it makes a lot of sense to start from clearly documented requirements, then work forward to design, implementation, and testing with links back to each requirement. I don't know how testing (at a higher level than unit testing) can really be effective unless testers have requirements documents to use as their starting point. Use Cases go some of the way to providing this information but tend to be a bit less formal. Agile methodologies tend to be less formal than older methodologies that emphasised this kind of process more. A lot of Agile is good, but it does also tend to forget lessons from the past.

  • jimduk 2041 days ago
    People build tools for use. Software projects are an example of this. A lens I find helpful is to view a software project as a "process (or game) of transforming shared and individual understanding (or belief) into a tool or artefact". The project falls short if the understanding is not valid, or the tool is poorly built, or the tool doesn't encapsulate the understanding, or circumstances change and the tool becomes less useful.

    Transferring valid understanding into the final artefact is a key constraint in many projects (reading Goldratt and thinking of this transfer of understanding as a constraint was helpful for me)

    There are many ways to fail. Some of the traditional ways to succeed are

    i) rely on an individual who really understands the desired tool, and who has the authority and skill to communicate and be the final arbiter (including sometimes to write it all themselves). Sometimes this can be also done by a small group.

    ii) write a really clear, well-written, very hard to misinterpret document and get professionals to develop and test this system (before the document goes significantly out of date)

    iii) Run an agile process where the business can describe what they want in small pieces that can be delivered so quickly that little understanding is lost

    Obviously what works is massively contextual, depending on the domain, funding, resource reqts, etc. (Glen Alleman is good on this)

    So I would argue 'good software tools for requirements' are critically dependent on your approach for how you are going to turn 'understanding' into a 'system', and you don't want to worry too much about them until you are happy with your approach. At that point you can start building your 'meta-tools'.

  • a13n 2041 days ago
    I run Canny (https://canny.io), which is a tool that software companies use to keep track of feature requests from their customers.

    One awesome thing you can do is ping everyone interested in a feature, and ask them how they want it to work. See it in action here: https://feedback.canny.io/feature-requests/p/tags. (Scroll down to Sarah's comment on July 20.)

    Whenever we're thinking about building a feature, we ping all the stakeholders. This gives us solid context on how they want it to work, which helps us define our MVP. If we need to follow up for more information, it's easy to do that via email / Intercom.

    Hope this doesn't come off as salesy – I really felt it was relevant.

  • ryanmarsh 2041 days ago
    The solution to a “complex” problem is only knowable after the fact, versus a merely “complicated” problem, which is knowable before the fact. Any sufficiently useful computer program is typically “complex”, though not always.

    That said it is still incredibly helpful for everyone to agree on what we’re building in the present iteration/sprint. You mentioned BDD which can be very helpful when paired with a team practice such as Example Mapping[0].

    Other practices such as DDD can help you bound the problem and define it. Also there are some helpful lessons in Feature Driven Development.

    Source: I teach BDD workshops for a living

    [0] https://cucumber.io/blog/2015/12/08/example-mapping-introduc...

  • Redsquare 2041 days ago
    You can engage all stakeholders via a series of eventstorming sessions http://ziobrando.blogspot.com/2013/11/introducing-event-stor...
    • meh2frdf 2041 days ago
      Engaging isn’t sufficient, it’s a start. What is typically lacking is someone with the intelligence to properly understand the problem and map out a roadmap to get there. There is too often a naive belief that the stakeholders will put you on the right path; they don’t, they just give you information that feeds into the design process. Sure, your solution should be regularly validated with the users; however, in my experience they tend to lack the ability to give a vision which isn’t just an incremental improvement on the current solution.
      • Redsquare 2040 days ago
        Note the word stakeholders, not simply users. Stakeholders includes the team actually delivering, not just the end client/users.
  • motohagiography 2041 days ago
    I've been doing a variation of this for security, as it's the main need that requires abstraction-on-down definition, and it doesn't translate well into agile environments where developers are designing solutions. (qtra.io, in private beta, so no free experience yet)

    The fit problem I'm having is getting the technical threat hunters who populate the market to think about risk and security design, or product/project managers to reason even at a high level about a domain they delegate to technologists.

    Still iterating for fit, but so glad there is a thread of people thinking about architecture.

  • crdoconnor 2041 days ago
    I got frustrated with cucumber and cucumber-esque tools for doing BDD, so I built my own, optimized for programmer usability (strict type system, inheritance built in, sane syntax, etc.):

    http://hitchdev.com/hitchstory

    The time when it was most useful as a "BDD tool" was when I was working with an extremely technical stakeholder who was proposing behavior in a command line tool he wanted.

    I simply wrote 'tests' that described the command line tool's behavior and showed the YAML to him. He corrected the mistakes I'd made by misinterpreting his original requirements and then I built the tool to that spec and when it passed I was confident I'd nailed his requirements.

    QA picked up bugs afterwards but they were all either (quickly rectified) mistakes he'd made in the spec or environment issues. I had zero spec<->programmer communication issues even though (and here's the crazy part) the domain was very complex and I didn't understand it. It had to do with a highly customized software system I didn't understand which enacted some sort of financial process which I also didn't understand.

    Cucumber can do this in theory, but in practice the spec is not sufficiently expressive and the stories end up being ridiculously, unusably vague. Unit tests could also do this in theory I guess, but good fucking luck getting a stakeholder to read them even if you do manage to write them "readably".

    I'm taking this process a step further. Although these YAML specifications were useful for me in the past to collaborate with stakeholders they're still not amazingly readable. For instance, the "YAML inheritance" makes it easy for programmers to maintain but harder for non-technical stakeholders to understand.

    Instead of sacrificing maintainability for readability I created a process to easily generate readable stakeholder documentation from the 'YAML tests'. I used this in the libraries on the website above to generate always-in-sync-and-still-highly-readable documentation.

    I think this could be used to essentially write low level "unit" tests which generate documentation which stakeholders can interpret (e.g. defining the behavior of a complex pricing algorithm) and use that to get a quicker turnaround on understanding, defining and modifying desired behavior in complex domains.
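
    The generate-readable-docs-from-the-spec step is roughly the following (a hypothetical YAML schema and renderer for illustration, not hitchstory's actual format):

      # Render a YAML 'story' into Markdown a stakeholder can review.
      # The schema below is hypothetical, not hitchstory's real format.
      import textwrap
      import yaml  # pip install pyyaml

      SPEC = textwrap.dedent("""
          Withdraw within balance:
            given:
              account balance: 100
            steps:
              - withdraw: 30
            expect:
              remaining balance: 70
      """)

      def render(spec_text):
          lines = []
          for name, story in yaml.safe_load(spec_text).items():
              lines.append(f"## {name}")
              lines.append("Given:")
              lines += [f"- {k}: {v}" for k, v in story["given"].items()]
              lines.append("Steps:")
              lines += [f"- {k} {v}" for step in story["steps"] for k, v in step.items()]
              lines.append("Then:")
              lines += [f"- {k}: {v}" for k, v in story["expect"].items()]
          return "\n".join(lines)

      print(render(SPEC))  # publish this output alongside the other project docs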

  • txime 2041 days ago
    If you don't mind a shameless plug, we'd be happy to invite you on Txime, a collaborative webapp to conduct DDD and especially event storming sessions. http://www.txime.com We're in beta but opening soon.
  • shusson 2041 days ago
    In my experience, focusing on the tools usually ends up with projects failing or going over budget, e.g. "we don't have a product yet, but look at all these shiny cucumber tests!". I think a good start is to have someone on the team who is very user-focused.
  • nickjj 2041 days ago
    I just use a whiteboard or paper. Nothing beats it IMO because at this stage you want to create a brain dump and chances are you'll be making a lot of changes.

    With most programs, you'll spend half your time battling its UI and trying to get around limitations.

  • sailfast 2041 days ago
    Many people have said this but here goes anyway: The tooling will not help you better define the specifications. The tooling will not help you manage changing specifications. You can cover all the bases easily in a free Github project (Edit: pick your web-based tool, basically) without too many issues.

    I would argue that the reason things get called out as poorly defined or changing is that risks are not addressed early, and hypotheses are assumed to be theorems.

    Make sure your teams test the main assumptions early, with actual code if at all possible. That will call out why your stories aren't clear enough.

    Tools are useful, but they won't solve your problem.

  • bpizzi 2041 days ago
    These days I just sketch wireframes in a Google Slides presentation.

    Bonus point for the interactivity: I can share the URL for remote work sessions, and the stakeholders on the other end of the line can see me create/adapt wireframes and notes in real time, while I'm explaining everything on the phone (using anonymous access for those not logged into 'The Google').

    Google Docs has had version management since this year (I think), so I can pinpoint the evolution of the specifications over time. When I'm done I'll just export a PDF and attach it to a "Spec done!" mail to everyone.

  • sigsergv 2041 days ago
    This is a very advanced topic and I think (almost) all “end-users” are absolutely not ready to embrace it. You cannot be completely sure that you and the “end-user” are thinking about the same subject.
  • agentultra 2041 days ago
    I think it depends on the problem domain. In high-risk or regulated industries having requirements and specifications is a prerequisite to shipping. There are plenty of good sources for finding the standard expected formats out there. If you're an IEEE member you'll find the ISO requirements and specifications templates in the library.

    Doing requirements gathering isn't an inherently bad process to adopt even in non-regulated settings. However I believe many developers will have a negative reaction due to prior experiences with, or having heard about, the waterfall method. The key to remember is that requirements don't, and shouldn't, have to come with an estimate. Great requirements demonstrate a thorough understanding of the problem. They're written from the perspective of the end-user.

    Specifications are the dual of requirements and are what drive the implementation that solves the stated problem. Specifications are something I think we need to get better at. We do some unit testing, some integration tests, and occasionally end-to-end user tests... a few of us even get to property tests; but rarely do we write formal, verifiable specifications in a modelling language that can be checked with a model checker. Rarer still do we write proofs as part of our specifications.

    We often elide specifications or we do the minimum necessary to assure correctness. This is to the detriment of your team on a larger project. How do you know your designs don't contain flaws that could have been avoided? How much time are you spending before you realize that the locking mechanism you chose and shipped has a serious error in it that is causing your SLOs to slip?

    For requirements I just use plain old Markdown with pandoc. For specifications I use a mixture of Markdown and TLA+. I use TLA+ for the hard parts of the system that we're unsure about and require correctness for. The rest of the specifications that aren't as interesting I simply use prose. It's a matter of taste but it does require an intuition for abstraction... when to do it and how to think about systems abstractly.

    We could definitely use better tools for TLA+-style specifications, btw. Maybe more libraries that can translate TLA+ state machines into the host language so that we can drive them from our integration tests, etc. Better IDE tooling. Better training.
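
    Until such libraries exist, the hand translation can be fairly small. A rough Python sketch of the idea (the lock state machine and its action names are invented for illustration, not taken from any real spec):

      # Rough sketch: a spec-level state machine hand-translated into the host
      # language so ordinary tests can replay traces against it. The lock
      # example and its action names are invented for illustration.
      TRANSITIONS = {
          ("unlocked", "acquire"): "locked",
          ("locked", "release"): "unlocked",
      }

      def step(state, action):
          """Return the next state, failing loudly on actions the spec forbids."""
          key = (state, action)
          assert key in TRANSITIONS, f"illegal action {action!r} in state {state!r}"
          return TRANSITIONS[key]

      def test_replayed_trace_matches_model():
          # An integration test could feed the real system's event log through
          # the model and fail as soon as the implementation diverges from it.
          state = "unlocked"
          for action in ["acquire", "release", "acquire"]:
              state = step(state, action)
          assert state == "locked"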

  • lifeisstillgood 2041 days ago
    I have been mucking about with the idea of textual prose discussion documents, which hyperlink markdown-style to use cases, which in turn link to tickets in Jira or somesuch.

    Then, as the document is discussed and altered by the owner, the use cases alter and flow with it.

    It's just an attempt to keep a prose discussion in line with everything else in a dev-friendly manner (it can be stored in the docs folder in git and generated onwards, etc).
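
    A rough Python sketch of the glue I have in mind (the Jira key format, link syntax and file names are just placeholders):

      # Sketch: keep a prose discussion doc honest by scanning the Markdown for
      # Jira-style keys and use-case links, emitting an index that a CI step
      # could diff against the tracker. Key format and file names are made up.
      import re

      SAMPLE_DOC = """
      When a customer cancels ([use case: cancel-order](usecases/cancel-order.md)),
      we refund within 24h (PROJ-101) and notify the warehouse (PROJ-102).
      """

      TICKET_RE = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")
      USECASE_RE = re.compile(r"\[use case: ([^\]]+)\]\(([^)]+)\)")

      def build_index(doc):
          # Tickets and use cases mentioned in the prose, ready to cross-check.
          return {
              "tickets": sorted(set(TICKET_RE.findall(doc))),
              "use_cases": [{"name": n, "path": p} for n, p in USECASE_RE.findall(doc)],
          }

      if __name__ == "__main__":
          print(build_index(SAMPLE_DOC))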

    still playing

  • flarg 2041 days ago
    Wireframes for UI design, alongside UML for enterprise data models and process flows, and finally good old truth tables work best in my experience. They bridge the gap between business and development. You'll never get 100% coverage, but you'll get a lot of the way there.
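
    To illustrate the truth-table part, here is one way it can live as an executable artifact, sketched in Python (the discount rule is a made-up example, not from a real project):

      # Sketch: a truth table as an executable artifact -- each row is one
      # combination of business conditions plus the agreed outcome. The
      # discount rule is a made-up example.
      TRUTH_TABLE = [
          # (is_member, order_over_100, coupon_applied) -> discount_pct
          ((False, False, False), 0),
          ((True,  False, False), 5),
          ((False, True,  False), 10),
          ((True,  True,  False), 15),
          ((True,  True,  True),  15),  # coupon doesn't stack, per stakeholders
      ]

      def discount(is_member, over_100, coupon_applied):
          # Coupon intentionally ignored: see the last row of the table.
          return (5 if is_member else 0) + (10 if over_100 else 0)

      def test_truth_table():
          for (member, over, coupon), expected in TRUTH_TABLE:
              assert discount(member, over, coupon) == expected
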
  • tnolet 2041 days ago
    I loved reading "User Story Mapping" and putting it to use. It really changed how I worked. Highly recommended. http://shop.oreilly.com/product/0636920033851.do
  • smartmic 2041 days ago
    Coming back to the question: One possible helpful tool is Doorstop (https://github.com/jacebrowning/doorstop).

    I like the approach of embedding requirements in source code (management).

  • cosinetau 2041 days ago
    Aren't there testing frameworks that were supposed to help solve this problem?

    Maybe you could engage customers qualitatively and translate that information into software requirements and acceptance tests? I'm not entirely clear on how you want to engage these kinds of users.
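
    For example (everything below, including shipping_cost and the 50-euro threshold, is hypothetical), a qualitative statement like "orders over 50 euro should ship for free" translates fairly directly into acceptance tests:

      # Sketch: a qualitative customer statement turned into acceptance tests.
      # The statement, shipping_cost() and the 50-euro threshold are all
      # hypothetical -- the point is that each sentence becomes an assertion.
      # Customer said: "Orders over 50 euro should always ship for free."

      def shipping_cost(order_total_eur):
          return 0.0 if order_total_eur > 50 else 4.95

      def test_orders_over_50_eur_ship_free():
          assert shipping_cost(50.01) == 0.0

      def test_orders_at_or_below_50_eur_pay_standard_shipping():
          assert shipping_cost(50.00) == 4.95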

  • gbtw 2041 days ago
    We did a lot with jstd and used DOORS successfully: https://www.ibm.com/us-en/marketplace/rational-doors
  • tmaly 2041 days ago
    I experience issues with poor requirements every day. The single biggest problem I see is that the people writing the requirements have no idea how to write them. We do not need a tool as much as we need better training.
  • bartkappenburg 2041 days ago
    I always refer people to this analogy on the problem of estimation in software projects, I think it's pretty spot on: http://qr.ae/TUGpbW
  • DanielBMarkham 2041 days ago
    Technical coach here. This is a passion of mine. I've seen far too many teams waste far too much time with horrible, horrible backlogs and tooling systems -- usually set up with the most sincere good intentions, by the way. I care so much about it I wrote a book on managing project information. The key example, repeated throughout the book, is a small team meeting with folks and starting to make stuff people want. (Obligatory link: https://leanpub.com/info-ops )

    I believe that if you can get the small-team scenario working over and over again, scaling will work itself out. So far I have no reason to believe this isn't the case -- and I've applied the principles in the book both to functional coding and program management.

    mjul's comment is the key one: you can't know everything before you start. That doesn't mean you can't know anything. It means that there is a "progressive elaboration" that has to happen on an as-needed basis. Otherwise you're stuck either not knowing enough to get going -- or having created a monster of a tools/information system that ends up running the project instead of the team.

    There are some sanity checks for whatever backlog/requirements system you are using. Instead of my continuing to pitch, I'll just list the things that your system should do no matter what kind of system you have.

    - Handle whatever detail is needed before actual development happens.

    - Be able to "flip around" and start from scratch within an hour or so. (And "starting from scratch" means beginning with nothing and ending with the team starting coding.) While still keeping all the detail from the first item.

    - Be reusable with other teams doing similar work. A backlog/requirements system can't make work fungible, but it can enable better conversations in other teams without being a burden.

    - Limit meetings around organization to under an hour or so. Yep, you gotta have those "meta" meetings from time-to-time and talk about things like release plans. But they shouldn't take over your afternoon.

    - Help the larger organization (if there is one) learn and grow over time. Orgs learn from the bottom-up. Good tools should facilitate this learning.

    - Drive directly to acceptance tests. Good backlogs are testable. Things shouldn't be in there that don't drive tests.

    - Have controls in place to prevent abuse. As soon as you create some tool for the requirements/scoping phase, somebody is going to go all architecture-astronaut and overuse it. There have to be controls to prevent this from happening. The tool should facilitate the work of understanding and scoping, not replace it. (I think even a lot of tool vendors get confused on this one.)

    I even wrote an analysis compiler that demonstrates all of this as part of writing the book. So if you think all of this is impossible -- happy to do a demo.

    There are a few other testable criteria for whatever your requirements/backlog system is, but that should be enough to get you started deciding whether some system is better or worse than another system.

  • slaymaker1907 2041 days ago
    I really like TiddlyWiki since it makes it easy to link everything together and keep it organized.
  • j45 2041 days ago
    You may like a product roadmapping tool like Aha.io
  • wasd884 2041 days ago
    Better colleagues with better brains.