I would rather they deliver a working chunk and just log.error when missing information is present.
Working in the manner they do is causing them to miss deadlines with nothing to show. I'd be OK if they missed deadlines but delivered an app that handled 50% of what it's supposed to do, with clear documentation of what it doesn't do; instead, we're getting to the finish line with no workable product.
I would rather they deliver a building and just let it collapse when an edge case occurs.
Working in the manner they do is causing them to miss deadlines with nothing to show. I'd be OK if they missed deadlines but the building were already 50% usable, with clear documentation of what it doesn't do; instead, we're getting to the finish line with no usable building yet.
Dang, a moderator, is rather vigilant about this sort of thing.
But for sure. The tactic I'd use is to get an MVP working and keep a ticket for the edge cases. But really low-quality devs will push janky code. Good devs will stop and make it solid. Great devs pick their battles, hit some deadlines, and fix the major corner cases.
https://www.businessinsider.com/when-amazon-launched-a-bug-a...
There's certainly a rational argument to be made that a project's progress is important and lines of "acceptable quality" should be drawn somewhere, but people are often not rational and respond to other things, and that's probably where your problem lies, IMO.
For example: I used to work on a team that maintained an enterprise UNIX kernel, and we had a space-shuttle-like culture of obsessive failure prevention using meticulous process and checks, because the cost of a failure could mean [m|b]illions of dollars at our customer sites. I now create startups, and the incentives are the absolute opposite: I need to get _something_, _anything_ out the door as soon as possible, because the most important question I need to answer is "does anyone want this?"
The reasons for either approach are not technical, and the source of your frustration, I suspect, is deeper than "convincing developers" (though I look forward to hearing what you have to say).
EDIT: this post also expands a bit on the carrots/sticks I was referring to above: https://www.verica.io/inhumanity-of-root-cause-analysis/
If something fails six months after release, who ends up paying the penalty?
Are your developers on-call for production issues? Will they be woken up at 3 in the morning, expected to solve customer issues caused by one of those corner cases? Or if it's someone else on call, will your developers experience the wrath of the person who _was_?
If a production issue occurs, does the person who produced that code get pulled off whatever new work they are now doing, because "they're the best person to fix that issue"? Even if they've "moved on and up" and their new work is more interesting or prestigious than the project that has the issue?
Is their failure publicized across the whole business?
If data is corrupted because of a corner case, do your developers have to go through tedious processes to repair that data (with angry customers causing frustrated managers to breathe down their necks)?
Do they have to defend and justify to all of their peers why their code broke production because of a "corner case" that they were fully aware of and chose not to handle?
And even more importantly, if they build the 80% first, what guarantee do they have that you will let them write the last 20%, and not just move on to the next thing? Consider - if they build the corner cases first, they've guaranteed that they'll get all the scoped work done (because you won't ship without the other 80%), but if they front load that highly visible 80%, they will almost certainly get pulled off the project before they get to handle those corner cases.
All of these things are very strong motivators for teams to do exactly what you're asking them not to do. As a (presumably) product focused person, your reward happens as soon as your product is in customers' hands. You are rewarded for a product that is released quickly. But developers are frequently penalized for that.
When you ask your development team to focus on releasing incomplete software, you're asking them to treat _your_ success as more important than _theirs_.
That doesn't mean that what they're doing is necessarily right. You need to set up an environment where your teams' success is aligned to the thing you most want to have happen. Change the rewards and penalties in your company, and behavior will change itself.
Or, failing that, negotiate a compromise position between your desire to get something to market, and their desire not to continue paying the price for that speed for the rest of their tenure at the company.
The job of developing software is to take requirements and find the inconsistencies at every level of detail. The software is the design, not the requirements.
All software teams miss deadlines; it's a natural consequence of the fact that developing software is an exploratory process.
If you have developers requesting that specs be 100% perfect, then you have either (a) devs who are too junior and lack a senior mentor, or (b) devs who have somehow gotten the idea that any imperfection, misinterpretation, or bug will reflect poorly on them, so they're trying to pass the buck. I see the latter a lot, and it's usually a systemic culture problem.
If the edge cases arise out of contradictory or unsatisfiable requirements, it might make more sense to refine the requirements than to defer the difficult part of the implementation.
In short, this should be resolved by having open conversations with the team.
As a side note, do not blame devs for a project failure. If they did the work they were handed, and the project doesn't come together, the project leadership needs to own that. There is nothing that will trash morale on a dev team more than leadership throwing blame downhill.
I work on software that deals with millions of dollars daily; mistakes would be very costly, and our team spans multiple time zones. Two things my team has going for it: first, our lead dev understands the business case (10 years of industry experience, not just tech); second, when the dev team runs into an edge case, our default is to email the interested parties (product owner, UX, QA) to say "I ran into this edge case that isn't covered by the stated requirements; please open a work item and prioritize accordingly". The lines of communication between Dev/UX/QA/Product need to be very open and judgment-free to deliver on an ongoing basis.
This document is also helpful because when the client or management comes to you and says something doesn't work, you can point to it as a known and prioritized problem.
In general, I'd say that desire to cover the edge cases is the better problem to have. I'd rather that than developers who aren't worried about gaps and issues.
Carving up the functionality in a way that can be implemented independently is a delicate process and not solely the responsibility of the developers.
It's also worth keeping in mind that the easiest way to solve a (jigsaw) puzzle is to start with the corner and edge cases.
I worked for a company that briefly had a "bug scoreboard," where QA were rewarded for finding a larger quantity of bugs, regardless of quality. Typos, weird UI things being 1px off (no joke), weird edge-case stuff like "if I load this item into my cart in Chrome, but then change the quantity in IE, and then try to check out in Firefox, it will say I'm not logged in..." The value was next to nil.
As was the company's way, each bug was logged (as is good), but because the bugs were usually found in the test environments, the devs who were actively doing the work felt like they were being directly attacked for focusing on the big picture and NOT the edge cases.
This ultimately led to the QA director being demoted once management realized how much money was being wasted on meaningless stuff rather than spent actually improving the site, but the damage was done: some devs who worked in that environment will never trust QA again.
So, to simplify my question: do your devs feel punished by missing edge cases, or do they feel rewarded by providing functionality?
Also, if developers felt attacked when bugs (however minor) were reported, that's a pretty serious cultural issue, and again it was likely driven by the product manager's failure to actively take ownership of prioritization decisions. It's easy (and probably more common) to end up at the opposite end of the spectrum, where developers rightly complain about having to work with a bug-ridden codebase because new features continually get prioritized over bug fixes.
Bringing all of this back to the OP, I think all of this is still mostly true for handling edge cases. Ultimately it's the product manager's job to decide how robust the application needs to be, and to take ownership of that decision - and to take the blame if deadlines are missed due to dealing with obscure edge cases, or if those edge cases are ignored and come back to bite the company in the ass later.
But a cultural issue is ultimately what I was trying to say, and you kinda nailed it with the idea that cultural issues generally stem from someone not doing their job, or trying too hard to do someone else's.
Tell your devs to come work in banking or finance, they'll feel totally at home. We have 100 clients on a credit card that we sunset 10 years ago? Equally as valid as the 2.3 million clients on the highly profitable new credit card. GRRRRR.
2) Define invariants at the top of functions using assertions (https://en.wikipedia.org/wiki/Assertion_(software_developmen...). Make this a requirement in your style guide, and always do code reviews. Think of this as the MVP of handling edge cases in code.
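A minimal sketch of what that could look like in Python (the function and its rules are hypothetical, just to illustrate the pattern):

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    # Invariants stated up front: fail fast on impossible inputs
    # instead of silently producing a wrong total downstream.
    assert price >= 0, f"price must be non-negative, got {price}"
    assert 0 <= percent <= 100, f"percent must be 0-100, got {percent}"
    return price * (1 - percent / 100)
```

The point is that the edge case is acknowledged in one line at the top, and the failure is loud and immediate rather than quietly corrupting data several layers away.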
3) Use Test-Driven Development (https://en.wikipedia.org/wiki/Test-driven_development) to define the minimum required functionality. Write the high-level integration tests, then give them to the developers to implement against.
4) Reward success periodically by looking at how many tests each developer got passing. This could be in the form of looking at burndown charts during a sprint retrospective (https://en.wikipedia.org/wiki/Scrum_(software_development)).
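For point 3, a small sketch of what "write the test first" might look like in Python; the `Cart` API is hypothetical (in real TDD the test would exist before the implementation, and the stub here is only so the example runs):

```python
import unittest

class Cart:
    """Hypothetical cart the developers would implement against."""
    def __init__(self):
        self.items = {}

    def add(self, sku: str, qty: int = 1):
        # Adding the same SKU twice should accumulate, not overwrite.
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_quantity(self) -> int:
        return sum(self.items.values())

class TestCart(unittest.TestCase):
    # This test defines the required behavior up front; a developer's
    # job is simply to make it pass.
    def test_adding_same_sku_accumulates(self):
        cart = Cart()
        cart.add("ABC", 2)
        cart.add("ABC", 1)
        self.assertEqual(cart.total_quantity(), 3)
```

The tests then double as the scope document: anything not covered by a test is, by definition, not yet required.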
If that is done, I start refactoring. I now know the problem space much better, making it easier to do sweeping changes of interfaces, and because the tests are mostly focused on behavior and the interfaces, that shouldn't require much work.
When I'm happy with the design and the basics are working, I start on the edge cases and refactor the internal implementation with a focus on maintainability.
So I think your approach should be to teach your colleagues to first just make it work.
That said, if you are a team lead, I think you should consider the psychology of your developers. Why are they so focused on the edge cases? Does the business blame them when things go wrong? Are the stories over-promising, leaving the developers too little room for interpretation to focus on just the happy path?
If that is the case, you should definitely be more involved when the features get specified. If everyone is clear about which edge cases are out of scope, it will lift a burden from the person coding the feature.
Also, perhaps you should move those developers to projects where a more rigid approach fits. I know I'm pretty terrible at hacking something together in a couple of hours, but I'm the person who works on it for a week so that it's stable, working, and tested. Different kinds of developers for different kinds of projects.
Just make sure you encourage the developers to work the way you like, and to be there when the shit hits the fan because an edge case turns out not to be as edgy as they expected.
Looks like you did not set your expectations correctly with your engineering counterpart.
Are the remaining 20% (by the way, a fifth of the functionality missing feels huge) really "edge cases" to them? If these are behaviors they personally can't accept in a shipping product, I'd totally see devs ignoring arbitrary deadlines.
For instance, I don't think most people would want to ship potentially data-destroying code, even with a PO on their back pestering for a release of the rest. And a defense mechanism for that would be to cripple pieces of the main scenario until the critical cases are covered, if they can't otherwise negotiate that.
I am basing that on the fact that it’s a recurring event and from specific devs.
> I would rather they deliver a working chunk and just log.error when missing information is present.
Another way to look at this: they might be "afraid" of doing log.error and thus trying to overpolish; it could be a symptom of a lack in internal alignment on how to support a customer or meet customer expectations when the code hits that log.error line.
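To make the "log.error and keep going" idea concrete, here's a minimal Python sketch (the invoice function and field names are made up for illustration): the missing-information case is logged loudly and a documented fallback is used, instead of either crashing or being over-engineered away.

```python
import logging

logger = logging.getLogger(__name__)

def render_invoice(order: dict) -> str:
    """Render a one-line invoice; tolerate missing optional fields."""
    customer = order.get("customer_name")
    if customer is None:
        # The edge case is acknowledged, recorded, and survivable:
        # log it so support can follow up, then use a safe default.
        logger.error("order %s missing customer_name; using placeholder",
                     order.get("id", "<unknown>"))
        customer = "UNKNOWN CUSTOMER"
    return f"Invoice for {customer}: {order.get('total', 0):.2f}"
```

For this to feel safe, though, the team needs agreement on what happens after that log line fires, which is exactly the alignment gap described above.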
Sounds like a scoping problem.
I pushed the teams to implement user story mapping for each project and cut the fat to deliver it on time. The argument with the developers was basically: "I know you want to plan ahead and put shiny things on your resume; you can do that if you get the following 3 releases out first."