I don't know how to solve it, but it's insane. Especially for teams with high churn (typical in Silicon Valley, where everybody constantly leaves for better opportunities), the code review culture changes from month to month depending on who the latest "super picky specialist" on the team is, with terrible dips in productivity.
The managers are not very involved in the code review process themselves, and they pat themselves on the back just for seeing "very active and healthy code reviews", as measured in review comments.
The problem is that code that works 100% functionally, with proper tests and good readability, gets blocked for weeks by __completely__ unproductive people who do nothing but drive a personal agenda of demanding abstract refactorings. Everyone in software should have learned by now that fancy abstractions don't provide any benefit, since they won't solve your future problems anyway. Just refactor where you would otherwise face physical duplication of code/logic, and you'll be 100% good.
I've been a victim of this myself lately. I'm known for writing code that is as simple as it can be while maintaining functionality, scalability, and correct handling of corner cases: I don't do abstract interfaces for the sake of it, and I couldn't care less about factories, ...
Sadly, it's likely forcing me to look for opportunities outside my current company.
Nitpicky code standards should be the realm of automated sniffers and fixers.
Beyond that, there is no magic wand here. You have to do the work to create a good culture of code reviews on your team. This isn't science; it's your engineering lead promoting good review practices and cracking down on harmful ones.
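To make the "automated sniffers and fixers" point concrete: a minimal sketch of what offloading style nitpicks to tooling can look like, assuming a Python codebase using the pre-commit tool (the pinned revisions here are illustrative, not recommendations):

```yaml
# Hypothetical .pre-commit-config.yaml at the repo root.
# `pre-commit install` wires these into git hooks, so formatting
# and lint nitpicks are settled before a human ever reviews the diff.
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0          # autoformatter: ends debates about layout
    hooks:
      - id: black
  - repo: https://github.com/pycqa/flake8
    rev: 7.0.0           # linter: flags unused imports, style issues
    hooks:
      - id: flake8
```

The same effect can be had with whatever formatter/linter is idiomatic for your stack; the point is that once a machine enforces the rule, it stops being a reviewer's personal crusade.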
My experience with code reviews is that they always became adversarial. I tried to implement them in my group many times, and they didn't provide much value.
We finally did it by using GitLab and requiring that every merge request to master be reviewed by two team members. This provided code review on every piece of code instead of only on select pieces. GitLab lets us enforce it easily, but the process can be done anywhere.
The nit-picky code reviewer gets overwhelmed defending their point of view and starts being a little more selective. The lazy programmer gets overwhelmed with constant change requests and actually starts paying more attention. Somewhere along the way, it has actually balanced out.
Discussions have allowed us to identify poor practices, learn new techniques, and implement some standards.
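For anyone wanting to enforce the two-approval rule described above programmatically: a hedged sketch using GitLab's merge request approvals API. The host, `$PROJECT_ID`, and `$TOKEN` are placeholders, and project-level approval rules are only available on some GitLab tiers, so check your instance before relying on this:

```shell
# Sketch: create a project-level approval rule requiring two approvals
# on every merge request. "Two reviewers" is an arbitrary rule name.
curl --request POST \
  --header "PRIVATE-TOKEN: $TOKEN" \
  --data "name=Two reviewers" \
  --data "approvals_required=2" \
  "https://gitlab.example.com/api/v4/projects/$PROJECT_ID/approval_rules"
```

The same setting is reachable through the project's Settings > Merge requests UI, which is how most teams configure it in practice.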
Where I saw it succeed, it was a pretty good team: pretty much everyone was a reasonable person. The way they did it was that before you checked into any branch (other than master), you needed a code review from one other person. So you'd go into the chat room, ask for the review, and either get a :shipit: or an email/notification on GitHub about whatever was flagged. That worked pretty well, but I don't know if the main reason it worked is that it was a reasonable group of people.
Great question, and thank you for making the rest of us think about it, because code review can become unnecessary or burdensome when used inappropriately.
I personally eschew trivial or otherwise stupid company standards in favor of standards set by the language/community (think PEP 8). When I do criticize, I lead with “what was your reasoning behind <this block of code>?”, because you get a more honest response, and sometimes you'll get the author to admit “oh, I was just being lazy there” or ask “do you have a better way of writing that?”. Code reviews are a necessary evil, but how you approach them can change the outcome.
> I've literally seen companies consider any change made in a code review to be a bug fixed so you get folks who pride themselves on flagging things in code review even though there is zero, and often negative, substance to the flag.
Do you not push back when this happens? You should be allowed to ignore some review comments. I think marking comments with priorities helps (e.g. must fix vs. could fix). When the reviewer knows you'll be reviewing their work as well, this can help stop them being unfair.
- Have a code review wiki document you can refer to for clear-cut cases: things you have to fix without debate (e.g. naming convention issues) and things that are more subjective and don't have to be fixed (e.g. there can be more than one return per function).
- Code review suggestions should come with priority levels. Ignoring "nice to fix" problems shouldn't be a big deal if they're too time consuming and have little merit. This also helps draw attention to people that flag low merit issues as high priority.
- Use linter tools and autoformatters to stop people wasting time on certain nitpicks.
- Ongoing team discussions and training with concrete examples of which code review nitpicks aren't good uses of time might help.
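The priority-level idea above can be made concrete with a labeling convention for review comments. These particular labels are just one possibility, loosely in the style of the "Conventional Comments" convention; the exact names matter less than agreeing on which labels block a merge:

```
issue(blocking): this leaks the file handle on the error path
suggestion(non-blocking): this loop could be a single list comprehension
nit(non-blocking): `retry_count` would be clearer than `cnt`
```

With labels like these, "I'm ignoring your two non-blocking nits" is a normal outcome rather than a fight, and a reviewer who habitually marks nits as blocking becomes visible.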
If you've got team members flagging nitpicks as high priority and giving no consideration to how much time certain fixes will take, then it's a training and experience problem you're going to have to address.
Can you give some examples of code review feedback you're not finding useful?
As a team, have a working agreement. What do good code and good code reviews look like? What does it mean for a task to be ready to start, and what is your team's definition of done (level of testing, sign-off on a code review, actually deployed, etc.)? These should be actual discussions, and the outcomes should probably be written down.
When it comes to a particular code review comment, it is either reasonable or unreasonable. If you disagree, say so, with an argument. Referring to a working agreement or standard helps. Either that is enough or it is not. If it is not and the issue remains unresolved, grab a more senior person (hopefully on the same team) to weigh in. Maybe the comment needs to slide for now and become a ticket in the backlog. If somehow it is still not resolved, bring in your manager.