- If it's a straightforward pure function, I tend to use Hypothesis to pile in random inputs. It's a great tool for coming up with cases humans wouldn't normally think of (I also wrote a blog post about this).
- If there was a show-stopping bug in production, I like to go back and write a test that covers that case (even if it was fixed).
- However in most cases, I cover off the typical use cases and I find, as another commenter says, it's "good enough".
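The Hypothesis approach from the first bullet can be sketched like this (a minimal example; the run-length codec is a made-up stand-in for whatever pure function you're testing):

```python
from hypothesis import given, strategies as st

def run_length_encode(s):
    """Encode a string as (char, count) pairs."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

# Hypothesis generates hundreds of random strings, including edge
# cases a human rarely tries (empty string, unicode, long runs),
# and shrinks any failure to a minimal counterexample.
@given(st.text())
def test_roundtrip(s):
    assert run_length_decode(run_length_encode(s)) == s
```

A round-trip property like this is often enough on its own: you don't have to enumerate inputs, just state an invariant and let the tool hunt for violations.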
This is exactly why TDD came about. It was designed to help you a) write only the code you need and b) write tests that validate the code does what it's supposed to do.
In terms of the latter, you can boil it down to two aspects.
a) Does the code function correctly given positive input?
b) Does the code fail in a deterministic manner? E.g., if an age is a negative number, does it throw an appropriate error?
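Those two aspects can be written as a pair of tests (a sketch; `set_age` and its `ValueError` contract are assumptions for illustration, not anyone's actual API):

```python
def set_age(person, age):
    # Aspect b: fail deterministically on invalid input.
    if age < 0:
        raise ValueError(f"age must be non-negative, got {age}")
    person["age"] = age
    return person

# a) Positive input: the code functions correctly.
def test_valid_age():
    assert set_age({}, 30) == {"age": 30}

# b) Invalid input: it fails the same way every time.
def test_negative_age_raises():
    try:
        set_age({}, -5)
    except ValueError as e:
        assert "non-negative" in str(e)
    else:
        assert False, "expected ValueError"
```

With pytest you'd express the second test more tersely via `pytest.raises(ValueError)`, but the shape is the same.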
In the failure case, it can get trickier when you start to introduce things like database persistence or API calls. In this case, you should mock out these dependencies and set up scenarios to make sure your code also fails in a deterministic manner.
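For example, mocking the persistence layer lets you exercise the failure path without a real database (a sketch using stdlib `unittest.mock`; `fetch_user`, `UserNotFound`, and the session's `get` interface are made-up stand-ins):

```python
from unittest.mock import Mock

class UserNotFound(Exception):
    pass

def fetch_user(session, user_id):
    # Translate a low-level "no row" result into a deterministic,
    # domain-specific failure the caller can rely on.
    row = session.get(user_id)
    if row is None:
        raise UserNotFound(f"no user with id {user_id}")
    return row

def test_missing_user_fails_deterministically():
    # Mock the database session so the error path runs with no real DB.
    session = Mock()
    session.get.return_value = None  # simulate "not in the database"
    try:
        fetch_user(session, 42)
    except UserNotFound:
        pass
    else:
        assert False, "expected UserNotFound"
```

The same mock, with `return_value` set to a row, covers the happy path, so both branches are tested without any infrastructure.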
Keep in mind, what's really important is to make sure your code does only what's required and nothing more. Keep things as simple as possible, but no simpler. ;)
Usually it's two groups: 1. regular usage as seen now or expected in the future, 2. exercising all existing error paths. I'm pretty confident that this gives reasonable coverage and that there are still going to be some missed patterns. It's "good enough".