One of the trickier parts of working together on a software team is determining when to call work "finished". I recently helped improve the agile software development process on one of my teams as we started to feel growing pains expanding from two to five developers. We've been working under these general guidelines for a few weeks now, enjoying it significantly more than our previous approach.
Every project requires a different approach to this sort of thing, influenced by the team's experience, the length of the project, personality quirks, and a thousand other factors. So, while this isn't the final word on the subject, it should demonstrate some of the important features of a productive process and some of the mechanisms I like for improving the effectiveness and velocity of software teams.
We run our software development on this team with Pivotal Tracker. Before these changes, we wrote and assigned stories each week in a long Sprint Planning meeting.
Our story acceptance used to happen during weekly Sprint Reviews, which, while lightweight, became inconsistent and error-prone as the project grew. The criterion for acceptance was too often "quickly rush through a demo and hope it doesn't go too badly" (also called the That Looks About Right, or TLAR, method).
We replaced the process with an out-of-band delivery and acceptance process. When finished with a story, the implementing developer writes up instructions for testing and accepting the story, tags a reviewer, and marks the story as “delivered”. The reviewer is responsible for accepting or rejecting the story. All of this happens outside of meetings, giving implementers and reviewers the time and space needed to do a thorough job.
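If you wanted to script the hand-off itself, Pivotal Tracker's v5 REST API can post the acceptance write-up as a comment and flip the story's state to "delivered". A minimal sketch in Python; the project ID, story ID, and API token are placeholders, and actually sending the requests is left to the caller:

```python
# Hypothetical sketch of automating the "delivered" hand-off via the
# Pivotal Tracker v5 API. IDs and token are placeholders.
import json
import urllib.request

API_ROOT = "https://www.pivotaltracker.com/services/v5"


def deliver_story_requests(project_id, story_id, token, instructions):
    """Build two requests: post the testing/acceptance instructions as a
    comment on the story, then mark the story as delivered."""
    headers = {"X-TrackerToken": token, "Content-Type": "application/json"}
    comment = urllib.request.Request(
        f"{API_ROOT}/projects/{project_id}/stories/{story_id}/comments",
        data=json.dumps({"text": instructions}).encode(),
        headers=headers,
        method="POST",
    )
    deliver = urllib.request.Request(
        f"{API_ROOT}/projects/{project_id}/stories/{story_id}",
        data=json.dumps({"current_state": "delivered"}).encode(),
        headers=headers,
        method="PUT",
    )
    return comment, deliver
```

The caller would send each request with `urllib.request.urlopen` (or any HTTP client) and then tag the reviewer in Tracker as usual.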
There is quite a bit of nuance involved in delivering and reviewing stories. Here are some guidelines for each.
Here are some concrete examples of delivery instructions:
Story: Add new cheeses to the database

Verify that more records were added to one of the new tables:

    heroku pg:psql -c "select count(*) from cheeses;"

This returns 59; it previously returned 35. Note that the 59 cheeses match the number of rows in the latest "cheeses.csv" provided by National Cheese Co.
Story: Upgrade Django version because of security vulnerabilities in our current version

See the PR for the changes made to get tests passing after the upgrade: https://github.com/cheesr/api/pull/1

See the build passing on Travis CI: https://travis-ci.com/builds/1
The natural tendency for startup teams is to brush off testing, not necessarily out of professional negligence but because it is easy to forget about while focusing on implementation. Reviewers, as outside observers, are in an ideal position to think critically about whether more work is needed to verify a story (e.g. more automated tests, better verification steps). This is a worthwhile effort: one recent story turned out to need investment in a new piece of testing infrastructure to properly verify, which immediately caught a half dozen new (and meaningful) bugs.
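As a sketch of the kind of automated check a reviewer might ask for, here is a test that the import story actually loaded every row from the vendor's CSV. The names (`cheeses`, the CSV path) follow the cheese example above, and `sqlite3` stands in for the real database:

```python
# Hypothetical reviewer-requested check: the cheeses table and the
# vendor CSV must agree on row count. sqlite3 is a stand-in database.
import csv
import sqlite3


def count_csv_rows(path):
    """Data rows in the vendor CSV, excluding the header row."""
    with open(path, newline="") as f:
        return sum(1 for _ in csv.reader(f)) - 1


def check_cheese_import(conn, csv_path):
    """Fail loudly if the cheeses table and the source CSV disagree."""
    (n,) = conn.execute("select count(*) from cheeses").fetchone()
    expected = count_csv_rows(csv_path)
    assert n == expected, f"table has {n} rows, CSV has {expected}"
```

Dropped into a test suite, a check like this keeps verifying the import long after the story is accepted.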
You should assign someone to review your story and follow up with them if it languishes too long. It helps to have a third person oversee the process and gently prod the team if too many stories back up in the queue. Maybe one day we will write a chat bot for this third person's role, photoshopping the story owner's face onto something embarrassing if their stories aren't accepted in time.
Assigning reviewers who are less familiar with a component gives them a chance to learn something new and puts a fresh set of eyes on the problem. Difficulty looping new people into the review process is a warning sign that a component has become too siloed.
A good team member will make this easy on you by packaging the work up into a Pull Request and providing the link. If you don't have a tidy PR, GitHub's compare view is a useful fallback.
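When there is no PR at all, the same review can be done locally with plain git; a three-dot diff against the base branch matches what the compare view renders. A self-contained sketch in a throwaway repo, with a placeholder branch name:

```shell
# Throwaway-repo demo of reviewing a story branch without a PR.
# "add-cheeses" is a placeholder branch name; main is the base branch.
set -e
cd "$(mktemp -d)"
git init -q
git checkout -qb main
git config user.email "demo@example.com"
git config user.name "Demo"
echo "35 cheeses" > cheeses.txt
git add . && git commit -qm "base"
git checkout -qb add-cheeses
echo "59 cheeses" > cheeses.txt
git commit -qam "import new cheeses"

# Commits unique to the story branch, then the diff itself -- the same
# diff the compare view (github.com/<owner>/<repo>/compare/main...add-cheeses) shows:
git log --oneline main..add-cheeses
git diff main...add-cheeses
```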
Would it be difficult for someone else to modify the code implemented by the story?
The ideal implementation is always a balance between quality and velocity. Don't get hung up on perfecting the wrong things.
It might feel friendly to say "looks good to me" right away, but it is ultimately a disservice to everyone on the team. Quality has a place in even the simplest of prototyping projects.
Critiquing other people’s work has a significant emotional component on both sides. A bit of tact goes a long way in rolling smoothly (well, smoothly enough) over these bumps.
We've enjoyed this change in process over the last month. The model has provided more opportunities for collaboration and knowledge-sharing, which has helped our remote team greatly. We have uncovered and avoided more bugs than before, often much earlier. This paid off in a significant way recently, when we uncovered a challenging bug that would have blocked the onboarding of a huge new customer. With this process, the bug was caught early and fixed immediately, just in time for a critical early demo. Our previous process would have left us scrambling at the last minute, perhaps losing the customer.