Highlights

  • The bigger the PR, the more likely it will suffer from the so-called “LGTM syndrome”.
  • As we increase the size of the PR, we see less engagement.
  • Smaller PRs take less time to write, which means we get feedback sooner.
  • All of these things positively contribute to all four key DORA metrics:
    • Deployment Frequency
    • Lead Time for Changes
    • Change Failure Rate
    • Time to Restore Service
  • As the scatter plot shows, as we decrease the size of a PR, teams doing async code reviews incur exponentially longer wait times per unit of PR size.
  • One of the reasons wait time per size dominates at the smaller end of the PR-size spectrum is that engagement per size is higher on smaller PRs.
  • As we incur more wait time per PR size, we incur more waste per PR, leading to more cumulative waste in our system of work.
  • So, with the async way of working, we’re forced to make a trade-off between losing quality (big PRs) and losing throughput (small PRs).
  • Any effort to get things out of the door sooner must involve reducing the transaction cost in the system; otherwise, we start batching inventory throughout the system of work.
  • Once our learning cadence decelerates, we become risk-averse and tend to go with more conservative solutions because the system is signalling that the cost of learning is high.
  • These wait times make it more likely for people to pull in more work, increasing Work in Process (WIP).
  • The conclusion of the study I did was that, in order not to lose throughput exponentially as we reduce the average size of a PR, the author and reviewer(s) need to react exponentially faster to each other’s requests. That eventually leads us to synchronous, continuous code reviews and co-creation patterns (a back-of-the-envelope model of this trade-off follows the list).
    • Note: An alternative could be to switch immediately to co-creation once people are already waiting for review.
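
To make the trade-off concrete, here is a back-of-the-envelope Python sketch. It is not the study’s model; the feature size, authoring rate, and wait times are invented for illustration. It models total delivery time for a fixed amount of work as authoring time plus a per-PR review wait, i.e., a transaction cost that repeats with every PR.

```python
# Back-of-the-envelope model: total delivery time for a fixed amount of work,
# split into PRs of a given size, where every PR pays a review "transaction
# cost" (wait time). All numbers below are invented for illustration.

FEATURE_LINES = 1_000          # total work to ship, in changed lines (assumed)
AUTHOR_SECONDS_PER_LINE = 60   # time to write and test one line (assumed)

def total_delivery_hours(pr_size_lines: int, wait_per_review_s: float) -> float:
    """Authoring time plus cumulative review wait across all PRs, in hours."""
    num_prs = FEATURE_LINES / pr_size_lines
    authoring_s = FEATURE_LINES * AUTHOR_SECONDS_PER_LINE
    waiting_s = num_prs * wait_per_review_s  # the transaction cost repeats per PR
    return (authoring_s + waiting_s) / 3600

for size in (500, 200, 100, 50, 20):
    async_h = total_delivery_hours(size, wait_per_review_s=4 * 3600)  # ~4 h async wait
    sync_h = total_delivery_hours(size, wait_per_review_s=5 * 60)     # ~5 min sync reaction
    print(f"PR size {size:>3} lines: async {async_h:6.1f} h, sync {sync_h:6.1f} h")
```

With the assumed fixed four-hour async wait, shrinking PRs from 500 to 20 lines grows the cumulative wait from 8 to 200 hours, while a near-synchronous five-minute reaction keeps total delivery time almost flat; unless reaction time shrinks roughly in step with PR size, throughput collapses, matching the conclusion above.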