

Highlights

Despite all this effort, we still struggle to deliver metrics that people can trust. As Tristan said a few days ago, we’ve built a massive technical system to support that goal, and yet, it’s still a system that most people side-eye—and some people work around entirely. The institution, to borrow Tristan’s term, is not good enough.

our approach to earning that trust—Method 1 (tracing the whole lineage of a metric, checking each step)—is fatally flawed. The road from raw data to reliable metric has a limitless variety of potholes; there can be no system, no matter how complete or comprehensive, that can tell us we’ve patched all of them. Contracts, observability tools, data tests—these are mallets for playing whack-a-mole against an infinite number of moles.
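
A minimal sketch of what that Method 1 tooling looks like in practice, in Python with pandas and hypothetical table and column names (raw_orders, order_id, amount): each assertion patches exactly one known pothole and says nothing about the ones nobody has thought to check for.

```python
import pandas as pd

def run_input_checks(raw_orders: pd.DataFrame) -> list[str]:
    """Method 1 in miniature: validate the inputs, one known failure mode at a time."""
    failures = []
    if raw_orders["order_id"].isna().any():
        failures.append("order_id contains nulls")
    if raw_orders["order_id"].duplicated().any():
        failures.append("order_id contains duplicates")
    if (raw_orders["amount"] < 0).any():
        failures.append("amount contains negative values")
    # ...and so on: one test per mole we have already seen pop up.
    return failures
```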

More importantly, Method 1 isn’t how other people decide if they should trust the things that we produce.

No, everyone else uses Method 2: “Do I believe this number, given what I believed yesterday?”

Of course, you could be wrong both times; matching numbers aren’t necessarily right numbers. But as far as rough and easily accessible heuristics go, it’s pretty good. And the more iterations that match—if a metric’s historical charts have been consistent for eight quarterly reports in a row—the more trust it inspires.
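
A rough sketch of that heuristic as an output check, assuming hypothetical pandas Series holding a metric by reporting period: trust the new report only if the periods we have already published come out the same when recomputed today.

```python
import pandas as pd

def history_is_consistent(
    previously_reported: pd.Series,  # metric by period, as published in the last report
    recomputed_today: pd.Series,     # the same periods, recomputed from current data
    tolerance: float = 0.01,         # allowable relative drift, e.g. 1%
) -> bool:
    """Method 2 in miniature: trust the new number if the old numbers haven't moved."""
    overlap = previously_reported.index.intersection(recomputed_today.index)
    drift = (
        (recomputed_today[overlap] - previously_reported[overlap])
        .abs()
        .div(previously_reported[overlap].abs().clip(lower=1e-9))
    )
    return bool((drift <= tolerance).all())
```

The more consecutive reports that pass a check like this, the stronger the case for trusting the series—the same intuition as the eight-quarters-in-a-row point above.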

The point here isn’t that these specific ideas are good; they might be terrible, or impossible to implement. The point is to reframe the problem around validating outputs instead of inputs.

in data, there are only two possible constants: consistent metrics or consistent questions. Until we have the former, all we’ll get is the latter.