
## Metadata
- Author:: [[Benn stancil|Benn Stancil]]
- Full Title:: All I Want Is to Know What's Different
- Category:: #🗞️Articles
- Document Tags:: [[Data quality|Data Quality]]
- URL:: https://benn.substack.com/p/all-i-want-is-to-know-whats-different
- Finished date:: [[2023-07-15]]
## Highlights
> Despite all this effort, we still struggle to deliver metrics that people can trust. As Tristan [said a few days ago](https://roundup.getdbt.com/p/the-cultural-context-of-data#:~:text=If%20self%2Dservice%20has%20so%20far%20failed%20to%20achieve%20everything%20we%20know%20it%20can%2C%20this%20is%20a%20huge%20part%20of%20the%20reason.%20We%E2%80%99ve%20cast%20down%20a%20system%20that%20relied%20on%20human%2Dto%2Dhuman%20credibility%20and%20haven%E2%80%99t%20provided%20a%20clear%20alternative%20way%20to%20assess%20credibility%20to%20the%20people%20who%20rely%20on%20data%20to%20do%20their%20jobs.), we’ve built a massive technical system to support that goal, and yet, it’s still a system that most people side-eye—and some people work around entirely. The institution, to borrow Tristan’s term, is not good enough. ([View Highlight](https://read.readwise.io/read/01h5czvft9s30rbtvb2h7ws4n7))
> our approach to earning that trust—Method 1\[tracing the whole lineage of a metric, checking each step\]—is fatally flawed. The road from raw data to reliable metric has a limitless variety of potholes; there can be no system, no matter how complete or comprehensive, that can tell us we’ve patched all of them. **Contracts, observability tools, data tests—these are mallets for playing whack-a-mole against an infinite number of moles.** ([View Highlight](https://read.readwise.io/read/01h5czxq6xd3gr9eftfjvqsnb4))
^77cbea
> More importantly, Method 1 isn’t how other people decide if they should trust the things that we produce. ([View Highlight](https://read.readwise.io/read/01h5czy8bz07agzam07pww323n))
> No, [everyone else uses Method 2](https://locallyoptimistic.slack.com/archives/CHF1E9NUS/p1686243955544949?thread_ts=1686243897.349009&cid=CHF1E9NUS): “Do I believe this number, given what I believed yesterday?” ([View Highlight](https://read.readwise.io/read/01h5czywf04epfd54dd0c85v33))
> Of course, you could be wrong both times; matching numbers aren’t necessarily *right* numbers. But as far as rough and easily accessible heuristics go, it’s pretty good. And the more iterations that match—if a metric’s historical charts have been consistent for eight quarterly reports in a row—the more trust that it inspires ([View Highlight](https://read.readwise.io/read/01h5d00m06p983bvwc8g2bem9d))
> The point here isn’t that these specific ideas are good; they might be terrible, or impossible to implement. The point is to reframe the problem around validating outputs instead of inputs ([View Highlight](https://read.readwise.io/read/01h5d0ah2zsb5w912tsaxjcygk))
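The article argues for Method 2 — checking whether a metric agrees with what it said before — without prescribing an implementation. A minimal sketch of such an output-consistency check might look like the following; the function name, data shape (date → value), and relative tolerance are my assumptions, not something from the article:

```python
def diverging_points(previous: dict, current: dict, tolerance: float = 0.01) -> list:
    """Compare two runs of the same metric over their overlapping dates.

    Returns the dates where the values differ by more than `tolerance`
    (as a relative fraction) -- i.e., where "what I believed yesterday"
    no longer matches what the pipeline says today.
    """
    diverging = []
    for date in sorted(set(previous) & set(current)):
        prev, curr = previous[date], current[date]
        denom = max(abs(prev), abs(curr), 1e-9)  # avoid dividing by zero
        if abs(curr - prev) / denom > tolerance:
            diverging.append(date)
    return diverging

# Hypothetical example: last quarter's report vs. this quarter's rerun.
yesterday = {"2023-Q1": 100.0, "2023-Q2": 120.0}
today = {"2023-Q1": 100.5, "2023-Q2": 150.0, "2023-Q3": 130.0}
print(diverging_points(yesterday, today))  # Q2 changed by ~20%
```

The more consecutive runs that pass a check like this, the more trust the metric earns — which mirrors the article's point that matching history, not verified lineage, is how most people assess credibility.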
> in data, there are only two possible constants: Consistent metrics or consistent questions. Until we have the former, all we’ll get is the latter. ([View Highlight](https://read.readwise.io/read/01h5d0cd29xdq6c1sej29pv1nm))