- Tags:: #✍️OwnPosts, [[My engineering management principles, values, and practices]]
- Date:: [[2021-10-21]]
![[Pasted image 20211231114214.png]]
> If I have seen further it is by standing on the shoulders of giants
Isaac Newton (1675)
There are two topics that come and go in my list of obsessions that, until very recently, I didn't realize were related: company meetings and tech interviews. I hate how they are usually held because I cannot prepare beforehand: the topic is usually unknown. For a long time, I thought I hated them because of my controlling character, and that it was something I had to overcome. Then I learned a new word in Spanish and it suddenly clicked: most people don't think you need to prepare for a meeting or a tech interview because "you are already prepared", and that belief stems from the same huge misconception about knowledge work.
# Your average meeting
Picture this fairly common situation (at almost all companies I've worked for): a colleague (engineer, manager, PM, it doesn't really matter) is dealing with a somewhat complex problem and wants feedback from a group of people on how to approach it. **Consider yourself fortunate if you even get the objective of the meeting, let alone an agenda.**
You go in there and your colleague starts telling you about the problem. If she suggested a meeting, the problem is probably not simple: she tells you about all the different entities involved, with details added as they come to mind... The people at the meeting start struggling with all the moving parts, jumping between the topics it spans, interleaving on-the-spot solutions and new problems, going back and forth from a general view to specific (even irrelevant) points, talking in circles...
At some point, after some (usually long) time, the meeting is over. And where is the [shared understanding](https://medium.com/nick-tune-tech-strategy-blog/domain-driven-architecture-diagrams-139a75acb578) of the meeting? Again, consider yourself fortunate if there is someone systematic in the room who tried to put it into some sort of writing... Most of the time you will try to reach a shared understanding over a very messy whiteboard, and details will probably get lost or take quite some time (read: more meetings) to figure out.
There are several problems with this type of meeting, and they are an everyday tragedy with repercussions that go well beyond hampered productivity.
# Refusing to stand on the shoulders of giants
The main problem I want to focus on in this article is the failure to research previous work before starting on something complex. It is in this context that the Spanish word [[Adanismo]] stood out to me:
> *Adanismo: Hábito de comenzar una actividad cualquiera como si nadie la hubiera ejercitado anteriormente.* / "Adamism" (from Adam): the habit of starting any activity as if nobody had ever done it before.
^925cdc
It's not like the concept is new, and there are more popular expressions for it, particularly in tech: "reinventing the wheel", or the "[not invented here](https://en.wikipedia.org/wiki/Not_invented_here)" syndrome. But the Spanish term captures beautifully the arrogance of that attitude. It is very likely that your problem is not new in the world. What did other people (usually smarter, and with more hours invested in the problem) do? Which approaches worked and which didn't, and in which contexts? I'm always surprised by this, because to me it feels like common sense to review past work before having a crack at something. It is so helpful that, for example, [to Patrick Collison, Stripe's CEO, it feels like cheating](https://youtu.be/qrDZhAxpKrQ?t=676):
> _Interviewer_: but why do you, you spend more time reading history of everything, really, than almost anybody I follow on Twitter. Why do you do that?
>
> _Patrick:_ It's just, it's a way to cheat. Everyone else ignores all the good ideas from history, and so you can just be much smarter by just, you know, you could just try to think all these incredibly original thoughts by sort of sitting down and staring at the wall for days on end, or you can sort of cheat by just, you know, I mean they are, in fact, written down in books that you can just read.
And yet, you still find this behavior again and again. For example, not so long ago in Hacker News land, in [Reflections on 10,000 Hours of Programming](https://matt-rickard.com/reflections-on-10-000-hours-of-programming/?utm_source=Pointer&utm_campaign=4fccd833ab-ISSUE_243&utm_medium=email&utm_term=0_6ba2b83261-4fccd833ab-592192513), we could read:
> In many cases, what you're working on doesn't have an answer on the internet. That usually means the problem is hard or important, or both (...) Build your own tools for repeated workflows. There is nothing faster than using a tool you made yourself.
After almost 10 years of professional practice, I cannot relate to these statements at all. Quite the opposite: I've almost made a career out of guiding people away from pouring tons of effort into things that are already solved (and feeling mediocre along the way, but [that's another topic](https://en.wikipedia.org/wiki/Bullshit_Jobs)).
The consequences are terrible, and not only in the short term and in the context of your project (taking much more time and maybe never reaching the right solution). It is also very [limiting for all our careers](https://increment.com/software-architecture/architecture-for-generations/). By simply trying to learn on the job, you will very likely plateau and become [an expert beginner](https://daedtech.com/how-developers-stop-learning-rise-of-the-expert-beginner/?utm_source=pocket_mylist), without you (or probably even your organization) knowing that you could do better. It is even more dramatic for the general advancement of our fields, as Patrick Collison further explains in that video, and as Jonathan Blow argues at greater length in his amazing "[Preventing the collapse of civilization](https://www.youtube.com/watch?v=pW-SOdj4Kkk)" talk.
# Why don't people "cheat"? Because...
I wish I knew the answer to this, and again, when I look to smarter people, they seem to be clueless too:
![[Pasted image 20211231114315.png]]
[https://twitter.com/paulg/status/1433709962497249284](https://twitter.com/paulg/status/1433709962497249284?s=)
[https://twitter.com/Scholars_Stage/status/1447744949747953669](https://twitter.com/Scholars_Stage/status/1447744949747953669?s=20)
And it seems to happen in all kinds of places.
![[Pasted image 20211231114331.png]]
[https://twitter.com/andy_matuschak/status/1447409175596699652](https://twitter.com/andy_matuschak/status/1447409175596699652?s=20)
There is probably a myriad of reasons, starting with the fact that most people don't even think of this as a problem; to them it's just the natural way of doing things. As [[Alexis Ohanian]], founder of Reddit, says in his book "[Without Their Permission](https://withouttheirpermission.com/)":
> [[🦜 We don't even realize something is broken until someone else shows us a better way]]
## ...ain't nobody got time for that
Reading previous work should be part of thinking about a problem. It seems, however, that most people don't have time to think. In [[📖 Peopleware 3rd Edition]] (1999!), the authors, in a chapter named "Make a cheeseburger, sell a cheeseburger", said:
> The steady-state cheeseburger mentality barely even pays lip service to the idea of thinking on the job. Its every inclination is to push the effort into 100-percent do-mode. **If an excuse is needed for the lack of think-time, the excuse is always time pressure (...)** It's when the truly Herculean effort is called for that we have to learn to do work less of the time and think about the work more. (p. 11).
There is an astonishing fear of falling into [[paralysis by analysis]] ([Wiki](https://en.wikipedia.org/wiki/Analysis_paralysis)). And yet this badly understood bias for action has been debunked many times, for example in [this story](https://review.firstround.com/how-to-build-an-invention-machine-6-lessons-that-powered-amazons-success#lesson-1-slow-down-to-innovate) from early Amazon executives Colin Bryar and Bill Carr:
> Take AWS. It reached $10 billion in revenue in less than four years. But what's remarkable is that they didn't get there by forming a team, writing a lot of code, and then testing and iterating. In fact, it took more than 18 months before the engineers actually started to write code. Instead, they spent that time thinking deeply about the customers they were trying to serve and forming a clear vision for what AWS should be.
## ... experience gives them the most bang for their buck
Another fear of mine is that this all stems from a misunderstanding of incremental design as described in [[📖 Extreme Programming Explained]]:
> Because design has leverage and because design ideas improve with experience, patience is one of the most valuable skills a software designer can possess. There is an art to designing just enough to get feedback, and then using that feedback to improve the design enough to get the next round of feedback.
Kent Beck draws a broad distinction between the resources you can use when solving a problem: instinct, thought, and experience. He then presents different abstract scenarios and how much you can gain from each of the three in them. Note that all the diagrams he introduces have one thing in common: experience seems to be what adds the most value. Which category would a literature review fall into? If we believe it belongs to the "thought" category, then let's take a look at scenarios such as this one:
![[Pasted image 20211231114350.png]]
> If pure thought creates most of the value without feedback (Figure 21), designing sooner makes more sense.
So "experience" is probably the most informative choice... if nobody has tried something similar before. But otherwise, in the process of trial and error that "experience" is, what is the point of unknowingly trying something that was tried before and failed?
And in any case... shouldn't "reading about what others did" count as "experience" (somebody else's, that is)?
As [Justin Etheredge points out in his learnings of 20 years:](https://www.simplethread.com/20-things-ive-learned-in-my-20-years-as-a-software-engineer/)
> If you don’t have a good grasp of the universe of what’s possible, you can’t design a good system
And that is a pity, because some feedback loops in the real world can take a very long time.
## ... because the fintech app they are building is unique
Maybe the problem is that we don't recognize previous efforts as similar to our own? The earlier snippet from Extreme Programming Explained starts with what I think is a very questionable idea:
> Part of what makes incremental design valuable in software is that **we are often writing applications for the first time.** Even if this is the umpteenth variation on a theme, there is always a better way to design the software.
I do believe there is always a better way to design a particular piece of software, just as there is always a better way to pour yourself a coffee. But I wonder what the value is of improving the design of the umpteenth variation on a theme, especially if it's in an area that is not the business you are in, and whether that qualifies as "writing an application for the first time". If you work in selling groceries online, like me, what would be the value in, for example, building your own container orchestration platform instead of using Kubernetes? Would that make you sell more or better groceries? And even if you were able to justify some sort of competitive advantage in doing that... wouldn't you start from the state of the art in container orchestration (and thus, again, stand on the shoulders of giants)? And even if the problem you are solving is super obscure... what are the chances of it being completely new? Of it not being related to any other problem in the world that has already been attempted?
[[Mihaly Csikszentmihalyi]], god of [[📖 Creativity. The Psychology of Discovery and Invention]], clarifies:
![[📖 Creativity. The Psychology of Discovery and Invention#^26f49d]]
In fact, it seems that new ideas are starting to require [depth in more than one field simultaneously](https://techcrunch.com/2020/07/19/the-dual-phd-problem-of-todays-startups/?guccounter=1).
## ... because they don't need to
Another possible reason for refusing to stand on the shoulders of giants is overconfidence: the belief that one is perfectly capable of reaching proper solutions completely on one's own (or with just one's team).
There is nothing wrong with self-confidence, of course. And seasoned professionals surely can reach good solutions on their own. Unfortunately, I've observed no correlation between this kind of self-confidence and experience (with youngsters surprisingly talking about the power of intuition) or problem difficulty (with people going blindly into very hard problems with a long history of previous work).
I do believe in the power of intuition: there are multiple accounts of it, such as the [green lumber fallacy](https://fs.blog/2016/11/green-lumber-fallacy/) in Antifragile, and even scientific explanations for it, such as the interplay between two modes of thought, one more instinctive and the other more rational, described in [Thinking, Fast and Slow](https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow).
However, the key point is that intuition is the fruit of accumulated experience (perhaps partly transferred from other domains)... so young people shouldn't trust their guts too much. Also, there are lots of counter-intuitive mechanisms in the world. Donella H. Meadows fills her [Thinking in Systems](https://www.indiebound.org/book/9781603580557) with many such examples:
> **Counterintuitive**—that’s Forrester’s word to describe complex systems. Leverage points frequently are not intuitive. Or if they are, we too often use them backward, systematically worsening whatever problems we are trying to solve. **I have come up with no quick or easy formulas for finding leverage points in complex and dynamic systems. Give me a few months or years and I’ll figure it out.** And I know from bitter experience that, because they are so counterintuitive, when I do discover a system’s leverage points, hardly anybody will believe me. Very frustrating—especially for those of us who yearn not just to understand complex systems, but to make the world work better.
What is the downside of peeking at what others did before blindly following your intuition?
![[Pasted image 20211231114414.png]]
[https://twitter.com/iamdevloper/status/1060067235316809729](https://twitter.com/iamdevloper/status/1060067235316809729?s=20)
Indeed, in the newest Kahneman book, [Noise: A Flaw in Human Judgment](https://www.indiebound.org/book/9780316451406) (p. 234), we find that people who agree with statements such as "intuition is the best guide in making decisions" score low on "actively open-minded thinking".
> To be actively open-minded is to actively search for information that contradicts your preexisting hypotheses.
[Humility](http://humbletoolsmith.com/2020/08/10/the-importance-of-humility-in-software-development/) is important in intellectual affairs. The sad part is that there are incentives in the workplace to give a different impression. As the Spanish writer Francisco Umbral wrote in "Mortal y Rosa": "Mueve más una mentira firme que una verdad pensativa" / _A firm lie moves more than a thoughtful truth_:
> The personality of people with excellent judgment may not fit the generally accepted stereotype of a decisive leader. People often tend to trust and like leaders who are firm and clear and who seem to know, immediately and deep in their bones, what is right. Such leaders inspire confidence. But the evidence suggests that if the goal is to reduce error, it is better for leaders (and others) to remain open to counterarguments and to know that they might be wrong. If they end up being decisive, it is at the end of a process, not at the start.
Note that this is by no means a critique of reasoning [from first principles](https://fs.blog/2018/04/first-principles/): that is not at odds with standing on the shoulders of giants. In fact, proper reasoning from first principles implies a deep understanding of previous ideas:
> “Not everyone that’s a coach is really a coach. Some of them are just play stealers.” (...) While both the coach and the play stealer start from something that already exists, they generally have different results. Both the coach and the play stealer call successful plays and unsuccessful plays. Only the coach, however, can determine why a play was successful or unsuccessful and figure out how to adjust it. The coach, unlike the play stealer, understands what the play was designed to accomplish and where it went wrong, so he can easily course-correct. The play stealer has no idea what’s going on. He doesn’t understand the difference between something that didn’t work and something that played into the other team’s strengths.
## ... because they don't want to
A good chunk of the people I've known who refused to stand on the shoulders of giants did so simply because it was fun to figure things out for themselves. In [[The thinker - doer model]], everybody wants to be the thinker. Or as Justin Etheredge put it:
> ... because we love complexity. [Coders] are going to err on the side of what they are good at. It is just human nature. Most software engineers are always going to err on the side of writing code, especially when a non-technical solution isn’t obvious.
Our motivation as workers is no small issue, not only for us, given its impact on our happiness, but [also for companies](https://www.forbes.com/sites/karlynborysenko/2019/05/02/how-much-are-your-disengaged-employees-costing-you/?sh=3367f2b33437). One could argue that relying too much on the work of others could hurt mastery and autonomy, two of the three pillars of motivation according to [Daniel H. Pink](https://www.danpink.com/books/drive/). There are many flaws in this line of thinking. Reading previous work doesn't mean copying mindlessly; on the contrary, it gives you a better path to understanding, and thus to mastery. Autonomy doesn't take a hit either: there are probably many options in the previous literature to choose from, and you will surely need to make some tweaks. And finally, if the solution is already out there, great: you can save your energy for harder, unsolved challenges.
Another way to look at this is through dark incentives again: [Promotion Driven Development](https://twitter.com/GergelyOrosz/status/1442162670753431559?s=20). As Gergely Orosz puts it:
> Though at times building custom tooling is justified by unique needs, there is little to no incentive for any engineer to go with an off-the-shelf vendor solution that solves the problem. This approach would be labelled as trivial, and would not meet the complexity expectations the software engineering and architecture/design competencies demand at senior and above levels. This is one of the reasons why all of Big Tech will have built custom solutions for everything (...) I'm not exaggerating: for example, Uber built and operated its own chat system called uChat for years...
## ... because they don't trust others
> ...a group of engineers whose membership has been relatively stable for several years may begin to believe that it possesses a monopoly on knowledge in its area of specialization. Such a group, therefore, does not consider very seriously the possibility that outsiders might produce important new ideas or information relevant to the group (...) to the likely detriment of its performance (...) This has come to be known (...) as the “Not Invented Here” or “NIH” syndrome
[Investigating the Not Invented Here (NIH) syndrome: A look at the performance, tenure, and communication patterns of 50 R & D Project Groups (1982)](https://onlinelibrary.wiley.com/doi/10.1111/j.1467-9310.1982.tb00478.x)
Such an old problem: the canonical definition comes from this 1982 paper, but there are references to the problem going way back, like this amazing "play" in IEEE Transactions on Engineering Management (1971): ["Not invented here: A psychoindustrial farce in one act"](https://ieeexplore.ieee.org/document/6448354/). Note that "others" here can even be members of other teams in your very same organization. We may think this is closely related to overconfidence, just at a group level in this case, but it seems more connected to our inherent [[uncertainty]] avoidance:
> Underlying these kinds of change is the basic idea that over time individuals try to organize their work environments in a manner that reduces the amount of stress and [[uncertainty]] they must face
The good news is that this effect is related to the average length of time group members have worked together, and it is easy to counteract:
> it would seem that the energizing and destabilizing function of new members can prevent a project group from developing interactions and behaviours characteristic of the NIH syndrome
## Hey, at least I read Hacker News
Another classic misconception, for which [Why I don’t read books | Doug McCune](http://dougmccune.com/blog/2007/03/23/why-i-dont-read-books/) could be a good summary, is that the most effective way to learn from the past experience of others is by hacking your way through blog posts and Stack Overflow. The importance of practice is undeniable, of course, but is limiting yourself to this the best way to learn? Not if you want a deep and coherent view. From [[📖 The Pragmatic Programmer, 20th Anniversary Edition]]: ![[📖 The Pragmatic Programmer, 20th Anniversary Edition#^3d6272]]
Or as Martin Kleppmann, author of the instant classic [Designing Data-Intensive Applications](https://www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/), writes in [Writing a book: is it worth it? — Martin Kleppmann’s blog](https://martin.kleppmann.com/2020/09/29/is-book-writing-worth-it.html):
> Of course, there are also plenty of free resources online: Wikipedia, blog posts, videos, Stack Overflow, API documentation, research papers, and so on. These are good as reference material for answering a concrete question that you have (such as “what are the parameters of the function foo?”), but **they are piecemeal fragments that are difficult to assemble into a coherent education.**
Not to mention that it is hard to get far on that path without critical thinking: [most tech content is bullshit](https://www.aleksandra.codes/tech-content-consumer) (and so are many [[✍️ Mr. Obviedades|tweets, podcasts, and talks]]; _link in Spanish_).
But people don't only refuse to stand on the shoulders of giants. In my next post, I'll talk about how and why they also tend to "refuse to stand on their own feet", and how all of this is connected to our broken hiring processes and the never-ending search for talent in organizations with poor retention mechanisms (though you can probably see where I am going).