Metadata
- Author: seangoedecke.com
- Full Title:: To Avoid Being Replaced by LLMs, Do What They Can’t
- Category:: 🗞️Articles
- Document Tags:: Foundational models
- URL:: https://www.seangoedecke.com/what-llms-cant-do
- Read date:: 2025-02-27
Highlights
• Problems are ill-defined and poorly scoped
• Solutions are difficult to verify
• The total volume of code involved is massive

In my view, this is describing legacy code: feature work in large, established codebases. Translating requirements into the needed change in these codebases is hard. It's even harder to be confident that you haven't introduced another bug, given the combinatorial explosion of feature interactions. And the amount of code you have to read and write is massive: millions or tens of millions of lines.
LLMs will eventually be able to do this kind of work, but it's going to be a while, for a few reasons. First, it requires a better solution to the large-context problem: either significantly better RAG, or a fast and effective way to have a multi-million-token context window. If we're very lucky, this problem will turn out to be impossible. Second, it's hard to write a really good eval for legacy-code adjustments; current software engineering evals are relatively small in scope. Third, the relevant data is spread across a lot of very private silos. Facebook or Google can train on their own internal PRs or change requests, but no single AI lab is likely to have access to multiple companies' codebases.