LLM-based coding agents like Claude Code & Codex are all the rage right now. Rightfully so, because these tools are genuinely getting good. They’re at the point where people, both programmers and less technical users, can use them to create features or even entire projects with decent results. I have more feelings about this than I can cover in one blog post, but one thing is becoming clear to me: I’ll likely never love a tool like Claude Code, even if I do use it, because I value the very task it automates.
Whenever I use Claude Code, I notice that I stop having fun making software. That’s interesting, because many people report the opposite: that coding agents make computers fun again! I believe this comes down to a difference in values.
Like other technologies, AI coding tools help us automate tasks: specifically, the ones we don’t value. I use my dishwasher because I don’t value the process of hand-washing dishes. I only value the end result: clean dishes. Fabric is created with mechanical looms & knitting machines, because the economic value is the resulting fabric, not the process of creating it. Yet I still crochet & knit some items by hand, because I do enjoy the process.
People who love using AI to create software are loving it because they don’t value the act of creating & understanding the software. At least, they value it far less than the end result. I think that’s quite a normal view of software & computing: it is a means to an end. But to me, creating & understanding software is a worthwhile pursuit on its own! I enjoy the process of representing a problem in code, and I enjoy learning & building a mental model of systems so that I can better understand and debug them. The resulting software product may have value, but it’s not the only value, or even the primary value to me. Put simply, I’m not a “product-focused” developer.
Now this isn’t to say that I value writing all code. Plenty of code is boring! I won’t benefit much from writing hundreds of lines of boilerplate code. And it’s not to say that I don’t value the product, or care deeply about designing software that solves problems! But the reason I got into software, and the reason I continue to do it, is that I just really like computers and want to learn more about them. The current zeitgeist of AI coding is to use the AI to do as much as possible as quickly as possible, and for me that just throws out the baby with the bathwater.
For the product-focused people out there, I’m sure a tool like Claude Code is a godsend. Finally, you can tell a system what you want and get a result, give feedback, and iterate. You can still make some technical decisions, but in a sense you become a “manager” of AI systems. You speak in terms of results and requirements, and you let your underlings handle the details. If the only value being created is the end result, then that totally makes sense. But for me, it’s just not the case.
As a result of this, I’m starting to be more conscious about my goals when it comes to software. When I crochet or knit something, my goal is not just to get a blanket, stuffed animal, scarf, etc. I could quickly buy those things cheaply. The goal is to create something with my hands and my time. The value of the object, for myself or especially as a gift, is in the time and care I put in while making it. I’m starting to apply that mentality to my goals about software.
I used to say that I wanted to “make X” for some value of X. But usually I want to build that X so that I can learn something. Maybe I want to understand the problem better. Or maybe I want to start a project in a new programming language so I can learn it. I’m finding it more helpful to acknowledge those things as part of the goal, and discern the extent to which I care about the process versus the result.
This is helpful because when I’m honest about my goal, I can choose the right process to actually get what I want. There are definitely tasks where I just value the result, not the process. I can learn & use these AI tools in those areas, leaving myself more time for the parts of the process I do enjoy. I’ve always possessed the laziness said to be a virtue of programmers. It’s an easy sell if I can automate something that I genuinely don’t want to be doing! But if I actually want to learn something, I have to do it the hard way.
That’s great for me, but what about my career? While I may value the process of coding and the learning I get from it, the real world tends to be cold and uncaring. The economy may not value my learning (at least in the short-term), and employers are turning more and more to AI to “increase productivity.” It would be foolish for me to believe that AI won’t change our industry. That said, I think there is a lot of nuance to whether professional software engineers are truly in danger, and I think that this question of values & goals directly ties into that nuance. The value of software development, even at big companies, is not the output alone.
My job is not actually to write code. I am employed to fix customer bugs in Linux. I read a lot of code. I write code to help diagnose, and ultimately fix, those bugs. I write code to reproduce and get more familiar with some bugs. I work on debuggers and related tooling, in order to create more powerful tools to help me in debugging. I come up with ways to debug issues while respecting the (frequently onerous) constraints of customers.
So when I write code, the trick isn’t usually knowing how to write the code, but knowing what the feature should be.1 Even when the coding task itself is straightforward, I still find I get value out of it. I’m always building my knowledge and experience so I can do the job better. While there may be plenty of software where it doesn’t matter that an LLM wrote it, I do think there will continue to be huge swaths of software where it does matter for a human to write it. There is so much value (knowledge & expertise) generated by that process, which is incredibly important for systems that need reliability and debuggability. Ultimately, I don’t want my computer’s OS to be vibe-coded, nor my bank’s systems, nor my car’s software.
I know better than to prognosticate, and I definitely don’t think anybody should trust my opinion on whether my own job will be eliminated. But I’d guess that it depends on the extent to which my goals & values align with the work that I do. I’ve been lucky enough to find a job that aligns with my values: it’s about technical expertise, understanding & debugging systems. So long as that alignment holds, I’m cautiously optimistic.