My FeeLLMs
April 14, 2025 • ☕️ 4 min read
I’ve remained largely quiet about the surge of LLMs in software engineering, apps, and the broader world. I’ve held back because I suspect my quandaries, qualms, and quips are largely repetitive. For whatever reason, I’ve decided now is the time to write about it.
This post is largely a ping-pong between my anxieties about LLMs and my conflicting, anxious optimism.
I’m worried about the work that was stolen to make LLMs go. Unfortunately, it seems unlikely we’ll be returning Pandora to her box. I don’t think that means we should give up on improving how we understand and license the works that wind up in LLMs. As much as possible, I prefer to avoid supporting folks who read the entire internet regardless of licensing, then complain when they believe someone… stole “their” IP. While LLM companies made magnificent leaps forward, I’m not convinced they had to ~steal from people to make such progress.
I’m excited by the forced progress on things like energy production, though I don’t feel much hope about our ability to either burn less or convert the majority of our burning to renewables. Something about a guy named Jevons. So far the most I can say is that it’s a discomfort, like that tag in my shirt that’s scratching my skin and that I can’t find. I suspect the longer-term effects are greater than any of us can conceive yet. In reality, some of the energy side-effects are likely already here but don’t affect my day-to-day grievances.
I listen to the Center for Humane Technology’s podcast, and they’ve been circling an insidious concern around LLMs that has stuck with me: the unknowns around what this will do to us and our brains. While I’m concerned about the potential for lost jobs and a world unprepared to deal with it, I’m perhaps more concerned about an accidental reliance on LLMs for things like critical thinking and research. I try my best to only hit [tab] in Cursor for things I believe to be skippable: simple tasks that aren’t my primary focus and could just be done. But much like algorithmic feeds, it’s hard to fully grasp what this will do to society, and how we exist in a world where many people are already dubious of our ability to think critically. I offer no answers, only anxieties. It could be just a mild discomfort I have now, or it could slowly destroy us inside. It’s also the kind of thing that’s hard to fend off as any single individual.
Frustratingly, I find it immensely useful to let it do some of the things ([tab] in Cursor) that increasingly feel below what I could be focusing on. I get a little boost when I feel like it worked, or got me 95% of the way there without making me work as hard as I would have without it. It remains to be seen whether that feeling is the emotional response to getting the robots to do something for me, or whether it actually allowed me to do something faster and put more focus elsewhere. I suspect it’s a little of both.
For whatever reason, this also feels like a moment to pontificate about what might be missing from LLM tools for software engineers. I see roughly three very broad angles where LLMs can help with software engineering practices: new projects, existing projects, and operating a project.
A lot of the tools out there are good and seem focused on “new projects” (Lovable, Replit, Bolt.new, v0 [Vercel]), but I’ve had issues, mostly with ongoing conversations, token limits, and winding up in a back-and-forth with the LLM to get something right.
There are good tools out there for “existing code” (Cursor, Copilot). As an experienced engineer, I find these the most useful. I have a project, I understand it, and I want to point the LLM around to do things, or just get a snappy autocomplete on work I already understand well.
The third angle, “I have code, I need something to make it run somewhere, stay running, etc.,” is one I haven’t seen much of yet. There are probably a lot of good reasons for that. My guess is that it’s harder to do well enough that we’d be willing to let something accidentally run up our cloud bill or operate with less human oversight. The first two kinds of tools generally have a place for a human to interact. I’m not sure this third one does, or its interaction moments are very different from the first two.
I think something roughly groundbreaking could be finding a way to get that third thing running, then tying all three together into one “build, iterate, and run software” tool. Then again, I’d like to have a job in 5 years, so forget what I said; definitely don’t use LLMs for software.
In all seriousness though, I’ve found some use for LLMs in my day job and side projects. And while I try to keep up with an annoyingly fast-paced profession, I’ve found I do have to keep an eye on the software they’re creating. I believe they simply aren’t ready for unattended software building yet, and I suspect they won’t be until we can find more annoying ways to put the right context in front of them.
P.S. This post is a bit stream-of-consciousness and fails to go as deep as I’d like on anything in particular. I hope I’ll find some time to bring some more depth to each of these points as I get back into the habit of writing.
What are you most concerned about? Excited about?