Since we are 48 months into AI being 6 months away from taking our jobs, it’s pretty safe to say that it’s not going to happen anytime soon.
I have been using AI to do some work. I have found it useful for things like:
- Creating tools that I don’t really care to maintain; I just need them to help me with debugging or doing one-off things
- Adding new features based on existing patterns. For example, it’s great at taking something I already have and creating something new based on it.
- Doing simple refactors, e.g. I want to change the shape of an object (see the sketch after this list). Yeah, I can probably do it myself, but I can also just spin up an AI agent in the background to do it for me while I do something else.
- It’s great at being just tab autocomplete. It picks up on patterns well, which is what I need autocomplete to do 90% of the time anyway.
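To make the refactor point concrete, here’s a minimal sketch of the kind of shape change I mean; every name in it is made up for illustration:

```ts
// Hypothetical before/after of an object-shape refactor.

type OldConfig = {
  host: string;
  port: number;
  retries: number;
};

// The refactor: group connection details under a nested object.
type NewConfig = {
  connection: { host: string; port: number };
  retries: number;
};

function connect(host: string, port: number): void {
  console.log(`connecting to ${host}:${port}`);
}

// The boring part the agent handles: updating every call site.
// before: connect(cfg.host, cfg.port);
// after:
function start(cfg: NewConfig): void {
  connect(cfg.connection.host, cfg.connection.port);
}
```

It’s mechanical, it’s spread across the whole codebase, and I don’t need to be the one doing it.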
Other than that, it’s lackluster.
Definition of correct
LLMs can’t tell what is correct unless you tell them exactly what your definition of correct is. This leads to many cases where it’s faster to write the code yourself than to craft a prompt that would lead an LLM to understand what exactly you want.
They can explore, but you still have to make them understand your specific needs. If your API has to return a specific status code because the client expects it, that is something you have to tell them. If you need to return 200 OK but set the “error” field to “true”, because your legacy system doesn’t work otherwise, you have to spell that out (a sketch of that quirk follows below). And that is the issue. These things are NEVER documented; they’re something you learn only by experience.
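Here’s a minimal sketch of what that kind of undocumented quirk looks like in code, assuming a hypothetical Express-style handler (the route, findOrder, and the response shape are all made up):

```ts
import express from "express";

const app = express();

// Legacy quirk: the old client treats any non-200 response as a crash,
// so errors have to be signaled in the body instead of the status code.
app.get("/orders/:id", (req, res) => {
  const order = findOrder(req.params.id);
  if (!order) {
    // An LLM (or a new hire) would naturally reach for 404 here.
    // Nothing in the code or the docs says why that would break production.
    res.status(200).json({ error: true, data: null });
    return;
  }
  res.status(200).json({ error: false, data: order });
});

// Stand-in for whatever the real lookup is.
function findOrder(id: string): { id: string } | null {
  return id === "42" ? { id } : null;
}

app.listen(3000);
```

Nothing about that handler is hard to write; the hard part is knowing that the obvious 404 would break someone.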
And that is the issue: LLMs can’t gather experience. You can define AGENTS.md or whatever files, but then you run into context rot and context management issues.
And that is why LLMs feel nice and fun on new or small projects, but end up feeling lackluster in the real, messy codebases that we all work on.
Non-deterministic
Another issue is that if your prompt isn’t descriptive enough, the model ends up doing the wrong thing. Then you reroll the prompt, or ask it to modify its work. And you just keep repeating this. You try another edit; it’s still not quite right. And again. After like the 5th try you realize you could have just written the thing yourself by now.
It feels like trying to lead a horse to water by throwing a carrot randomly in the water’s direction, instead of just taking it by the reins.
Flow
A few years ago everyone was talking about the “flow state”: how to enter it, how to maintain it, how to recapture it after an interruption. With AI agents it’s impossible. You write your prompt, and then you have to wait at least a few seconds for feedback. There is no flow there. If the change is bigger, you end up distracted.
And the biggest issue with the lack of flow is that, when you are in flow, everything happens in steps: you fix one small issue after another, and the things in your brain just fall into place. With LLMs you feed a small thing into your prompt, and then it hits you with a HUGE BLOCK of changes that you have to review.
It’s like enjoying small waves hitting your feet while you walk on the beach vs being hit by a 2 meter wave at any moment on your way to work.
This might be one of the main reasons why the CEO types are loving this. They can’t have a flow state, since everyone always needs something, there are meetings to be had or whatever. But with AI you can fire off your prompt, do your CEO thing, and then check back in whenever.
This is the exact opposite of what your average developer tries to achieve.
In the end
AI is not useless. Learn the tools, use them. They might actually make you better. You will learn how to articulate your needs better, how to read and review code better.
Even become the CEO type and run AI agents in the background to do tasks that you want done but don’t have the time to do yourself.
But also learn hard skills. Learn how to architect systems, how to design APIs. Learn how your OS works, how your DB works, how your language works. Learn new things, even use AI to help you do it. Code might become cheaper, and that means hard skills will become even more valuable.
Don’t fall into the fearmongering trap that we are all about to be out of work. If that were true, Anthropic would be the biggest company in the world right now; they wouldn’t need more developers, and they wouldn’t compare creating a TUI app to creating a small game engine. It’s a real-world example of how AI can’t replace real developers and their expertise.