
About LLMs

LLMs are great tools, but that’s it. They are tools.

And like any tool, they can be used well or poorly, especially poorly when the user is not aware of their limitations. For junior developers, they can be a crutch that prevents them from learning the fundamentals of programming. LLMs do not understand code; they just predict the next token based on the input. They have no idea about the context, the intent, or the consequences of their output. And they can give users an illusion of competence, which is very dangerous.
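To make "predicting the next token" concrete, here is a toy sketch in Python. Everything in it is made up for illustration: real LLMs use neural networks over enormous vocabularies, not a hand-written lookup table. The point is the shape of the process: the loop just picks a likely continuation at each step, with no model of intent or correctness anywhere.

```python
# A toy sketch of next-token prediction (NOT a real LLM): the "model" is a
# hand-made table mapping a token to the probabilities of what comes next.
# All tokens and probabilities below are invented for illustration only.
TOY_MODEL = {
    "def":    {"add": 0.6, "main": 0.4},
    "add":    {"(a,": 0.9, "():": 0.1},
    "(a,":    {"b):": 1.0},
    "b):":    {"return": 1.0},
    "return": {"a": 0.7, "b": 0.3},
}

def next_token(token: str) -> str:
    """Greedily pick the most probable continuation; no understanding involved."""
    candidates = TOY_MODEL.get(token, {})
    return max(candidates, key=candidates.get) if candidates else "<eos>"

tokens = ["def"]
while tokens[-1] != "<eos>" and len(tokens) < 10:
    tokens.append(next_token(tokens[-1]))

print(" ".join(tokens))  # prints: def add (a, b): return a <eos>
```

The output looks like plausible code, and that is exactly the trap: plausibility is the only thing the process optimizes for.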

Proactive colleague who doesn’t know what they are doing


Do you know that one colleague who is always eager to help, but never really knows what they are doing? That’s what LLMs are like. They will happily generate code for you, a LOT OF CODE, but you have to review it carefully. It is often more work to fix the generated code than to write it from scratch.

LLMs are not AI in the sense of being intelligent or sentient. They are just very advanced autocomplete systems. If you have the design under control and let them generate the boilerplate code, they can be very useful and speed up your workflow. But you have to be the one in control.

LLMs are not a replacement for human thinking (at least not yet). We don’t even know what “thinking” really is on the fundamental level.