Which is mostly what I feel also happens with LLMs producing code: useful to start with, but not more than that. We programmers still have a job. For the moment.
Producing code is like producing syntactically correct algebra. It has very little value on its own.
I've been trying to pair on system design with ChatGPT, and it feels just like talking with a person who's confident and regurgitates trivia but doesn't really understand. No sense of self-contradiction, doubt, or curiosity.
I'm very, very impressed with the language abilities, and the regurgitation can be handy, but has there been a single novel discovery by an LLM? Even a (semantic) simplification of a complicated theory would be valuable.