
I use ChatGPT for coding / API questions pretty frequently. It's bad at writing code with any kind of non-trivial design complexity.

There have been a bunch of times where I've asked it to write me a snippet of code, and it cheerfully gave me back something that doesn't work for one reason or another. Hallucinated methods are common. Then I ask it to check its code, and it'll find the error and give me back code with a different error. I'll repeat the process a few times before it eventually gets back to code that resembles its first attempt. Then I'll give up and write it myself.

As an example of a task that it failed to do: I asked it to write me an example Python function that runs a subprocess, prints its stdout transparently (so that I can use it for running interactive applications), but also records the process's stdout so that I can use it later. I wanted something that used non-blocking I/O methods, so that I didn't have to explicitly poll every N milliseconds or something.
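For what it's worth, here is one way that task could be approached (a minimal sketch, not what ChatGPT produced, and ignoring stdin for truly interactive use): Python's asyncio subprocess API gives readiness-driven reads, so there's no fixed polling interval. The function name `run_and_capture` is just illustrative.

    import asyncio

    async def run_and_capture(*cmd):
        """Run a command, echo its stdout live, and return the full output."""
        proc = await asyncio.create_subprocess_exec(
            *cmd, stdout=asyncio.subprocess.PIPE
        )
        chunks = []
        while True:
            chunk = await proc.stdout.read(4096)  # resumes only when data is ready
            if not chunk:
                break
            print(chunk.decode(), end="", flush=True)  # echo transparently
            chunks.append(chunk)
        await proc.wait()
        return b"".join(chunks).decode()

    # usage: output = asyncio.run(run_and_capture("ls", "-la"))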




Honestly, I find that when GPT starts to lose the plot it's a good time to refactor and then keep on moving. "Break this into separate headers or modules and give me some YAML-like markup with function names, return types, etc. for each file." Or just use stubs (see the sketch below) instead of dumping every line of code in.
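As a rough illustration of the stub idea (hypothetical module and function names, not from the thread), the context you hand back can be as thin as signatures plus one-line docstrings:

    # audio_io.py -- stubs only, bodies elided
    def load_wav(path: str) -> "np.ndarray":
        """Read a WAV file into a float32 sample array."""
        ...

    def resample(samples: "np.ndarray", src_rate: int, dst_rate: int) -> "np.ndarray":
        """Resample audio to a new sample rate."""
        ...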



