Kernighan's Lever (2012) (linusakesson.net)
96 points by tosh 39 days ago | 18 comments



Reactive programming was my object lesson here. Years ago I had a very brief love affair with .NET's Reactive Extensions (Rx) framework, using it for a streaming data-intake pipeline in a BI system. It made writing the data transformation code so quick and easy!

That all evaporated within a month of initial release, when I had to diagnose my first production defect. It turns out that usable call stacks are a very nice thing to have in your exception reports! I could already see that, in the long run, this design was going to cost us more to maintain than it saved us in initial development costs. Soon after, the team and I agreed to revert to doing it the crufty old boring way, and we were ultimately glad we did.

I've subsequently had similar experiences with other "easy button" compute frameworks, IoC containers, etc. They were all high-quality pieces of software that did a good job of what they do, and possibly did save on initial development costs. But when the beeper wakes you up at 3 AM, nothing beats a readable call stack that corresponds very closely to the lexical structure of the code.
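To make the call-stack point concrete, here's a minimal C# sketch (assuming the System.Reactive package; the names and the deliberately failing transform are illustrative, not from the original system). The point is where the exception surfaces: inside operator plumbing in one case, at the failing line in the other.

    using System;
    using System.Reactive.Linq;

    class RxStackDemo
    {
        static int Transform(int x) => 100 / (x - 3); // throws when x == 3

        static void Main()
        {
            // Rx version: the DivideByZeroException arrives via OnError with a
            // stack trace dominated by Select/Subscribe operator internals.
            Observable.Range(1, 5)
                .Select(Transform)
                .Subscribe(
                    value => Console.WriteLine(value),
                    error => Console.WriteLine(error.StackTrace));

            // The crufty old boring version: the same failure's stack trace
            // points straight at this loop and at Transform.
            try
            {
                foreach (var x in new[] { 1, 2, 3, 4, 5 })
                    Console.WriteLine(Transform(x));
            }
            catch (DivideByZeroException e)
            {
                Console.WriteLine(e.StackTrace);
            }
        }
    }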

The moral of the story for me has been that it's always good to experiment with clever tools and techniques. But you should also remember that you have the option to thank them for the learning experience they provided while choosing to put something unsexy into production.


The C# community has a problem with haphazard implementations of complex ideas. Nothing there is complete enough to add value, and yet everybody wants the shiniest and most convoluted tools.

Just because your event-based framework didn't give you debugging information, it doesn't mean that FRP sucks, and just because your IoC containers depend on undiscoverable undocumented voodoo setup procedures, it doesn't mean IoC containers suck.

And ok, now I'll do the meme... both of those things do suck (for almost every use case people put them through), but it's not because of the perfectly solvable issues you see on their C# implementations.
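For what it's worth, "discoverable" setup doesn't have to be voodoo. A rough sketch with Microsoft.Extensions.DependencyInjection (my choice of container, purely for illustration): every registration is a line of code you can grep for and step through, as opposed to assembly-scanning conventions that wire things up invisibly.

    using System;
    using Microsoft.Extensions.DependencyInjection;

    interface IGreeter { string Greet(string name); }

    class PlainGreeter : IGreeter
    {
        public string Greet(string name) => $"Hello, {name}";
    }

    class Program
    {
        static void Main()
        {
            // Explicit, discoverable registration: grep for AddSingleton and
            // you know exactly what the container will build.
            var services = new ServiceCollection();
            services.AddSingleton<IGreeter, PlainGreeter>();

            using var provider = services.BuildServiceProvider();
            Console.WriteLine(provider.GetRequiredService<IGreeter>().Greet("HN"));
        }
    }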


FWIW I'm not exclusively a .NET programmer, and I've had similar experiences on Java and Python. (Sometimes more painful, sometimes less. I think .NET is usually pretty middle-of-the-road on this sort of thing.) My wake-up moment just happened to come while I was working on a .NET project.


Hm, what the post's author says is reasonable on its own, but I'm confident Kernighan's intent was really about avoiding clever code when simple, dumb code does the job. Or, as Rob Pike[0] puts it:

> Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. (Even if n does get big, use Rule 2 first.)

> Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.
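To illustrate Rule 3 concretely (my example, not Pike's): for small n, a plain linear scan is simple, correct, and fast enough, while the "fancy" hash-based version pays allocation and hashing constants that only earn their keep once n gets big.

    using System;
    using System.Collections.Generic;

    class Rule3Demo
    {
        // Simple: an O(n) scan. For n in the tens, this is all you need.
        static bool ContainsSimple(int[] items, int target)
        {
            foreach (var item in items)
                if (item == target) return true;
            return false;
        }

        // Fancy: O(1) lookups after an O(n) build, but the allocation and
        // hashing overhead dominates when n is small.
        static bool ContainsFancy(int[] items, int target)
            => new HashSet<int>(items).Contains(target);

        static void Main()
        {
            var small = new[] { 3, 1, 4, 1, 5 };
            Console.WriteLine(ContainsSimple(small, 4)); // True
            Console.WriteLine(ContainsFancy(small, 9));  // False
        }
    }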

It's a bit ironic then for this post to rely on such a convoluted interpretation instead of the simple, dumb one.

[0]: https://users.ece.utexas.edu/~adnan/pike.html


Yes, Kernighan absolutely didn't mean anything this article talks about.

It's been a while since I read The Elements of Programming Style, but most of it is in the form of "here is some bad code we encountered, here is how we made it better, and this is the lesson we can learn from it". Some of these are pretty specific and outdated (e.g. IIRC "avoid the arithmetic IF" is one of them) and others are basic common-sense advice that holds up well today. It's all very down to earth and pragmatic and centred around real-world code.

Taking a quote out of that context and then musing that maybe perhaps possibly the author could have meant something different/deeper with it is not really helpful.


Yeah, it's a clever (ha!) interpretation, but I also don't think Kernighan meant it that way. Basically, the article says: yes, you should make your code as clever as possible, so that your future self (or others who will be fortunate (?) enough to work with it) may grow professionally, either by finding the obscure issues hidden within it or by trying to wrap their heads around it in order to extend it (often in ways that are at odds with your clever design). I don't know about others, but I can imagine better ways of professional growth...


This is a great piece.

> ... since you are confident that you understand how the code works, having written it yourself, you feel that you must be able to figure out what is going on. ... Suddenly you see it, and you're blinded by a bright light as all the pieces fall into place.

There's a particular frame of mind like 'I feel that there's a bug in this specific bit of code'. I often find that once I make that shift it's suddenly easy to see the bug. I have tried to cultivate this feeling, but it's not easy to suddenly conjure up. A bit like flow state.


I've liked this post for a long time. It articulates a way of growing in this field that I haven't seen in a lot of places. Whenever I've posted it in a Slack or similar, it proves pretty divisive: some people just don't seem to like it.


What don't they like about it?


I’ve heard that it’s depressing! Hard for me to get that one.


It is a pedagogically useful exercise to spend some time trying to figure out what went wrong before firing up the debugger or writing new test cases. One approach is to try to identify the assumptions, and especially the tacit ones, that would have to be true for the code to work, and which might, if false, lead to the specific error observed.

While I think it is wise to heed the warning implicit in Kernighan's aphorism, the question it poses can be answered by noting that finding errors provides additional information, allowing you to debug it without first getting any more clever. What makes debugging harder than writing is that in the former, you are dealing with reality, not your simplified, incomplete and possibly tacit assumptions about it.


I often think of it almost exactly backwards from that: "What it's doing should be impossible. So which precursor that I think is impossible would have to be possible for it to fail this way?" I often come up with two or three possibilities. And then: "How can I find out which one it is?" This often involves re-running it with enhanced logging.

And then repeat, with the new thing that should be impossible but is still happening, until I get to the root cause.
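In code form, the loop looks roughly like this (a sketch; Order and the two hypotheses are made up for illustration): turn each "impossible" precursor into a cheap runtime check with logging, re-run, and see which one fires.

    using System;

    class ImpossibleDemo
    {
        record Order(int Quantity);

        static void Process(Order order)
        {
            // Hypothesis 1: the order is somehow null.
            if (order is null)
            {
                Console.Error.WriteLine("IMPOSSIBLE: null order");
                return;
            }

            // Hypothesis 2: the quantity went negative upstream.
            if (order.Quantity < 0)
                Console.Error.WriteLine($"IMPOSSIBLE: negative quantity {order.Quantity}");

            // ... the actual work that was failing goes here ...
        }

        static void Main()
        {
            Process(null);          // fires hypothesis 1
            Process(new Order(-2)); // fires hypothesis 2
        }
    }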


We frequently see what we expect to see. If I'm convinced something isn't in a certain location, I won't find it, digital or analog.


> 'I feel that there's a bug in this specific bit of code.'

The precise feeling I try to avoid leading PR reviewers into.


The article first talks about the benefits of sophisticated code: you can grow while debugging it. Further on, the concept of flow is explained.

Somehow there is a mismatch.

In the graph depicting flow, you see that you achieve the best results flow-wise (and, as a bonus, growth-wise) when you take on challenges slightly harder than your comfort zone allows.

The mismatch is that debugging is quite a large notch harder than implementing. There might be a race condition, or you understood an API differently from how it was implemented, or some FPGA's software registers are poorly documented. So you spend a lot of time in the frustration/impostor-syndrome zone, as opposed to the comfortable growth-and-flow zone.

The effect is much more pronounced when the code or design is not from your own hand. You first need to understand other people's clever tricks or paradigms and find the holes in the reasoning or, even worse, in the several half-baked refactorings that happened afterwards.


> Debugging is twice as hard...

Not an axiom I would agree with, and I find it strange that the rest of the article builds on this assumption.


I'm always frustrated coding and debugging 'cause I'm next level:

(seriously though, I liked the read and it makes me feel a bit more sane when I struggle to debug my own code! It kind of feels weird that that can happen, but this puts a positive twist on it, which I feel is likely quite true :))


It assumes that "skill" is a one-dimensional value. But the ability to write quality code and the ability to debug are a bit orthogonal.

Being better at debugging doesn't necessarily make you better at writing less complex, more approachable code. Though debugging should teach you which patterns cause bugs or impede debugging.

And what do you do once you are so skilled your software doesn't have bugs? :P

Regarding self improvement, Aaron Swartz put it better here: https://web.archive.org/web/20240928215405/http://www.aarons... (tldr: do things you don't like)



