
Great article. I just want to comment on this quote from the article:

"Really good developers do 90% or more of the work before they ever touch the keyboard;"

While that may be true sometimes, I think that ignores the fact that most people can't keep a whole lot of constraints and concepts in their head at the same time. So the amount of pure thinking you can do without writing anything at all is extremely limited.

My solution to this problem is to actually hit the keyboard almost immediately once I have one or more possible ways to go about a problem, without first fully developing them into a well specified design. And then, I try as many of those as I think necessary, by actually writing the code. With experience, I've found that many times, what I initially thought would be the best solution turned out to be much worse than what was initially a less promising one. Nothing makes problems more apparent than concrete, running code.

In other words, I think that rather than just thinking, you need to put your ideas to the test by actually materializing them into code. Only then can you truly appreciate all the consequences your ideas have on the final code.

This is not an original idea, of course, I think it's just another way of describing the idea of software prototyping, or the idea that you should "throw away" your first iteration.

In yet different words: writing code should be actually seen as part of the "thinking process".




I had the same thought as I read that line. I think he's actually describing Linus Torvalds there, who, legend has it, thought about Git for a month or so and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.

But for the rest of us (especially myself), it seems to be more like an interplay between thinking of what to write, writing it, testing it, thinking some more, changing some minor or major parts of what we wrote, and so on, until it feels good enough.

In the end, it's a bit of an art, coming up with the final working version.


Git is a special case, I would say, because it is fairly self-contained. It had minimal dependencies on external components; it mostly relied on the filesystem API. Everything else was “invented” inside of Git.

This is special because most real-world systems have a lot more dependencies. That's when experimentation is required, because one cannot know all the relevant APIs and their behaviors beforehand. Therefore the only way is to do it and find out.

Algorithms are in essence mathematical problems; they are abstract, so in principle they can be solved in your head or with pen and paper.

The reality is that most programming problems are not algorithms but connecting and translating between systems. And these systems are like black boxes that require exploration.


> Git is a special case, I would say, because it is fairly self-contained. It had minimal dependencies on external components; it mostly relied on the filesystem API. Everything else was “invented” inside of Git.

This type of developer tends to prefer building software like this. There is a whole crowd of hardcore C/C++/Rust devs who eschew taking dependencies in favour of writing everything themselves (and they mostly nerd-snipe themselves with excessive NIH syndrome, like Jonathan Blow going off to write PowerPoint[1]...)

Torvalds seems to be mostly a special case in that he found a sufficiently low-level niche where extreme NIH is not a handicap.

[1]: https://www.youtube.com/watch?v=t2nkimbPphY&list=PLmV5I2fxai...


It's really easy to remember the semantics of C. At least if you punt a bit on the parts that savage you in the name of UB. You know what libc has in it, give or take, because it's tiny.

Therefore if you walk through the woodland thinking about a program to write in C there is exactly zero interrupt to check docs to see how some dependency might behave. There is no uncertainty over what it can do, or what things will cost you writing another language to do reasonably.

Further, when you come to write it, there's friction when you think "oh, I want a trie here", but that's of a very different nature to "my python dependency segfaults sometimes".

It's probably not a path to maximum output, but from the "programming is basically a computer game" perspective it has a lot going for it.

Lua is basically the same. An underappreciated feature of a language is never, ever having to look up the docs to know how to express your idea.


I.e. closed-off systems that don't interact with external systems.

And these are the type of coders that also favor types.

Whereas if you write the 'informational' type of programs described by Rich Hickey -- i.e. ones that interact with outside systems a lot -- you will find a lot of dependencies, and types get in the way.


I tend to see this as a sign that a design is still too complicated. Keep simplifying, which may include splitting into components that are separately easy to keep in your head.

This is really important for maintenance later on. If it's too complicated now to keep in your head, how will you ever have a chance to maintain it 3 years down the line? Or explain it to somebody else?


More than half of my time goes to figuring out the environment. Just as you learn a new language by doing the exercises, I'm learning a bunch of stuff while I try to port our iptables semantics to firewalld: [a] GitLab CI/CD instead of Jenkins, [b] getting firewalld (requires systemd) running in a container, [c] the ansible firewalld module doesn't support --direct, which is required for destination filtering, [d] inventing a test suite for firewall rules, since the prebuilt ones I've found would involve weeks of yak shaving to get operating. So I'm learning about four environments/languages at once - and this is typical for the kind of project I get assigned. There's a *lot* of exploratory coding happening. I didn't choose this stuff - it's part of the new requirements. I try for simple first, and often the tools don't support simple.


This is the only practical way (IMHO) to do a good job, but there can be an irreducibly complex kernel to a problem which manifests itself in the interactions between components even when each atomic component is simple.


Then the component APIs need improvement.


Without an argument for this always being possible, this just looks like unjustified dogma from the Clean Code era.


At the microlevel (where we pass actual data objects between functions), the difference in the amount of work required between designing data layout "on paper" and "in code" is often negligible and not in favor of "paper", because some important interactions can sneak out of sight.

I do data flow diagrams a lot (to understand the domain, figure out dependencies, and draw rough component and procedure boundaries) but leave the details of data formats and APIs to exploratory coding. It still makes me change the diagrams, because I've missed something.


The real-world bank processes themselves are significantly more complicated than any one person can hold in their head. Simplification is important, but only up to the point where it still delivers 100% of the required functionality.

Code also functions as documentation of the actual process. In many cases “whatever the software does” is the process itself.


If you can do that, sure. Architecting a clear design beforehand isn't always feasible though, especially when you're doing a thing for the first time or you're exploring what works and what doesn't, like in game programming, for example. And then, there are also the various levels at which design and implementation take place.

In the end, I find my mental picture is still the most important. And when that fades after a while, or for code written by someone else, then I just have to go read the code. Though it may exist, so far I haven't found a way that's obviously better.

Some things I've tried (besides commenting code) are diagrams (they lose sync over time) and AI assistants to explain code (not very useful yet). I didn't feel they made a difference, but we have to keep learning in this job.


Of course it can be helpful to do some prototyping to see which parts still need design improvements and to understand the problem space better. That's part of coming up with the good design and architecture, it takes work!


Sometimes, as code gets written, it becomes clearer what kind of component split is better, which things can be cleanly separated and which less so.


I don't do it in my head. I do diagrams, then discuss them with other people until everyone is on the same page. It's amazing how convoluted "get data from the DB, do something to it, send it back" can get, especially if there is a queue or multiple consumers in play, when it's actually the simplest thing in the world, which is why people get over-confident and write super-confusing code.


Diagrams are what I tend to use as well. My background is engineering (the non-software kind), and for solving engineering problems one of the first things we are taught to do at uni is to sketch out the problem; I have carried that habit over to when I need to write a computer program.

I map out on paper the logical steps my code needs to follow, a bit like a flow chart tracking the changes in state.

When I write code I'll create a skeleton with the placeholder functions I think I'll need as stubs and fill them out as I go. I'm not wedded to the design; sometimes I'll remove or replace whole sections as I get further in, but it helps me think about it if I have the whole skeleton "on the page".
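
For illustration, a minimal Python sketch of that kind of skeleton (the function names and steps here are made up, just to show the shape):

    # A skeleton first: placeholder functions for the steps I think I'll need.
    # Everything here is a stub to be filled in, replaced, or deleted later.

    def load_input(path: str) -> list[str]:
        """Stub: real parsing comes later; return a hardcoded sample for now."""
        return ["alpha", "beta", "gamma"]

    def transform(records: list[str]) -> list[str]:
        """Stub for the interesting part -- currently a no-op."""
        return records

    def write_output(records: list[str]) -> None:
        """Stub: just print until the output format is decided."""
        print("\n".join(records))

    def main() -> None:
        write_output(transform(load_input("input.txt")))

    if __name__ == "__main__":
        main()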


Well that explains why Git has such a god awful API. Maybe he should've done some prototyping too.


I'm going to take a stab here: you've never used cvs or svn. git, for all its warts, is quite literally a 10x improvement on those, which is what it was (mostly) competing with.


I started my career with Clearcase (ick) and added CVS for personal projects shortly after. CVS always kind of sucked, even compared with Clearcase. Subversion was a massive improvement, and I was pretty happy with it for a long time. I resisted moving from Subversion to Git for a while but eventually caved like nearly everyone else. After learning it sufficiently, I now enjoy Git, and I think the model it uses is better in nearly every way than Subversion.

But the point of the parent of your post is correct, in my opinion. The Git interface sucks. Subversion's was much more consistent, and therefore better. Imagine how much better Git could be if it had had a little more thought and consistency put into the interface.

I thought it was pretty universally agreed that the Git interface sucks. I'm surprised to see someone arguing otherwise.


Subversion was a major improvement over CVS, in that it actually had sane branching and atomic commits. (In CVS, if you commit multiple files, they're not actually committed in a single action - they're individual file-level transactions that are generally grouped together based on commit message and similar (but not identical!) timestamps.) Some weirdness like using paths for branching, but that's not a big deal.

I actually migrated my company from CVS to SVN in part so we could do branchy development effectively, and also so I personally could use git-svn to interact with the repo. We ended up eventually moving to Mercurial, since Git didn't have a good Windows story at the time. Mercurial and Git are pretty much equivalent in my experience; they just decided to give things confusing names. (git fetch / pull and hg fetch / pull have their meanings swapped.)


> I thought it was pretty universally agreed

Depends what you consider “universally agreed”.

At least one person (me) thinks that the git interface is good enough as is (function > form here), and that regexps are not too terse - that's the whole point of them.

Related if you squint a lot: https://prog21.dadgum.com/170.html


It's really hard to overstate how much of a sea change git was.

It's very rare that a new piece of software just completely supplants existing solutions as widely and quickly as git did in the version control space.


And all that because the company owning the commercial version control system they had been using free of charge until that point got greedy, and wanted them to start paying for its use.

Their greed literally killed their own business model, and brought us a better versioning system. Bless their greedy heart.


What do you mean by API? Linus's original Git didn't have an API, just a bunch of low-level C commands ('plumbing'). The CLI ('porcelain') was originally just wrappers around the plumbing.


Those C functions are the API for git.


On the other hand, the hooks system of git is very good API design imo.
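
For anyone unfamiliar: a hook is just any executable dropped into .git/hooks/ under a well-known name, and for pre-commit a non-zero exit code aborts the commit. A minimal sketch in Python (the "block TODOs" policy is a made-up example):

    #!/usr/bin/env python3
    # Saved as .git/hooks/pre-commit (and made executable).
    # Git runs this before each commit; exiting non-zero aborts the commit.
    import subprocess
    import sys

    # List the files staged for this commit.
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    # Toy policy: refuse commits that touch files containing "TODO".
    offenders = []
    for path in staged:
        try:
            with open(path, encoding="utf-8", errors="ignore") as f:
                if "TODO" in f.read():
                    offenders.append(path)
        except (FileNotFoundError, IsADirectoryError):
            continue  # deleted files, submodules, etc.

    if offenders:
        print("commit blocked, TODOs found in: " + ", ".join(offenders))
        sys.exit(1)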


Yeah, could be.. IIRC, he said he doesn't find version control and databases interesting. So he just did what had to be done, did it quickly and then delegated, so he could get back to more satisfying work.

I can relate to that.


baseless conjecture


> I think he's actually describing Linus Torvalds there, who, legend has it, thought about Git for a month or so and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.

That sounds a bit weird. As I remember, Linux developers were using a semi-closed system called BitKeeper for many years. For some reason the open-source systems at that time weren't sufficient. The problems with BitKeeper were constantly discussed, so it might be that Linus was thinking about the problems for years before he wrote git.


Well, if you want to take what I said literally, it seems I need to explain..

My point is, he thought about it for some time before he was free to start the work, then he laid down the basics in less than a week, so he was able to start using Git to build Git, polished it for a while and then turned it over.

Here's an interview with the man himself telling the story 10 years later, a very interesting read:

https://www.linuxfoundation.org/blog/blog/10-years-of-git-an...

https://en.wikipedia.org/wiki/Git#History


>> …and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.

How very biblical. “And Torvalds saw everything that he had made, and behold, it was very good. And there was evening, and there was morning—the sixth day.”


> And there was evening, and there was morning—the sixth day.

I presume you're using zero-based numbering for this?


You were downvoted, but that part made me smile for the same reason :-)

> The seventh day he rested

This is obviously Sunday :-)


I tend to iterate.

I get a general idea, then start writing code; usually the "sticky" parts, where I anticipate the highest likelihood of trouble.

I've learned that I can't anticipate all the problems, and I really need to encounter them in practice.

This method often means that I need to throw out a lot of work.

I seldom write stuff down[0], until I know that I'm on the right track, which reduces what I call "Concrete Galoshes."[1]

[0] https://littlegreenviper.com/miscellany/evolutionary-design-...

[1] https://littlegreenviper.com/miscellany/concrete-galoshes/


I do the same, iterate. When I am happy with the code I imagine I've probably rewritten it roughly three times.

Now I could have spent that time "whiteboarding" and it's possible I would have come close to the same solution. But whiteboarding in my mind is still guessing, anticipating - coding is of course real.

I think that as you gain experience as a programmer you are able to intuit the right way to begin to code a problem, the iterating is still there but more incremental.


I think once you are an experienced programmer, beyond being able to break down the target state into chunks of task, you are able to intuit pitfalls/blockers within those chunks better than less experienced programmers.

An experienced programmer is also more cognizant of the importance of architectural decisions, hitting the balance between keeping things simple vs abstractions and the balance between making things flexible vs YAGNI.

Once those important bits are taken care of, the rest is more or less personal style.


Yeah, while I understand rewrite-based iterations, and have certainly done them before, they've gotten less and less common over time because I'm thinking about projects at higher levels than I used to. The final design is more and more often what I already have in my head before I start.

I never hold all the code designed in my head at once, but it's more like multiple linked thoughts. One idea for the overall structure composed of multiple smaller pieces, then the smaller pieces each have their own design that I can individually hold in my head. Often recursively down, depending on how big the given project is and how much it naturally breaks down. There's certainly unknowns or bugs as I go, but it's usually more like handling an edge case than anything wrong with the design that ends in a rewrite.


I don’t think this methodology works, unless we are very experienced.

I wanted to work that way, when I was younger, but the results were seldom good.

Good judgment comes from experience. Experience comes from bad judgment.

-Attributed to Nasrudin


Who's Nasrudin?

Apparently this quote has been attributed to an Uncle Zeke :) [0]

[0]: https://quoteinvestigator.com/2017/02/23/judgment/


Nasrudin (or Nasreddin)[0] is an apocryphal Sufi priest, who is sort of a "collection bin" for wise and witty sayings. Great stories. Lots of humor, and lots of wisdom.

One of my "go-tos" from him, is the Smoke Seller[1]. I think that story applies to the Tech Scene.

I first heard the GC quote as attributed to Will Rogers, then, to Rita Mae Brown.

[0] https://en.wikipedia.org/wiki/Nasreddin

[1] https://www.tell-a-tale.com/nasreddin-hodja-story-smoke-sell...


Yeah, same here. I rewrite code until I'm happy with it. When starting a new program, this can waste a lot of time because I might need to spend weeks rewriting and re-tossing everything until I feel I've got it good enough. I've tried to do it faster, but I just can't. The only way is to write working code and reflect on it.

My only optimization of this process is to use Java and to not throw everything out, but keep refactoring. IntelliJ IDEA allows for very quick and safe refactoring cycles, so I can iterate on the overall architecture or on any selected component.

I really envy people who can get it right the first time. I just can't, despite having 20 years of programming under my belt. And when time is tight and I need to accept an obviously bad design, that's what burns me out.


Nobody gets it right the first time.

Good design evolves from knowing the problem space.

Until you've explored it you don't know it.

I've seen some really good systems that have been built in one shot. They were all ground up rewrites of other very well known but fatally flawed systems.

And even then, within them, much of the architecture had to be reworked or also had some other trade off that had to be made.


The secret to designing entire applications in your head is to be intimately familiar with the underlying platform and the gotchas of the technology you're using. And the only way to learn those is to spend a lot of time in hands-on coding and active study. It also implies that you're using the same technology stack over and over and over again instead of pushing yourself into new areas. There's nothing wrong with this; I actually prefer sticking to the same tech stack, so I can focus on the problem itself; but I would note that the kind of 'great developer' in view here is probably fairly one-dimensional with respect to the tools they use.


You make me feel a lot better about my skill set!


I think first on a macro level, and use mind maps and diagrams to keep things linked and organised.

As I've grown older, the importance of architecture over micro decision has become blindingly apparent.

The micro can be optimised. Macro level decisions are often permanent.


I think this is probably a lot of the value of YAGNI.

The more crap you add the harder it is to fix bad architecture.

And the crap is often stuff that would be trivial to add if the bad architecture weren't there, so if you fix that you can add the feature when you need it in a week.


I think that's probably part of it; but on a really simple level, with YAGNI you're not expending effort on something that isn't needed, which reduces cost.

What I try to do is think about the classes of functionality that might be needed in the future. How could I build feature X in a year's time?

Leave doors open, not closed.


Right, I always thought this is what TDD is for: very often I design my code in tests and let them kind of guide my implementation.

I kind of imagine what the end result should be in my head (given value A and B, these rows should be X and Y), then write the tests in what I _think_ would be a good api for my system and go from there.

The end result is that my code is testable by default, and I get to go through multiple cycles of Red -> Green -> Refactor until I end up with something I'm happy with.
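
A minimal sketch of one such cycle in Python, with a hypothetical rows_for function standing in for "given value A and B, these rows should be X and Y":

    import unittest

    # Red: write the test against the API I *think* I want...
    class RowsForTest(unittest.TestCase):
        def test_rows_for_two_values(self):
            # given values A and B, these rows should be X and Y
            self.assertEqual(rows_for("A", "B"), [("A", "X"), ("B", "Y")])

    # Green: ...then the simplest implementation that passes.
    LOOKUP = {"A": "X", "B": "Y"}

    def rows_for(*values):
        return [(v, LOOKUP[v]) for v in values]

    # Refactor: reshape freely once it's green, with the test as a safety net.
    if __name__ == "__main__":
        unittest.main()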

Does anyone else work like this?


Sometimes, when I feel like I know the basic domain and can think of something reasonable that I could use as a "north star" test end-point, I work like this. Think anything that could be a unit test, or even a simple functional test. But when I don't know the domain, or if it's some complex system that I have to come up with from scratch, writing tests first often makes no sense at all. Then I'd usually start literally drawing on paper, or typing descriptions of what it might do, then just start coding, and the tests come much, much later.

Right tool for the job, as always! The only thing I can't stand is blind zealotry to one way or the other. I've had to work with some real TDD zealots in the past, and long story short, I won't work with people like that again.


TDD comes up with some really novel designs sometimes.

Like, I expect it should look one way, but after I'm done with a few TDD cycles I'm at a state from which the design I expected is either hard to reach or unnecessary.

I think this is why some people don't like TDD much, sometimes you have to let go of your ideas, or if you're stuck to them, you need to go back much earlier and try again.

I kind of like this though, makes it kind of like you're following a choose your own adventure book.


I prefer to write an initial implementation, and then in the testing process figure out which interfaces simplify my tests, and then I refactor the implementation to use those interfaces. Generally, this avoids unnecessary abstraction, as the interfaces for testing tend to be the same ones you might need for extensibility.
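
A tiny Python sketch of that flow, with hypothetical names: the clock seam below is introduced only because the test needs it, and it happens to be the same seam you'd want for extensibility.

    from datetime import datetime, timezone

    # First pass just called datetime.now() directly. Writing the test showed
    # that time was the awkward dependency, so it gets pulled out behind a
    # minimal interface during the refactor.

    class SystemClock:
        def now(self) -> datetime:
            return datetime.now(timezone.utc)

    class FixedClock:
        """Test double discovered while writing the test."""
        def __init__(self, fixed: datetime):
            self._fixed = fixed

        def now(self) -> datetime:
            return self._fixed

    def greeting(clock) -> str:
        return "good morning" if clock.now().hour < 12 else "good afternoon"

    # In tests:
    assert greeting(FixedClock(datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc))) == "good morning"
    # In production:
    print(greeting(SystemClock()))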


I'm not sure why folks think they need to hold all this in their head. For me, at least, the "think about the problem stage" involves a lot of:

* Scribbling out some basic design notes.
* Figuring out the bits I'm not completely sure about.
* Maybe coding up some 'prototype' code to prove out the bits I'm less sure about.
* Repeat until I think I know what I'm doing / have validated my assumptions.
* Put together a 'more formal' design to share with the team. Sometimes my coworkers think of things that I hadn't, so it's not quite a formality. :)
* Code the thing up.

By the time I get to the "code it up" stage, I've probably put in most of the work. I've written things down as design, but nothing hyperdetailed. Just what I need to remind myself (and my team) of the decisions I've made, what the approach is, the rejected ideas, and what needs doing. Some projects need a fair bit on paper, some don't.


My pet theory is that developers exist on a spectrum between "planner" and "prototyper" - one extreme spends a lot of time thinking about a solution before putting it into code - hopefully hitting the goal on first attempt. The other iterates towards it. Both are good to have on the team.


I could not agree more. It's rare to write a program where you know all the dependencies, the libraries you will use, and the overall effect on other parts of the program by heart. So a gradual design process is best.

I would point out, though, that that part also touched on understanding requirements, which is often a very difficult process. We might have a technical requirement conjured up, by someone less knowledgeable about the inner workings, from a customer requirement, and the resolution of the technical requirement may not even come close to addressing the end users' use case. So a lot of time also goes into understanding what it is that the end users actually need.


I agree this is how it often goes.

But this also makes it difficult to give accurate estimates, because you sometimes need to prototype 2, 3, or even more designs to work out the best option.

> writing code should be actually seen as part of the "thinking process".

Unfortunately, most of the time leadership doesn't see things this way. For them, the tough work of thinking ends with architecture, or maybe one layer down. The engineers are then responsible only for translating those designs into software, just by typing away at a keyboard.

This leads to a mismatch in delivery expectations between leadership and developers.


In my opinion, you shouldn't need to prototype all of these options, but you will need to stress-test any points where you have uncertainty.

The prototype should provide you with cast iron certainty that the final design can be implemented, to avoid wasting a huge amount of effort.


If you know so little that you have to make 3 prototypes to understand your problem, do you think designing it by any other process will make it possible to give an accurate estimate?


(not so much of a reply, but more of my thoughts on the discussion in the replies)

I would say the topic is two-sided.

The first is when we do greenfield development (maybe of some new part of already existing software): the domain is not really well known, and the direction of future development even less so. So there is not much to think about: document what is known, make a rough layout of the system, and go coding. Investing too much in the design at the early stages may result in something (a) overcomplicated, (b) missing very important parts of the domain and thus irrelevant to the problem, or (c) having nothing to do with how the software will evolve.

The second (and it is that side I think the post is about) is when we change some already working part. This time it pays hugely to ponder on how to best accommodate the change (and other information the request to make this change brings to our understanding of the domain) into the software before jumping to code. This way I've managed to reduce what was initially thought to take days (if not weeks) of coding to writing just a couple of lines or even renaming an input field in our UI. No amount of exploratory coding of the initial solution would result in such tremendous savings in development time and software complexity.


I agree with the spirit of "writing code" as part of the thinking process but I have to point out a few very dangerous pitfalls there.

First is the urge to write the whole prototype yourself from scratch. Not necessary, better avoided. You should just hack some things together, or pick something close to what you want off GitHub. The idea is to have something working. I am a big proponent of implementing my ideas in a spreadsheet, then moving off to some code.

Second, modern software solutions are complex (think Kubernetes, cloud provider quirks, authentication, SQL/NoSQL, external APIs) and it is easy to get lost in the minutiae; they shroud the original idea, and it takes strenuous effort to think clearly through the layers. To counter this, I keep a single project in my language of choice with the core business logic. No dependencies; everything else is stubbed or mocked. It runs in the IDE, on my laptop, offline, with tests. This extra effort has paid off well in focusing on core priorities and identifying when bullshit tries to creep in. You could also use diagrams or whatever, but working executable code is awesome to have.
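
As an illustration only (the domain and names below are hypothetical), that kind of dependency-free core with everything external stubbed can be as small as:

    # Core business logic in one plain module; the external payment API is
    # reduced to a stub so the whole thing runs offline, in the IDE, with tests.

    class PaymentGatewayStub:
        """Stands in for the real cloud payment API."""
        def charge(self, account_id: str, cents: int) -> bool:
            return cents <= 10_000  # pretend anything over $100 is declined

    def checkout(cart: list[int], account_id: str, gateway) -> str:
        total = sum(cart)
        if total <= 0:
            return "nothing to charge"
        return "paid" if gateway.charge(account_id, total) else "declined"

    # Offline checks: no Kubernetes, no credentials, no network.
    assert checkout([2_500, 1_200], "acct-1", PaymentGatewayStub()) == "paid"
    assert checkout([20_000], "acct-1", PaymentGatewayStub()) == "declined"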

Third is to document my findings. I often tend to tinker with the prototype way beyond the point of any meaningful threshold. It's 2am before I know it, and I kind of lose the lessons when I start the next day. Keeping a log in parallel with building the prototype helps me stay focused, be clear about what my goals are, and avoid repeating the same mistakes.

Fourth is the rather controversial topic of estimation. When I have a running prototype, I tend to get excited and too optimistic with my estimates. Rookie mistake. Always pad your estimates an order of magnitude higher. You still need to go through a lot of BS to get it into production. Remember that you will be working with a team, mostly idiots. Linus works alone.


Pen, paper, diagrams.


Xmind and draw.io


That quote is already a quote in the article. The article author himself writes:

> What is really happening?

> • Programmers were typing on and off all day. Those 30 minutes are to recreate the net result of all the work they wrote, un-wrote, edited, and reworked through the day. It is not all the effort they put in, it is only the residue of the effort.

So at least there the article agrees with you.


The way I read that is that only 10% of the work is writing out the actual implementation that you're sticking with. How you get there isn't as important. I.e. someone might want to take notes on paper and draw graphs, while others might want to type things out. That's all still planning.


Agree with your point. I think “developers that do 90% of their thinking before they touch the keyboard are really good” is the actual correct inference.

Plenty of good developers use the code as a notepad / scratch space to shape ideas, and that can also get the job done.


I'd like to second that, especially if combined with a process where a lot of code should get discarded before making it into the repository. Undoing and reconsidering initial ideas is crucial to any creative flow I've had.


Mathematics was invented by the human mind to minimize waste and maximize work productivity. By allowing reality mapping abstractions to take precedence over empirical falsifications of propositions.

And what most people can't do, such as keeping in their heads absolutely all the concepts of a theoretical computer software application, is an indication that real programmers exist on a higher elevation where information technology is literally second nature to them. To put it bluntly and succinctly.

For computer software development to be part of thinking, a more intimate fusion between man and machine needs to happen. Instead of the position that a programmer is a separate and autonomous entity from his fungible software.

The best programmers simulate machines in their heads, basically.


> The best programmers simulate machines in their heads, basically.

Yes, but they still suck at it.

That's why people create procedures like prototyping, test-driven design, type-driven design, paper prototypes, API mocking, etc.


The point is that there are no programmers who can simulate machines in their heads. These elite engineers only exist in theory, because if they did exist, they would appear so alien and freakish to you that they would never be able to communicate their form of software development paradigms and patterns. This rare type of programmer only exists in the future, assuming we're on a timeline toward such a singularity. And we're not, except for some exceptions that cultivate a colony away from what is commonly called the tech industry.

EDIT: Unless you're saying that SOLID, VIPER, TDD, etc. are already alien invaders from a perfect world and only good and skilled humans can adhere to the rules with uncanny accuracy?


String theory was invented by the human mind to minimize productivity and maximize nerd sniping. Pure math.


It's not even that; it's also the iceberg effect of all our personal knowledge bases: I'd rather experiment on my machine to figure out how I want to do something than read endless documentation.

Documentation is a good starter/unblocker but once I've got the basics down then I'll run wild in a REPL or something figuring out exactly how I want to do something.

Planning by pure thinking is a good way to end up with all sorts of things that weren't factored in, imo. We should always encourage play during the planning process.


I don't think the quote suggests that a programmer would mentally design a whole system before writing any code. As programmers, we are used to thinking of problems as steps needing resolution, and that's exactly the 90% there. When you're quickly prototyping to see what fits better as a solution to the problem you're facing, you must have already thought about what the requirements are, what the constraints are, and what a reasonable API would be for your use case. Poking around until you find a reasonable path forward means you have already defined which way is forward.


I don’t like that line at all.

Personally, I think good developers get characters on to the screen and update as needed.

One problem with so much upfront work is how missing even a tiny thing can blow it all up, and it is really easy to miss things.


> writing code should be actually seen as part of the "thinking process"

I agree. This is how I work as well. I start coding as quickly as possible, and I fully plan on throwing away my first draft and starting again. I do a fair bit of thinking beforehand, of course, but I also want to get the hidden "gotchas" brought to light as quickly as possible, and often the fastest way is to just start writing the code.


I don't get great results myself from diving into coding immediately. I personally get better results if I have some storyboards, workflows, ins/outs, etc. identified and worked-over first.

But when I do get down to writing code, it very much takes an evolutionary path. Typically the first thing I start writing is bloated, inefficient, or otherwise suboptimal in a variety of ways. Mostly it's to get the relationships between concepts established and start modeling the process.

Once I have something that starts looking like it'll work, then I sort of take the sculptor's approach and start taking away everything that isn't the program I want.

So yeah, a fair amount of planning, but the first draft of anything I write, really, code or otherwise, is something for me, myself, to respond to. Then I keep working it over until it wouldn't annoy me if I was someone else picking it up cold.


There's more than one way to do it.

How I work:

- First I make a very general design of modules, taking into account how they are going to communicate with each other, all based in experience with previous systems.

- I foresee problematic parts, usually integration points, and write simple programs to validate assumptions and test performance.

- I write a skeleton for the main program with a dumbed-down GUI.

From that point on, I develop each module and now, yes, there's a lot of thinking in advance.


I completely agree with you. This article is on the right track, but it completely ignores the importance of exploratory programming in guiding that thinking process.


I also find it critical to start writing immediately. Just my thoughts and research results. I also like to attempt to write code too early. I'll get blocked very quickly or realize what I'm writing won't work, and it brings the blockage to the forefront of my mind. If I don't try to write code there will be some competing theories in my mind and they won't be prioritized correctly.


> While that may be true sometimes, I think that ignores the fact that most people can't keep a whole lot of constraints and concepts in their head at the same time.

Indeed, and this limitation specifically is why I dislike VIPER: the design pattern itself was taking up too many of my attention slots, leaving less available for the actual code.

(I think I completely failed to convince anyone that this was important).


Are you talking about the modal text editor for emacs?



Correct, that.

The most I've done with emacs (and vim) is following a google search result for "how do I exit …"


surely. thanks


Germans say "Probieren geht über Studieren", which roughly means: trying beats studying, i.e. rather try it than think about it too much.


ditto. My coworkers will sit down at the whiteboard and start making ERDs, and I'll just start writing a DB migration and sketching up the models. They think I'm crazy, because I have to do rollbacks and recreate my table a few times as I think of more things, and I wind up deleting some code that I already wrote and technically worked. Who cares? Time-wise, it comes out the same in the end, and I find and solve more hidden surprises by experimenting than they do by planning what they're going to do once they're finally ready to type.

I think it's just two ways of doing the same thing.


Same, and since we are on the topic of measuring developer productivity; usually my bad-but-kinda-working prototype is not only much faster to develop, it also has more lines of code, maximizing my measurable productivity!


I'd take the phrase with a grain of salt. What's certainly true is that you can't just type your way to a solution.

Whether you plan before pen meets paper or plan while noodling is a matter of taste.


Also, not every task requires deep thought. If you are writing some CRUD, there is usually not going to be all that much thinking; it's mostly touching the keyboard.


I wish I had thought a little more about this CRUD.

I hand-wrote HTML forms, and that was not a great plan. I made a dialog-generator class in about half an hour that replaced dozens of CreatePermission.html-type garbage files I wrote a decade ago.


This is what I do: right off the bat I write down a line or two about what I need to do.

Then I break that down into smaller and smaller steps.

Then I hack it together to make it work.

Then I refactor to make it not a mess.


I used to think a lot before coding.

Then I learned TDD, and now I can discover the design while I code. It's a huge improvement for me!


I think it was Feynman who said something to the effect of “writing is thinking”.


yeah, i just sketched out some code ideas on paper over a few days, checked and rechecked them to make sure they were right, and then after i wrote the code on the computer tonight, it was full of bugs that i took hours and hours to figure out anyway. debugging output, stepping through in the debugger, randomly banging on shit to see what would happen because i was out of ideas. i would have asked coworkers but i'm fresh out of those at the moment

i am not smart enough this week to debug a raytracer on paper before typing it in, if i ever was

things like hypothesis can make a computer very powerful for checking out ideas
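
for example (assuming the python Hypothesis library), a property test can hammer an invariant you'd otherwise re-check by hand on paper -- the normalize helper below is just a made-up stand-in:

    # pip install hypothesis
    from hypothesis import given, strategies as st

    def normalize(v):
        """hypothetical helper: scale a 3-vector to unit length."""
        x, y, z = v
        n = (x * x + y * y + z * z) ** 0.5
        return (x / n, y / n, z / n)

    # generate arbitrary 3-vectors, skipping near-zero ones.
    vectors = st.tuples(*[st.floats(min_value=-1e3, max_value=1e3)] * 3) \
        .filter(lambda v: any(abs(c) > 1e-6 for c in v))

    @given(vectors)
    def test_normalize_has_unit_length(v):
        x, y, z = normalize(v)
        assert abs((x * x + y * y + z * z) - 1.0) < 1e-6

    if __name__ == "__main__":
        test_normalize_has_unit_length()  # hypothesis drives the inputs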


I've no coworkers either, and over time both I and my code suffer for it. Some say thinking is at its essence a social endeavor.


I should say rather, Thinking.


"Most people" are not "Really good developers".



