Great article. I just want to comment on this quote from the article:
"Really good developers do 90% or more of the work before they ever touch the keyboard;"
While that may be true sometimes, I think that ignores the fact that most people can't keep a whole lot of constraints and concepts in their head at the same time. So the amount of pure thinking you can do without writing anything at all is extremely limited.
My solution to this problem is to actually hit the keyboard almost immediately once I have one or more possible ways to go about a problem, without first fully developing them into a well specified design. And then, I try as many of those as I think necessary, by actually writing the code. With experience, I've found that many times, what I initially thought would be the best solution turned out to be much worse than what was initially a less promising one. Nothing makes problems more apparent than concrete, running code.
In other words, I think that rather than just thinking, you need to put your ideas to the test by actually materializing them into code. And only then you can truly appreciate all consequences your ideas have on the final code.
This is not an original idea, of course, I think it's just another way of describing the idea of software prototyping, or the idea that you should "throw away" your first iteration.
In yet different words: writing code should be actually seen as part of the "thinking process".
I had the same thought as I read that line. I think he's actually describing Linus Torvalds there, who, legend has it, thought about Git for a month or so and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.
But for the rest of us (especially myself), it seems to be more like an interplay between thinking of what to write, writing it, testing it, thinking some more, changing some minor or major parts of what we wrote, and so on, until it feels good enough.
In the end, it's a bit of an art, coming up with the final working version.
Git is a special case, I would say, because it is fairly self-contained. It had minimal dependencies on external components, mostly relying on the filesystem API. Everything else was “invented” inside of Git.
This is special because most real-world systems have a lot more dependencies. That's when experimentation is required: one cannot know all the relevant APIs and their behaviors beforehand, so the only way is to do it and find out.
Algorithms are in essence mathematical problems; they are abstract and can, in principle, be solved in your head or with pen and paper.
The reality is that most programming problems are not algorithms but connecting and translating between systems, and these systems are like black boxes that require exploration.
> Git is a special case, I would say, because it is fairly self-contained. It had minimal dependencies on external components, mostly relying on the filesystem API. Everything else was “invented” inside of Git.
This type of developer tends to prefer building software like this. There is a whole crowd of hardcore C/C++/Rust devs who eschew taking dependencies in favour of writing everything themselves (and they mostly nerd-snipe themselves with excessive NIH syndrome, like Jonathan Blow going off to write PowerPoint[1]...)
Torvalds seems to be mostly a special case in that he found a sufficiently low-level niche where extreme NIH is not a handicap.
It's really easy to remember the semantics of C. At least if you punt a bit on the parts that savage you in the name of UB. You know what libc has in it, give or take, because it's tiny.
Therefore if you walk through the woodland thinking about a program to write in C there is exactly zero interrupt to check docs to see how some dependency might behave. There is no uncertainty over what it can do, or what things will cost you writing another language to do reasonably.
Further, when you come to write it, there's friction when you think "oh, I want a trie here", but that's of a very different nature to "my python dependency segfaults sometimes".
It's probably not a path to maximum output, but from the "programming is basically a computer game" perspective it has a lot going for it.
Lua is basically the same. An underappreciated feature of a language is never, ever having to look up the docs to know how to express your idea.
I.e. closed-off systems that don't interact with external systems.
And these are the type of coders who also favor types.
Whereas if you write the 'informational' type of programs described by Rich Hickey, i.e. ones that interact with outside systems a lot, you will find a lot of dependencies, and types get in the way.
I tend to see this as a sign that a design is still too complicated. Keep simplifying, which may include splitting into components that are separately easy to keep in your head.
This is really important for maintenance later on. If it's too complicated now to keep in your head, how will you ever have a chance to maintain it 3 years down the line? Or explain it to somebody else?
I'm more than half the time figuring out the environment. Just as you learn a new language by doing the exercises, I'm learning a bunch of stuff while I try to port our iptables semantics to firewalld:
[a] GitLab CI/CD instead of Jenkins, [b] getting firewalld (which requires systemd) running in a container, [c] the Ansible firewalld module doesn't support --direct, which is required for destination filtering, and [d] inventing a test suite for firewall rules, since the prebuilt ones I've found would involve weeks of yak shaving to get operating. So I'm simultaneously learning about four environments/languages at once - and this is typical for the kind of project I get assigned. There's a *lot* of exploratory coding happening. I didn't choose this stuff - it's part of the new requirements. I try for simple first, and often the tools don't support simple.
This is the only practical way (IMHO) to do a good job, but there can be an irreducibly complex kernel to a problem which manifests itself in the interactions between components even when each atomic component is simple.
At the microlevel (where we pass actual data objects between functions), the difference in the amount of work required between designing data layout "on paper" and "in code" is often negligible and not in favor of "paper", because some important interactions can sneak out of sight.
I do data flow diagrams a lot (to understand the domain, figure out dependencies, and draw rough component and procedure boundaries) but leave the details of data formats and APIs to exploratory coding. It still makes me change the diagrams, because I've missed something.
The real-world bank processes themselves are significantly more complicated than any one person can hold in their head. Simplification is important, but only up to the point where it still covers 100% of the required functionality.
Code also functions as documentation for the actual process. In many cases, “whatever the software does” is the process itself.
If you can do that, sure. Architecting a clear design beforehand isn't always feasible though, especially when you're doing a thing for the first time or you're exploring what works and what doesn't, like in game programming, for example. And then, there are also the various levels at which designing and implementation takes place.
In the end, I find my mental picture is still the most important.
And when that fades after a while, or for code written by someone else, then I just have to go read the code. Though it may exist, so far I haven't found a way that's obviously better.
Some thing I've tried (besides commenting code) are doing diagrams (they lose sync over time) and using AI assistants to explain code (not very useful yet). I didn't feel they made the difference, but we have to keep learning in this job.
Of course it can be helpful to do some prototyping to see which parts still need design improvements and to understand the problem space better. That's part of coming up with the good design and architecture, it takes work!
I don't do it in my head. I do diagrams, then discuss them with other people until everyone is on the same page. It's amazing how convoluted get data from db, do something to it, send it back can get, especially if there is a queue or multiple consumers in play, when it's actually the simplest thing in the world, which is why people get over-confident and write super-confusing code.
Diagrams are what I tend to use as well. My background is engineering (the non-software kind), and for solving engineering problems one of the first things we are taught at uni is to sketch out the problem; I have carried that habit over to when I need to write a computer program.
I map out on paper the logical steps my code needs to follow, a bit like a flow chart tracking the changes in state.
When I write code, I'll create a skeleton with the placeholder functions I think I'll need as stubs and fill them out as I go. I'm not wedded to the design; sometimes I'll remove or replace whole sections as I get further in, but it helps me think about it if I have the whole skeleton "on the page".
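A minimal sketch of that skeleton-first approach (the function names here are invented for illustration, not taken from the comment above):

    # Skeleton first: stub out the steps from the flow chart, fill them in later.
    def load_input(path: str) -> list[dict]:
        """Read the raw records. Real parsing comes later."""
        raise NotImplementedError

    def transform(records: list[dict]) -> list[dict]:
        """Apply the state changes sketched on paper."""
        raise NotImplementedError

    def write_report(records: list[dict], out_path: str) -> None:
        """Emit the final output."""
        raise NotImplementedError

    def main() -> None:
        records = load_input("input.csv")
        write_report(transform(records), "report.csv")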
I'm going to take a stab here: you've never used cvs or svn. git, for all its warts, is quite literally a 10x improvement on those, which is what it was (mostly) competing with.
I started my career with Clearcase (ick) and added CVS for personal projects shortly after. CVS always kind of sucked, even compared with Clearcase. Subversion was a massive improvement, and I was pretty happy with it for a long time. I resisted moving from Subversion to Git for a while but eventually caved like nearly everyone else. After learning it sufficiently, I now enjoy Git, and I think the model it uses is better in nearly every way than Subversion.
But the point of the parent of your post is correct, in my opinion. The Git interface sucks. Subversion's was much more consistent, and therefore better. Imagine how much better Git could be if it had had a little more thought and consistency put into the interface.
I thought it was pretty universally agreed that the Git interface sucks. I'm surprised to see someone arguing otherwise.
Subversion was a major improvement over CVS, in that it actually had sane branching and atomic commits. (In CVS, if you commit multiple files, they're not actually committed in a single action - they're individual file-level transactions that are generally grouped together based on commit message and similar (but not identical!) timestamps.) Some weirdness like using paths for branching, but that's not a big deal.
I actually migrated my company from CVS to SVN in part so we could do branchy development effectively, and also so I personally could use git-svn to interact with the repo. We eventually ended up moving to Mercurial since Git didn't have a good Windows story at the time. Mercurial and Git are pretty much equivalent in my experience; they just decided to give things confusing names. (git fetch / pull and hg fetch / pull have their meanings swapped.)
At least one person (me) thinks that: git interface is good enough as is (function>form here), regexps are not too terse - that’s the whole point of them.
It's really hard to overstate how much of a sea change git was.
It's very rare that a new piece of software just completely supplants existing solutions as widely and quickly as git did in the version control space.
And all that because the company owning the commercial version control system they had been using free of charge until that point got greedy, and wanted them to start paying for its use.
Their greed literally killed their own business model, and brought us a better versioning system. Bless their greedy heart.
What do you mean by API? Linus's original Git didn't have an API, just a bunch of low-level C commands ('plumbing'). The CLI ('porcelain') was originally just wrappers around the plumbing.
Yeah, could be.. IIRC, he said he doesn't find version control and databases interesting. So he just did what had to be done, did it quickly and then delegated, so he could get back to more satisfying work.
> I think he's actually describing Linus Torvalds there, who, legend has it, thought about Git for a month or so and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.
That sounds a bit weird. As I remember, Linux developers were using a semi-closed system called BitKeeper for many years. For some reason the open source systems at that time weren't sufficient. The problems with BitKeeper were constantly discussed, so it might be that Linus had been thinking about the problems for years before he wrote Git.
Well, if you want to take what I said literally, it seems I need to explain..
My point is, he thought about it for some time before he was free to start the work, then he laid down the basics in less than a week, so he was able to start using Git to build Git, polished it for a while and then turned it over.
Here's an interview with the man himself telling the story 10 years later, a very interesting read:
>> …and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.
How very biblical. “And Torvalds saw everything that he had made, and behold, it was very good. And there was evening, and there was morning—the sixth day.”
I do the same, iterate. When I am happy with the code I imagine I've probably rewritten it roughly three times.
Now I could have spent that time "whiteboarding" and it's possible I would have come close to the same solution. But whiteboarding in my mind is still guessing, anticipating - coding is of course real.
I think that as you gain experience as a programmer you are able to intuit the right way to begin to code a problem, the iterating is still there but more incremental.
I think once you are an experienced programmer, beyond being able to break down the target state into chunks of task, you are able to intuit pitfalls/blockers within those chunks better than less experienced programmers.
An experienced programmer is also more cognizant of the importance of architectural decisions, hitting the balance between keeping things simple vs abstractions and the balance between making things flexible vs YAGNI.
Once those important bits are taken care of, rest of it is more or less personal style.
Yeah, while I understand rewrite-based iterations, and have certainly done them before, they've gotten less and less common over time because I'm thinking about projects at higher levels than I used to. The final design is more and more often what I already have in my head before I start.
I never hold all the code designed in my head at once, but it's more like multiple linked thoughts. One idea for the overall structure composed of multiple smaller pieces, then the smaller pieces each have their own design that I can individually hold in my head. Often recursively down, depending on how big the given project is and how much it naturally breaks down. There's certainly unknowns or bugs as I go, but it's usually more like handling an edge case than anything wrong with the design that ends in a rewrite.
Nasrudin (or Nasreddin)[0] is an apocryphal Sufi priest, who is sort of a "collection bin" for wise and witty sayings. Great stories. Lots of humor, and lots of wisdom.
One of my "go-tos" from him, is the Smoke Seller[1]. I think that story applies to the Tech Scene.
I first heard the GC quote as attributed to Will Rogers, then, to Rita Mae Brown.
Yeah, the same. I rewrite code until I'm happy with it. When starting a new program, that can mean a lot of time spent, because I might need weeks of rewriting and re-tossing everything until I feel I've got it good enough. I've tried to do it faster, but I just can't. The only way is to write working code and reflect on it.
My only optimization of this process is to use Java and not throw everything out, but keep refactoring. IDEA allows for very quick and safe refactoring cycles, so I can iterate on the overall architecture or any selected component.
I really envy people who can get it right the first time. I just can't, despite having 20 years of programming under my belt. And when time is tight and I need to accept an obviously bad design, that's what burns me out.
Good design evolves from knowing the problem space.
Until you've explored it you don't know it.
I've seen some really good systems that have been built in one shot. They were all ground up rewrites of other very well known but fatally flawed systems.
And even then, within them, much of the architecture had to be reworked or also had some other trade off that had to be made.
The secret to designing entire applications in your head is to be intimately familiar with the underlying platform and the gotchas of the technology you're using. And the only way to learn those is to spend a lot of time in hands-on coding and active study. It also implies that you're using the same technology stack over and over and over again instead of pushing yourself into new areas. There's nothing wrong with this; I actually prefer sticking to the same tech stack, so I can focus on the problem itself; but I would note that the kind of 'great developer' in view here is probably fairly one-dimensional with respect to the tools they use.
I think this is probably a lot of the value of YAGNI.
The more crap you add the harder it is to fix bad architecture.
And the crap is often stuff that would be trivial to add if the bad architecture weren't there, so if you fix that you can add the feature when you need it in a week.
I think that's probably part of it; but on a really simple level with YAGNI you're not expending effort on something that isn't needed which reduces cost.
What I try to do is think about the classes of functionality that might be needed in the future. How could I build X feature in a years time?
Right, I always thought this is what TDD is for: very often I design my code in tests and let that guide my implementation.
I kind of imagine what the end result should be in my head (given values A and B, these rows should be X and Y), then write the tests against what I _think_ would be a good API for my system and go from there.
The end result is that my code is testable by default, and I get to go through multiple cycles of Red -> Green -> Refactor until I end up with something I'm happy with.
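For what it's worth, a tiny sketch of that "write the test in the API you wish you had" style; Ledger and its methods are hypothetical names, just to show the Red -> Green step:

    # Written first: given entries A and B, the balance should be X.
    def test_balance_reflects_entries():
        ledger = Ledger()
        ledger.add_entry(10)
        ledger.add_entry(-3)
        assert ledger.balance() == 7

    # Simplest implementation that makes the test pass; refactoring comes after.
    class Ledger:
        def __init__(self):
            self._entries = []

        def add_entry(self, amount):
            self._entries.append(amount)

        def balance(self):
            return sum(self._entries)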
Sometimes, when I feel like I know the basic domain and can think of something reasonable that I could use as a "north star" test end-point, I work like this. Think anything that could be a unit test, or even a simple functional test. But when I don't know the domain, or if it's some complex system that I have to come up from scratch, writing tests first often makes no sense at all. Then I'd usually start literally drawing on paper, or typing descriptions of what it might do, then just start coding, and the tests come much, much later.
Right tool for the job, as always! The only thing I can't stand is blind zealotry to one way or the other. I've had to work with some real TDD zealots in the past, and long story short, I won't work with people like that again.
TDD comes up with some really novel designs sometimes.
Like, I expect it should look one way but after I'm done with a few TDD cycles I'm at a state that's either hard to get there or unnecessary.
I think this is why some people don't like TDD much, sometimes you have to let go of your ideas, or if you're stuck to them, you need to go back much earlier and try again.
I kind of like this though, makes it kind of like you're following a choose your own adventure book.
I prefer to write an initial implementation, and then in the testing process figure out which interfaces simplify my tests, and then I refactor the implementation to use those interfaces. Generally, this avoids unnecessary abstraction, as the interfaces for testing tend to be the same ones you might need for extensibility.
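A rough sketch of what that can look like; Clock/FixedClock are invented names, and the point is only that the interface falls out of whatever made the tests simple:

    from datetime import datetime, timedelta

    class Clock:
        def now(self) -> datetime:
            return datetime.now()

    class FixedClock(Clock):
        """Test double that made the tests simple, so the interface stayed."""
        def __init__(self, fixed: datetime):
            self.fixed = fixed
        def now(self) -> datetime:
            return self.fixed

    def is_expired(created_at: datetime, clock: Clock, ttl_hours: int = 24) -> bool:
        return clock.now() - created_at > timedelta(hours=ttl_hours)

    def test_is_expired():
        created = datetime(2024, 1, 1, 0, 0)
        assert is_expired(created, FixedClock(datetime(2024, 1, 2, 1, 0)))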
I'm not sure why folks think they need to hold all this in their head. For me, at least, the "think about the problem stage" involves a lot of:
* Scribbling out some basic design notes.
* Figuring out the bits I'm not completely sure about
* Maybe coding up some 'prototype' code to prove out the bits I'm less sure about
* Repeat until I think I know what I'm doing/validated my assumptions
* Put together a 'more formal' design to share with the team. Sometimes my coworkers think of things that I hadn't, so it's not quite a formality. :)
* Code the thing up.
By the time I get to the "code it up" stage, I've probably put in most of the work. I've written things down as design, but nothing hyperdetailed. Just what I need to remind myself (and my team) of the decisions I've made, what the approach is, the rejected ideas, and what needs doing. Some projects need a fair bit on paper, some don't.
My pet theory is that developers exist on a spectrum between "planner" and "prototyper" - one extreme spends a lot of time thinking about a solution before putting it into code - hopefully hitting the goal on first attempt. The other iterates towards it. Both are good to have on the team.
I could not agree more, it's rare to write a program where you know all the dependencies, libraries you will use and the overall effect to other parts of the program by heart. So, gradual design process is best.
I would point out, though, that that part also touched understanding requirements, which is many times a very difficult process. We might have a technical requirement conjured, by someone less knowledgeable about the inner workings, from a customer requirement and the resolution of the technical requirement may not even closely address the end-users' use-case. So, a lot of time also goes into understanding what it is that the end-users actually need.
But this also makes it difficult to give accurate estimates, because you sometimes need to prototype 2, 3, or even more designs to work out the best option.
> writing code should be actually seen as part of the "thinking process".
Unfortunately, most of the time leadership doesn't see things this way. For them, the tough work of thinking ends with the architecture, or maybe one layer further down. The engineers are then responsible only for translating those designs into software by typing away at a keyboard.
This leads to a mismatch in delivery expectations between leadership and developers.
If you know so little that you have to make 3 prototypes to understand your problem, do you think designing it by any other process will make it possible to give an accurate estimate?
(not so much of a reply, but more of my thoughts on the discussion in the replies)
I would say the topic is two-sided.
The first is when we do greenfield development (maybe of some new part of an already existing piece of software): the domain is not really well known, and the direction of future development even less so. So there is not much to think about: document what is known, make a rough layout of the system, and go coding. Investing too much in design at the early stages may result in something (a) overcomplicated, (b) missing very important parts of the domain and thus irrelevant to the problem, or (c) having nothing to do with how the software will evolve.
The second (and it is that side I think the post is about) is when we change some already working part. This time it pays hugely to ponder on how to best accommodate the change (and other information the request to make this change brings to our understanding of the domain) into the software before jumping to code. This way I've managed to reduce what was initially thought to take days (if not weeks) of coding to writing just a couple of lines or even renaming an input field in our UI. No amount of exploratory coding of the initial solution would result in such tremendous savings in development time and software complexity.
I agree with the spirit of "writing code" as part of the thinking process but I have to point out a few very dangerous pitfalls there.
First is the urge to write the whole prototype yourself from scratch. It's not necessary and better avoided: just hack some things together, or pick something close to what you want off GitHub. The idea is to have something working. I am a big proponent of implementing my ideas in a spreadsheet first, then moving to some code.
Second, modern software solutions are complex (think Kubernetes, cloud provider quirks, authentication, SQL/NoSQL, external APIs), and it's easy to get lost in the minutiae; they shroud the original idea, and it takes strenuous effort to think clearly through the layers. To counter this, I keep a single project in my language of choice with just the core business logic. No dependencies; everything else is stubbed or mocked (a rough sketch of this setup follows this comment). It runs in the IDE, on my laptop, offline, with tests. This extra effort has paid off well in keeping the focus on core priorities and spotting when bullshit tries to creep in. You could also use diagrams or whatever, but working executable code is awesome to have.
Third is to document my findings. Often I tend to tinker with the prototype way beyond the point of any meaningful threshold; it's 2am before I know it, and I kind of lose the lessons when I start the next day. Keeping a log in parallel with building the prototype helps me stay focused, be clear about my goals, and avoid repeating the same mistakes.
Fourth is the rather controversial topic of estimation. When I have a running prototype, I tend to get excited and too optimistic with my estimates. Rookie mistake. Always pad your estimates an order of magnitude higher; you still need to go through a lot of BS to get it into production. Remember that you will be working with a team, mostly idiots. Linus works alone.
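As mentioned in point two, here is a rough sketch of a "core logic offline, everything else stubbed" setup; FakePaymentGateway/approve_order are hypothetical names, not from the comment above:

    class FakePaymentGateway:
        """Stands in for the real external API so the core logic runs offline."""
        def charge(self, customer_id: str, amount: float) -> bool:
            return amount <= 100.0  # pretend anything over 100 is declined

    def approve_order(order: dict, gateway) -> str:
        # Core business rule, free of cloud/auth/database concerns.
        if not order["items"]:
            return "rejected: empty order"
        if not gateway.charge(order["customer_id"], order["total"]):
            return "rejected: payment declined"
        return "approved"

    def test_approve_order_offline():
        order = {"customer_id": "c1", "items": ["widget"], "total": 42.0}
        assert approve_order(order, FakePaymentGateway()) == "approved"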
That quote is already a quote in the article. The article author himself writes:
> What is really happening?
> • Programmers were typing on and off all day. Those 30 minutes are to recreate the net result of all the work they wrote, un-wrote, edited, and reworked through the day. It is not all the effort they put in, it is only the residue of the effort.
The way I read that is that only 10% of the work is writing out the actual implementation that you're sticking with. How you get there isn't as important, i.e. someone might want to take notes on paper and draw graphs while others might want to type things out. That's all still planning.
Agree with your point. I think “developers that do 90% of their thinking before they touch the keyboard are really good” is the actual correct inference.
Plenty of good developers use the code as a notepad / scratch space to shape ideas, and that can also get the job done.
I'd like to second that, especially if combined with a process where a lot of code should get discarded before making it into the repository. Undoing and reconsidering initial ideas is crucial to any creative flow I've had.
Mathematics was invented by the human mind to minimize waste and maximize work productivity. By allowing reality mapping abstractions to take precedence over empirical falsifications of propositions.
And what most people can't do, such as keeping in their heads absolutely all the concepts of a theoretical computer software application, is an indication that real programmers exist on a higher elevation where information technology is literally second nature to them. To put it bluntly and succinctly.
For computer software development to be part of thinking, a more intimate fusion between man and machine needs to happen. Instead of the position that a programmer is a separate and autonomous entity from his fungible software.
The best programmers simulate machines in their heads, basically.
The point is that there are no programmers that can simulate machines in their heads. These elite engineers only exist in theory. Because if they did exist, they would appear so alien and freakish to you that they would never be able to communicate their form of software development paradigms and patterns. These rare type of programmers only exist in the future, assuming we're on a timeline toward such a singularity. And we're not, except for some exceptions that cultivate a colony away from what is commonly called the tech industry.
EDIT: Unless you're saying that SOLID, VIPER, TDD, etc. are already alien invaders from a perfect world and only good and skilled humans can adhere to the rules with uncanny accuracy?
It's not even that, it's also the iceberg effect of all our personal knowledge-bases; I'd rather experiment on my machine figuring out how I want to do something rather than read endless documentation.
Documentation is a good starter/unblocker but once I've got the basics down then I'll run wild in a REPL or something figuring out exactly how I want to do something.
Planning by pure thinking is a good way to end up with all sorts of things that weren't factored in, imo. We should always encourage play during the planning process.
I don't think the quote suggests that a programmer would mentally design a whole system before writing any code. As programmers, we are used to thinking in problems as steps needing resolution and that's exactly the 90% there.
When you're quickly prototyping to see what fits better as a solution to the problem you're facing, you must have already thought what are the requirements, what are the constraints, what would be a reasonable API given your use case. Poking around until you find a reasonable path forward means you have already defined which way is forwards.
> writing code should be actually seen as part of the "thinking process"
I agree. This is how I work as well. I start coding as quickly as possible, and I fully plan on throwing away my first draft and starting again. I do a fair bit of thinking beforehand, of course, but I also want to get the hidden "gotchas" brought to light as quickly as possible, and often the fastest way is to just start writing the code.
I don't get great results myself from diving into coding immediately. I personally get better results if I have some storyboards, workflows, ins/outs, etc. identified and worked-over first.
But when I do get down to writing code, it very much takes an evolutionary path. Typically the first thing I start writing is bloated, inefficient, or otherwise suboptimal in a variety of ways. Mostly it's to get the relationships between concepts established and start modeling the process.
Once I have something that starts looking like it'll work, then I sort of take the sculptor's approach and start taking away everything that isn't the program I want.
So yeah, a fair amount of planning, but the first draft of anything I write, really, code or otherwise, is something for me, myself, to respond to. Then I keep working it over until it wouldn't annoy me if I was someone else picking it up cold.
- First I make a very general design of modules, taking into account how they are going to communicate with each other, all based in experience with previous systems.
- I foresee problematic parts, usually integration points, and write simple programs to validate assumptions and test performance.
- I write a skeleton for the main program with a dumbed-down GUI.
From that point on, I develop each module and now, yes, there's a lot of thinking in advance.
I completely agree with you. This article is on the right track, but it completely ignores the importance of exploratory programming in guiding that thinking process.
I also find it critical to start writing immediately. Just my thoughts and research results. I also like to attempt to write code too early. I'll get blocked very quickly or realize what I'm writing won't work, and it brings the blockage to the forefront of my mind. If I don't try to write code there will be some competing theories in my mind and they won't be prioritized correctly.
> While that may be true sometimes, I think that ignores the fact that most people can't keep a whole lot of constraints and concepts in their head at the same time.
Indeed, and this limitation specifically is why I dislike VIPER: the design pattern itself was taking up too many of my attention slots, leaving less available for the actual code.
(I think I completely failed to convince anyone that this was important).
ditto. My coworkers will sit down at the whiteboard and start making ERDs, and I'll just start writing a DB migration and sketching up the models. They think I'm crazy, because I have to do rollbacks and recreate my table a few times as I think of more things, and I wind up deleting some code that I already wrote and technically worked. Who cares? Time-wise, it comes out the same in the end, and I find and solve more hidden surprises by experimenting than they do by planning what they're going to do once they're finally ready to type.
I think it's just two ways of doing the same thing.
Same, and since we are on the topic of measuring developer productivity; usually my bad-but-kinda-working prototype is not only much faster to develop, it also has more lines of code, maximizing my measurable productivity!
Also, not every task requires deep thought. If you are writing some CRUD, it usually isn't going to involve all that much thinking, but rather more touching the keyboard.
I wish I had thought a little more about this CRUD.
I hand-wrote HTML forms and that was not a great plan. I made a dialog generator class in about half an hour that replaced dozens of CreatePermission.html-type garbage pages I wrote a decade ago.
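Such a half-hour generator might look roughly like this (a speculative sketch, not the actual class; field names and routes are made up):

    from html import escape

    class FormGenerator:
        """Builds a simple HTML form instead of hand-writing each page."""
        def __init__(self, action: str):
            self.action = action
            self.fields = []  # (name, label, input_type)

        def add_field(self, name, label, input_type="text"):
            self.fields.append((name, label, input_type))
            return self  # allow chaining

        def render(self) -> str:
            rows = "".join(
                f'<label>{escape(label)} '
                f'<input type="{input_type}" name="{escape(name)}"></label>'
                for name, label, input_type in self.fields
            )
            return (f'<form method="post" action="{escape(self.action)}">'
                    f'{rows}<button type="submit">Save</button></form>')

    # Replaces a hand-written CreatePermission.html-style page:
    html = (FormGenerator("/permissions/create")
            .add_field("name", "Permission name")
            .add_field("granted", "Granted", "checkbox")
            .render())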
Yeah, I just sketched out some code ideas on paper over a few days, checked and rechecked them to make sure they were right, and then after I wrote the code on the computer tonight, it was full of bugs that took me hours and hours to figure out anyway. Debugging output, stepping through in the debugger, randomly banging on shit to see what would happen because I was out of ideas. I would have asked coworkers, but I'm fresh out of those at the moment.
I am not smart enough this week to debug a raytracer on paper before typing it in, if I ever was.
Things like Hypothesis can make a computer very powerful for checking out ideas.
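For anyone who hasn't seen it, the kind of check the Hypothesis library makes cheap looks roughly like this (the reversing property is just a stand-in example, not from the comment above):

    from hypothesis import given, strategies as st

    @given(st.lists(st.integers()))
    def test_reversing_twice_is_identity(xs):
        # Hypothesis generates many input lists, including nasty edge cases.
        assert list(reversed(list(reversed(xs)))) == xs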
Great article!
I've posted it in other comments before, but it's worth repeating:
The best explanation I've seen is in the book "The Secret Life of Programs" by Jonathan E. Steinhart. I'll quote that paragraph verbatim:
---
Computer programming is a two-step process:
1. Understand the universe.
2. Explain it to a three-year-old.
What does this mean? Well, you can't write computer programs to do things that you yourself don't understand. For example, you can't write a spellchecker if you don't know the rules for spelling, and you can't write a good action video game if you don't know physics. So, the first step in becoming a good computer programmer is to learn as much as you can about everything else. Solutions to problems often come from unexpected places, so don't ignore something just because it doesn't seem immediately relevant.
The second step of the process requires explaining what you know to a machine that has a very rigid view of the world, like young children do. This rigidity in children is really obvious when they're about three years old. Let's say you're trying to get out the door. You ask your child, "Where are your shoes?" The response: "There." She did answer your question. The problem is, she doesn't understand that you're really asking her to put her shoes on so that you both can go somewhere. Flexibility and the ability to make inferences are skills that children learn as they grow up. But computers are like Peter Pan: they never grow up.
Not really a book, and not sure how similar it would be, but you might enjoy this work from 2001 - "Programmers' stone". Introduction into "mapping" vs "packing" definitely influenced my own understanding of programming vs thinking back then.
Edit: After writing this long nitpicky comment, I have thought of a much shorter and simpler point I want to make: programming is mostly thinking, and there are many ways to do the work of thinking. Different people and problems call for different ways of thinking, and learning to think/program in different ways gives you more tools to choose from. Thus I don't like arguments that there is one right way that programming happens or should happen.
I'm sorry, but your entire comment reads like a list of platitudes about programming that don't actually match reality.
> Well, you can't write computer programs to do things that you yourself don't understand.
Not true. There are many times where writing software to do a thing is how I come to understand how that thing actually works.
Additionally, while an understanding of physics helps with modeling physics, much of that physics modeling is done to implement video games and absolute fidelity to reality is not the goal. There is often an exploration of the model space to find the right balance of fidelity, user experience and challenge.
Software writing is absolutely mostly thinking, but that doesn't mean all or even most of the thinking should always come first. Computer programming can be an exploratory cognitive tool.
> So, the first step in becoming a good computer programmer is to learn as much as you can about everything else. Solutions to problems often come from unexpected places, so don't ignore something just because it doesn't seem immediately relevant.
I'm all for generalists and autodidacts, but becoming one isn't a necessary first step to being a good programmer.
> The second step of the process requires explaining what you know to a machine that has a very rigid view of the world, like young children do.
Umm... children have "rigid" world views? Do you know any children?
> Let's say you're trying to get out the door. You ask your child, "Where are your shoes?" The response: "There." She did answer your question.
Oh, you don't mean rigid, you mean they can't always infer social subtexts.
> Flexibility and the ability to make inferences are skills that children learn as they grow up. But computers are like Peter Pan: they never grow up.
Computers make inferences all the time. Deriving logical conclusions from known facts is absolutely something computers can be programmed to do, and it is arguably one of their main use cases.
I have spent time explaining things to children of various ages, including 3-year-olds, and find the experience absolutely nothing like programming a computer.
Are you replying to me or to the author of the quote? :)
> Software writing is absolutely mostly thinking, but that doesn't mean all or even most of the thinking should al always come first. Computer programming can be an exploratory cognitive tool.
Absolutely; explaining something to a child can also be an exploratory cognitive tool.
I would say very young children up until they acquire concepts like a theory of mind, cause and effect happening outside of their field of observation, and so on, are pretty rigid in many ways like computers. It's a valuable insight.
Or at least they don't make mistakes in exceptionally novel and unusual ways until they're a bit older.
> I would say very young children up until they acquire concepts like a theory of mind, cause and effect happening outside of their field of observation, and so on, are pretty rigid in many ways like computers. It's a valuable insight.
I don't see any overlap between the two skill sets; since you do, I'd be curious for examples of where you see it.
I'm confident enough to tout this number as effectively true, though I should mention that no company I work with has so far been willing to delete a whole day's work to prove or disprove this experiment.
Long ago when I was much more tolerant, I had a boss that would review all code changes every night and delete anything he didn't like. This same boss also believed that version control was overcomplicated and decided the company should standardize on remote access to a network drive at his house.
The effect of this was that I'd occasionally come in the next morning to find that my previous day's work had been deleted. Before I eventually installed an illicit copy of SVN, I got very good at recreating the previous day's work. Rarely took more than an hour, including testing all the edge cases.
I don't have a big sample size, but both of my first two embedded jobs used network shares and copy-and-paste to version their code. Because I had kind-of PTSD from the first job, I asked the boss at the second job right away whether they had a Git repository somewhere. He thought that Git is the same as GitHub and told me they don't want their code to be public.
When they were bought out by some bigger company, we got access to their intranet. I dug through it and found a GitLab instance. So I just versioned my own code (which I was working on mostly on my own) there, documented all of it, even installed a GitLab runner, and wrote step-by-step documentation on how to get my code working. When they kicked me out (because I was kind of an asshole, I assume), they asked me to hand over my code. I showed them all of what I did and told them how to reproduce it. After that the boss was kind of impressed and thanked me for my work. Maybe I had a little positive impact on a shitty job by being an asshole and doing things the way I thought was right.
Edit: Oh, before I found that GitLab instance I just initialized bare Git repositories on their network share and pushed everything to those.
Well, I was severely depressed and was on sick leave for quite some time, but when I was there I just did my job as best as I can. I am not an inherent asshole. I just get triggered hard when some things don't work out (no initial training, barely any documentation, people being arrogant). I just want to be better than this myself.
There are many reasons. First, a manager is not a peer but brings a sense of authority into the mix, so the discussions will not be honest. A manager's inputs have a sense of finality, and people will hesitate to comment on or override them even when they are questionable.
There are human elements too. Even if someone has honest inputs, any (monetary or otherwise) rewards or lack of them will be attributed to those inputs (or lack of them).
Overall, it just encourages bad behaviours among the team and invites trouble.
These should not happen in an ideal world but as we are dealing with people things will be far from ideal.
Anyone who has made serious use of Microsoft Office products in the '00s and '10s knows these things to be true (or they reflexively click save every 5-10 minutes).
Probably a bit of both, but hindsight helped. It doesn't usually end up exactly the same though. Regardless, whatever I wrote worked well enough that it outlived the company. A former client running it reached out to have it modified last year.
This is a good article to send to non-programmers. Just as programmers need domain knowledge, those who are trying to get something out of programmers need to understand a bit about it.
I think I recognise that the tiny diffs I might commit can be the ones that take hours to create because of the debugging, design, or learning involved. It's all too easy to be unimpressed by the quantity of output, and having something explained to you is quite different from bashing your head against a brick wall for hours trying to work it out yourself.
This. The smallest pieces of code I’ve put out were usually by far the most time consuming, most impactful and most satisfying after you “get it”. One line commits that improve performance by 100x but took days to find, alongside having to explain during syncs why a ticket is not moving.
This is why domain knowledge is key. I work in finance, I've sat on trading desks looking at various exchanges, writing code to implement this or that strategy.
You can't think about what the computer should do if you don't know what the business should do.
From this perspective, it might make sense to train coders a bit like how we train translators. For example, I have a friend who is a translator. She speaks a bunch of languages, it's very impressive. She knows the grammar, idioms, and so on of a wide number of languages, and can pick up new ones like how you or I can pick up a new coding language.
But she also spent a significant amount of time learning about the pharmaceutical industry. Stuff about how that business works, what kinds of things they do, different things that interface with translation. So now she works translating medical documents.
Lawyers and accountants are another profession where you have a language gap. What I mean is, when you become a professional, you learn the language of your profession, and you learn how to talk in terms of the law, or accounting, or software. What I've always found is that the good professionals are the ones who can give you answers not in terms of their professional language, but in terms of business.
Particularly with lawyers, the ones who are less good will tell you every possible outcome, in legalese, leaving you to make a decision about which button to press. The good lawyers will say "yes, there's a bunch of minor things that could happen, but in practice every client in your positions does X, because they all have this business goal".
---
As for his thought experiment, I recall a case from my first trading job. We had a trader who'd created a VBA module in Excel. It did some process for looking through stocks for targets to trade. No version control, just saved file on disk.
Our new recruit lands on the desk, and one day within a couple of weeks, he somehow deletes the whole VBA module and saves it. All gone, no backup, and IT can't do anything either.
Our trader colleague goes red. He calms down, but what can you do? You should have backups, and what are you doing with VBA anyway?
He sits down and types out the whole thing, as if he were a terminal screen from the 80s printing each character after the next.
Very true. There’s a huge difference developing in a well known vs. new domain. My mantra is that you have to first be experienced in a domain to be able to craft a good solution.
Right now I am pouring most of my time in a fairly new domain, just to get an experience. I sit next to the domain experts (my decision) to quickly accumulate the needed knowledge.
> This is why domain knowledge is key.
> Lawyers and accountants are another profession where you have a language gap.
I fully agree with you. However, my experience as a software engineer with a CPA is that, generally speaking, companies do not care too greatly about that domain knowledge. They’d rather have a software engineer with 15 years working in accounting-related software than someone with my background or similar and then stick them into a room to chat with an accountant for 30 minutes.
In the comment thread, I keep seeing prescriptions over and over for the one way that programming should work.
Computer programming is an incredibly broad discipline that covers a wide range of types of work. I think it is incredibly hard to make generalizations that actually apply to the whole breadth of what computer programming encompasses.
Rather than trying to learn or teach one perfect methodology that applies across every subfield of programming, I think one should aim to build a tool bag of approaches and methodologies, along with an understanding of where each tends to work well.
Yeah but in my country all companies have a non-compete clause which makes it completely useless for me to learn any domain-specific knowledge because I won't be able to transfer it to my next job if current employer fires me. Therefore I focus on general programming skills because these are transferable across industries.
The transferable skill is learning and getting on top of the business, then translating that to code. Of course you can't transfer the actual business rules; every business is different. You just get better and better at asking the right questions. Or you just stick with a company for a long time. There are many businesses that can't be picked up in a few weeks. Maybe a few years.
In some countries (Austria), the company that you have a non-compete clause with should pay you a salary if you can’t reasonably be employed due to it. So it is not enforced most of the time.
This is laid out pretty early on by Bjarne in his PPP book[0]:
> We do not assume that you — our reader — want to become a professional programmer and spend the rest of your working life writing code. Even the best programmers — especially the best programmers — spend most of their time not writing code. Understanding problems takes serious time and often requires significant intellectual effort. That intellectual challenge is what many programmers refer to when they say that programming is interesting.
Picked up the new edition[1] as it was on the front page recently[2].
I think this is mostly right, but my biggest problem is that it feels like we spend time arguing the same things over and over. Which DB to use, which language is best, nulls or not in code and in DB, API formatting, log formatting, etc.
These aren't particularly interesting, and sure, it's good to revisit them time and again, but these are the types of time sinks I've found myself in at the last 3 companies I've worked for, and they feel like they should be mostly solved.
In fact, a company with a strong mindset, even if questionable, is probably way more productive. If it was set in stone we use Perl, MongoDB, CGI... I'd probably ultimately be more productive than I've been lately despite the stack.
> “If it was set in stone we use Perl, MongoDB, CGI... I'd probably ultimately be more productive than I've been lately despite the stack.”
Facebook decided to stick with PHP and MySQL from their early days rather than rewrite, and they’re still today on a stack derived from the original one.
It was the right decision IMO. They prioritized product velocity and trusted that issues with the stack could be resolved with money when the time comes.
And that’s what they’ve done by any metric. While nominally a PHP family language, Meta’s Hack and its associated homegrown ecosystem provides one of the best developer experiences on the planet, and has scaled up to three billion active users.
I disagree! These decisions are fundamental in the engineering process.
Should I use steel, concrete or wood to build this bridge?
The mindless coding part starts one year later, when you find that your MongoDB does not do joins and you start implementing them as an extra layer on the client side.
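That extra layer usually ends up being an application-side join, something along these lines (plain dicts here stand in for the two collections; names are illustrative only):

    orders = [
        {"_id": 1, "customer_id": "c1", "total": 40},
        {"_id": 2, "customer_id": "c2", "total": 15},
    ]
    customers = [
        {"_id": "c1", "name": "Ada"},
        {"_id": "c2", "name": "Grace"},
    ]

    # The join the database isn't doing for us: build a lookup table, then merge.
    customers_by_id = {c["_id"]: c for c in customers}
    joined = [
        {**o, "customer_name": customers_by_id[o["customer_id"]]["name"]}
        for o in orders
    ]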
What you're referring to is politics. Different people have different preferences, often because they're more familiar with one of them, or for other possibly good reasons. Somehow you have to decide who wins.
The hardest part is finding out what _not_ to code, either before (design) or after (learn from prototype or the previous iteration) having written some.
“Programming is mostly thinking” is one of these things we tell ourselves like it is some deep truth but it’s the most unproductive of observations.
Programming is thinking in the same exact way all knowledge work is thinking:
- Design in all it’s forms is mostly thinking
- Accounting is mostly thinking
- Management in general is mostly thinking
The meaningful difference is not the thinking, it’s what are you thinking about.
Your manager needs to “debug” people-problems, so they need lots of time with people (i.e. meetings).
You are debugging computer problems, so you need lots of time with your computer.
There’s an obvious tension there and none of the extremes work, you (and your manager) need to find a way to balance both of your workloads to minimize stepping on each others toes, just like with any other coworker.
The article isn't for programmers, it's for non-programmers (like management) who think it is mostly typing, and describing what's going on when we're not typing.
It's not nearly as unproductive as my old PhD college professor, who went on and on about the amount of time you lose per day moving your hand off your keyboard, when you could be memorizing shortcuts and macros instead of working.
An important difference is that in programming, it is often better to do the same thing with less code (result).
I don't mean producing cryptic code-golf-style code, but the aspect that all the stuff you produce you have to maintain. This is certainly different from a novel author who doesn't care so much about maintenance and is probably more concerned about the emotions that his text is producing.
If you ask a manager to hold an hour's meeting spread across 6 hours in 10 min slots you will get the funniest looks.
Yet developers are expected to complete a few hours of coding task in between an endless barrage of meetings, quick and short pings & syncups over slack/zoom.
For the few times I've had to work on the weekends at home, I've observed that the difference in the quality of work done over a (distraction free) weekend is much better than that of a hectic weekday.
> If you ask a manager to hold an hour's meeting spread across 6 hours in 10 min slots you will get the funniest looks.
This is a great analogy I haven’t heard it before. They think it’s like that quick work where you check your calendar and throw in your two cents on an email chain. It’s not. Much more like holding a meeting.
The horrible trap of this is being able to get so little work done during the day that you end up sacrificing some, and possibly all, of your otherwise free time compensating for some company's idiotic structure, and this is a catastrophe.
This is why I work at night 80% of the time. It's absolutely not for everyone, it's not for every case, and the other 20% is coordination with daytime people, but the amount of productivity that comes from good uninterrupted hours long sessions is simply unmatched. Once again, not for everyone, probably not for most.
This, and high demand for my time, is why I am roughly an order of magnitude more productive when working from home. Nobody bothers me there, and if they do, I can decide myself when to react.
If you want to tackle particularly hard problems and you get an interruption every 10 to 20 minutes, you can just shelve the whole thing, because chances are you will only produce bullshit code that causes headaches down the line.
I once led a project to develop a tool that tracks how people use their time in a large corporation. We designed it to be privacy-respecting, so it would log that you are using the Web browser, but not the specific URL, which is of course relevant (e.g. Intranet versus fb.com). Every now and then, a pop up would ask the user to self-rate how productive they feel, with a free-text field to comment. Again, not assigned to user IDs in order to respect privacy, or people would start lying to pretend to be super-human.
We wrote a Windows front end and a Scala back end for data gathering and rolled it out to a group of volunteers (including devs, lawyers, and even finance people). Sadly the project ran out of time and budget just as things were getting interesting (after a first round of data analysis), so we never published a paper about it.
We also looked at existing tools such as RescueTime (https://www.rescuetime.com/) but decided an external cloud was not acceptable to store our internal productivity data.
Good programming is sometimes mostly thinking, because "no plan survives first contact with the enemy." Pragmatic programming is a judicious combination of planning and putting code to IDE, with the balance adapting to the use case.
This. Programming is mostly reconnaissance, not just thinking. If you don’t write code for days, you’re either fully aware of the problem surface or are just guessing it. There’s not much to think about in the latter case.
That’s an iteration of Peter Naur’s « Programming as Theory Building » that has been pivotal in my understanding of what programming really is about.
Programming is not about producing programs per se; it is about forming certain insights about affairs of the world, and eventually outputting code that is nothing more than a representation of the theory you have built.
Off topic. I'm not a developer but I do write code at work, on which some important internal processes depend. I get the impression that most people don't see what I do as work, engaged as they are in "busy" work. So I'm glad when I read things like this that my struggles are those of a real developer.
Developers need to learn how to think algorithmically. I still spend most of my time writing pseudocode and making diagrams (before with pen and paper, now with my iPad). It's the programmers' version of Abraham Lincoln's quote: "Give me six hours to chop down a tree and I will spend the first four sharpening the axe."
I don't really know what "think algorithmically" means, but what I'd like to see as a lead engineer is for my seniors to think in terms of maintenance above all else. Nothing clever, nothing coupled, nothing DRY. It should be as dumb and durable as an AK47.
>I don't really know what "think algorithmically" means
I would say: thinking about algorithms and data structures so that algorithmic complexity doesn't explode.
>Nothing clever
A lot of devs use nested loops and List.remove()/indexOf() instead of maps; the terrible performance gets accepted as the state of the art, and then you have to do complex workarounds to avoid calling some operations too often, which increases the complexity.
Performance yields simplicity: a small increase in cleverness in some code can allow for a large reduction in complexity in all the code that uses it.
Whenever I write a library, I make it as fast as I can, so that user code can use it as carelessly as possible, and to avoid another library popping up when someone wants better performance.
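To make the maps-versus-nested-lookups point concrete, here is a minimal Python sketch (hypothetical names and data, just for illustration): the same filtering task written with repeated list lookups and then with a set built once.

    # Hypothetical example: drop banned addresses from a user list.
    def active_emails_slow(users, banned):
        # Each `in` on a list scans it front to back: O(len(users) * len(banned)).
        result = []
        for user in users:
            if user["email"] not in banned:
                result.append(user["email"])
        return result

    def active_emails_fast(users, banned):
        # One small "clever" step: build a set once, then every lookup is O(1).
        banned_set = set(banned)
        return [u["email"] for u in users if u["email"] not in banned_set]

    users = [{"email": f"user{i}@example.com"} for i in range(5)]
    banned = ["user2@example.com", "user4@example.com"]
    assert active_emails_slow(users, banned) == active_emails_fast(users, banned)

The point of the parent comment: pay the small amount of cleverness inside the library or helper, so that every caller stays simple.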
We need this to be more prevalent. But the sad fact is most architects try to justify their position and high salaries by creating "robust" software. You know what I mean - factories over factories, micro services and what not.
If we kept it simple I don't think we would need many architects. We would just need experienced devs that know the codebase well and help with PRs and design processes, no need to call such a person 'architect', there's not much to architect in such a role.
I was shown what it means to write robust software by a guy with a PhD in... philosophy, of all things (so a literal philosophiae doctor).
Ironically enough it was nothing like what some architecture astronauts write - just a set of simple-to-follow rules, like organizing files by domain, using immutable data structures and pure functions where reasonable, etc.
Also, I never saw him use dependent types in the one project we worked on together, and generics appeared only when they really made sense.
Apparently it boils down to using the right tools, not everything you've got at once.
I love how so much of distributed systems/robust software wisdom is basically: stop OOP. Go back to lambda.
OOP was a great concept initially. Somehow it got equated with the corporate driven insanity of attaching functions to data structures in arbitrary ways, and all the folly that follows. Because "objects" are easy to imagine and pure functions aren't? I don't know but I'd like to understand why corporations keep peddling programming paradigms that fundamentally detract from what computer science knows about managing complex distributed systems.
Depends. I haven’t come up with the rubric yet but it’s something like “don't abstract out functionality across data types”. I see this all the time: “I did this one thing here with data type A, and I’m doing something similar with data type B; let’s just create some abstraction for both of them!” Invariably it ends up collapsing, and if the whole program is constructed this way, it becomes monstrous to untangle, like exponentially complicated on the order of abstractions. I think it’s just a breathtaking misunderstanding of what DRY means. It’s not literally “don’t repeat yourself”. It’s “encapsulate behaviors that you need to synchronize.”
Also, limit your abstractions’ external knowledge to zero.
The problem is that most developers don't actually understand DRY. They see a few lines repeated in a few different functions and create a mess of abstraction just to remove the repeated code. Eventually more conditions are added to the abstracted functions to handle more cases, and the complexity increases, all to avoid having to look at a couple of lines of repeated code. This is not what DRY is about.
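A minimal Python sketch of that failure mode (hypothetical names, just to illustrate the comment above): two functions that merely look alike get merged, and the merged helper then sprouts flags as the cases diverge.

    # Two headers that happen to look the same today.
    def format_invoice_header(invoice):
        return f"{invoice['id']} - {invoice['date']}"

    def format_report_header(report):
        return f"{report['id']} - {report['date']}"

    # The tempting "DRY" merge, a release or two later:
    def format_header(thing, is_invoice=False, include_owner=False, legacy=False):
        header = f"{thing['id']} - {thing['date']}"
        if is_invoice and legacy:                 # flags pile up as the cases diverge
            header = header.upper()
        if include_owner:
            header += f" ({thing.get('owner', 'unknown')})"
        return header

The two originals were never the same behaviour that needed synchronizing; they only shared a shape.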
In my mind this is breaking down the problem into a relevant data structure and algorithms that operate on that data structure.
If, for instance, you used a tree but were constantly looking up an index in the tree, you likely needed a flat array instead. The most basic example of this is sorting, obviously, but the same concepts apply to many, many problems.
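As a rough illustration of that data-structure point (hypothetical Python, using a linked structure as a stand-in for the tree): if the dominant operation is "give me item number i", a flat list answers it directly, while a linked structure has to walk there every time.

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def nth_in_linked_list(head, i):
        # O(i): follow i links to reach the element.
        node = head
        for _ in range(i):
            node = node.next
        return node.value

    def nth_in_flat_list(items, i):
        # O(1): direct indexing.
        return items[i]

    items = list(range(10))
    head = None
    for v in reversed(items):
        head = Node(v, head)
    assert nth_in_linked_list(head, 7) == nth_in_flat_list(items, 7) == 7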
I think the issue that happens in modern times, especially in webdev, is we aren't actually solving problems. We are just gluing services together and marshalling data around, which fundamentally doesn't need to be algorithmic... Most "coders" are glorified secretaries who now just automate what would have been done by a secretary before.
Call service A (database/ S3 etc), remove irrelevant data, send to service B, give feedback.
It's just significantly harder to do this with a computer than for a human to do it. For instance, if I give you a list of names but some of them have letters swapped around, you could likely spot that easily and correct it. To do that "algorithmically" is likely impossible, and hence ML and NLP became a thing. And data validation on user input.
So algorithmically in the modern sense is more, follow these steps exactly to produce this outcome and generating user flows where that is the only option.
Humans do logic much, much better than computers, but I think the conclusion has become that the worst computer program is probably better at it than the average human. Just look at the many niche products catering to a given wealth group. I can have a cheap bank account and do exactly what that bank account requires, or I can pay a lot of money and have a private banker I can call, who will interpret what I say into the actions that actually need to happen... I feel I am struggling to actually write what's in my mind, but hopefully that gives you an idea...
To answer your "nothing clever": well, clever is relative. If I have some code which is effectively an array and an algorithm to remove index 'X' from it, would it be "clever" code to you if that array was labeled "Carousel" and I used the exact same generic algorithms to insert or remove elements from the carousel?
Most developers these days expect a class of some sort with .append and .remove functions, but why isn't it just an array of structs operated on by the exact same functions as every single other array? People will generally complain that such code is "clever", but in reality it is really dumb: I can see it's clearly an array being operated on. But OOP has caused brain rot and developers no longer know what that means... Wait, maybe that was OP's point... People no longer think algorithmically.
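A minimal Python sketch of the "carousel is just an array" idea above (hypothetical names, just for illustration): a plain list of structs, manipulated with the same generic operations as any other list.

    from dataclasses import dataclass

    @dataclass
    class Slide:
        title: str
        image_url: str

    carousel = []                                   # no wrapper class needed
    carousel.append(Slide("Welcome", "/img/a.png"))
    carousel.insert(0, Slide("Sale!", "/img/b.png"))
    del carousel[1]                                 # remove index 'X' like any array

Anyone reading it can see immediately that it is just list insertion and deletion; nothing is hidden behind a bespoke API.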
> I think the issue that happens in modern times, especially in webdev, is we aren't actually solving problems. We are just gluing services together and marshalling data around, which fundamentally doesn't need to be algorithmic...
This is true and is the cause of much frustration everywhere. Employers want “good” devs, so they do complicated interviews testing advanced coding ability. And then the actual workload is equal parts gluing CRUD components together, cosmetic changes to keep stakeholders happy, and standing round the water cooler raging at all the organisational things you can’t change.
It's an odd analogy, because programs are complex systems and involve interaction between countless people. With large software projects you don't even know where you want to go or what's going to happen until you work. A large project doesn't fit into some pre-planned algorithm in anyone's head; it's a living thing.
Diagrams and this kind of planning are mostly a waste of time, to be honest. You just need to start to work, and rework if necessary. This article is basically the peak of the bell curve meme. It's not 90% thinking, it's 10% thinking and 90% "just type".
Novelists for example know this very well. Beginners are always obsessed with intellectually planning out their book. The experienced writer will always tell you, stop yapping and start typing.
Part of your comment doesn't fit with the rest. With complex projects, you often don't even know exactly what you're building, so it doesn't make sense to start coding. You first need to build a conceptual model, discuss it with the interested parties and only then start building. Diagrams are very useful to solidify your design and communicate it to others.
There's a weird tension between planning and iterating.
You can never foresee anywhere close to enough with just planning. But if you just start without a plan, you can easily work yourself into a dead end.
So you need enough planning to avoid the dead ends, whilst starting early enough so you get your reality checks so you have enough information to get to an actual solution.
Relevant factors here are how cheaply you can detect failure (in terms of time, material, political capital, team morale) and how easily you can backtrack out of a bad design decision (in terms of political capital, how much other things need to be redone due to coupling, and other limitations).
The earlier you can detect bad decisions, and the easier you can revert them, the less planning you need. But sometimes those are difficult.
It also suggests that continuous validation and forward looking to detect bad decisions early can be warranted. Something which I myself need to get better at.
> Novelists for example know this very well. Beginners are always obsessed with intellectually planning out their book. The experienced writer will always tell you, stop yapping and start typing.
I'm tempted to break out the notebook again, but... beyond something that's already merged, what situations make paper changes cheaper than code changes? I can type way faster than I can write.
Do you have any resources for this? Especially for the ADHD kind - I end up going down rabbit holes in the planning part. How do you deal with information overload and overwhelm, or the exploration/exploitation dilemma?
There are 2 bad habits in programming: people who start writing code the first second, and people who keep thinking and investigating for months without writing any code. My solution: force yourself to do the opposite. In your case: start writing code immediately, no matter how bad or good. Look at the YouTube channel "tsoding daily": he just goes ahead. The code is not always the best, but he gets things done. He does research offline (you can tell), but if you find yourself doing only research, reading and thinking, force yourself to actually start writing code.
I wonder if good REPL habits could help the ADHD brain?
It still feels like you are coding so your brain is attached, but with rapid prototyping you are also designing, moving parts around to see where they would fit best.
Doing it right, with only manual tools? I believe so, remembering back to one of the elder firefighters who taught me (and who was also an old-school forester).
Takes about 20 minutes to sharpen a chainsaw chain these days though...
LLMs string together words using probability and randomness. This makes their output sound extremely confident and believable, but it may often be bullshit. This is not comparable to thought as seen in humans and other animals.
One of the differences is that humans are very good at not doing word associations if we think they don't exist, which makes us able to outperform LLMs even without a hundred billion dollars worth of hardware strapped into our skulls.
that's called epistemic humility, or knowing what you don't know, or at least keeping your mouth shut, and in my experience actually humans suck at it, in all those forms
I have a similar experience. Just thought it'd be cute to ask you both for sources. Interesting that asking you for sources got me upvoted, while asking the other guy for sources got me downvoted :)
Various programming paradigms (modular programming, object-oriented, functional, test-driven etc) have developed to reduce precisely this cognitive load. The idea being that it is easier to reason and solve problems that are broken down into smaller pieces.
But it's an incomplete revolution. If you look at the UML diagram of a fully developed application, it's a mess of interlocked pieces.
Things get particularly hard to reason about when you add concurrency.
One could hypothesize that programming languages that "help thinking" are more productive / popular but not sure how one would test it.
One of the things I find interesting about the limited general intelligence of language models (about which I tend to be pretty deflationary) is that my own thought processes during programming are almost entirely non-verbal. I would find it to be incredibly constraining to have to express all my intermediate thoughts in the form of text. It is one of the things that makes me think we need a leap or two before AGI.
Most software developer jobs are not really programming jobs. By that I mean the amount of code written is fairly insignificant to the amount of "other" work which is mainly integration/assembly/glue work, or testing/verification work.
There definitely are some jobs and some occasions where writing code is the dominant task, but in general the trend I've seen over the past 20 years I've been working has been more and more to "make these two things fit together properly" and "why aren't these two things fitting together properly?"
So in part I think we do a disservice to people entering this industry when we emphasize the "coding" or "programming" aspects in education and training. Programming language skills are great, but very important is systems design and integration skills.
Writers get a bit dualistic on this topic. It's not "sitting in a hammock and dreaming up the entire project" vs. "hacking it out in a few sprints with no plan". You can hack on code in the morning, sit in a hammock that afternoon, and deliver production code the next day. It's not either-or. It's a positive feedback loop between thinking and doing that improves both.
In the 2020s, we still have software engineering managers that think of LOC as a success metric.
“How long would it take you to type the diff of six hours' work?” is a great question to force the cognitively lazy software manager to figure out how naive that is.
Nowadays I feel great when my PRs have more lines removed than added. And I really question if the added code was worth the added value if it’s the opposite.
The author listed a handful of the thinking aspects that take up the 11/12 non-motion work... but left out naming things! The amount of time spent in conversation about naming, or even renaming the things I've already named... there's even a name for it in the extreme: bikeshedding. Sometimes I'll even be fixated on how to phrase the comments for a function, or reformat things so line lengths fit.
I would absolutely agree, for any interesting programming problem. Certainly, the kind of programming I enjoy requires lots of thought and planning.
That said, don't underestimate how much boilerplate code is produced. Yet another webshop, yet another forum, yet another customization of that ERP or CRM system. Crank it out, fast and cheap.
Maybe that's the difference between "coding" and "programming"?
> Maybe that's the difference between "coding" and "programming"?
I know I'm not alone in using these terms to distinguish between each mode of my own work. There is overlap, but coding is typing, remembering names, syntax, etc. whereas programming is design or "mostly thinking".
I usually think of coding and programming as fairly interchangeable words (vs “developing”, which I think encapsulates both the design/thinking and typing/coding aspects of the process better)
Implementing known solutions is less thinking and more typing, but on the other hand it feels like CoPilot and so on is changing that. If you have something straightforward to build, you know the broad strokes of how it's going to come together, the actual output of the code is so greatly accelerated now that whatever thinking is left takes a proportionally higher chunk of time.
... and "whatever is left" is the thinking and planning side of things, which even in its diminished role in implementing a known solution, still comes into play every once in a while.
Agree and disagree. Certain programming domains and problems are mostly thinking. Bug fixing is often debugging, reading and comprehension rather than thinking. Shitting out CRUD interfaces after you've done it a few times is not really thinking.
Other posters have it right I think. Fluency with the requisite domains greatly reduces the thinking time of programming.
It's not just thinking though. You're not sitting at your desk quietly running simulations in your head, and if a non-programmer were watching you debug, you would look very busy.
I'd wager the more technically fluent people get the more they spend time on thinking about the bigger picture or the edge cases.
Bug fixing is probably one of the best example: if you're already underwater you'll want to bandaid a solution. But the faster you can implement a fix the more you'll have leeway, and the more durable you'll try to make it, including trying to fix root causes, or prevent similar cases altogether.
Fluency in bug fixing looks like, "there was an unhandled concurrency error on write in the message importing service therefore I will implement a retry from the point of loading the message" and then you just do that. There are only a few appropriate ways to handle concurrency errors so once you have done it a few times, you are just picking the pattern that fits this particular case.
One might say, "yes but if you see so many concurrency related bugs, what is the root cause and why don't you do that?" And sometimes the answer is just, "I work on a codebase that is 20 years old with hundreds of services and each one needs to have appropriate error handling on a case by case basis to suit the specific service so the root cause fix is going and doing that 100 times."
It is an iterative process, unless “to code” is narrowly defined as “entering instructions” with a keyboard. Writing influences design and vice versa.
A good analogy that works for me is writing long form content. You need clear thought process and some idea of what you want to say, before you start writing. But then thinking also gets refined as you write.
This stretches further: an English lit major who specialises as a writer (journalist?) writing about a topic with notes from a collection of experts, and a scientist writing a paper, are two different activities. Most professional programming is of the former variety, admitting templates/standardisation.
The latter case requires a lot more thinking work on the material before it gets written down.
I tend to spend a lot of time in REPL or throwaway unit tests, tinkering out solutions. It helps me think to try things out in practice, sometimes visualising with a diagramming language or canvas. Elixir is especially nice in this regard, the language is well designed for quick prototypes but also has industrial grade inspection and monitoring readily available.
Walks, hikes and strolls are also good techniques for figuring things out, famously preferred by philosophers like Kierkegaard and Nietzsche.
Sometimes I've brought up with non-technical bosses how development is actually done; it didn't work, they just dug in about monitoring and control and whatnot. Working remote is the way to go, if one can't find good management to sell to.
At my previous job, I calculated that over the last year I worked there, I wrote 80 lines of non-test, production code. 80. About one line per 3-4 days of work. I think I could have retyped all the code I wrote that year in less than an hour.
The rest of the time? Spent in two daily stand up meetings [1], each at least 40 minutes long (and just shy of half of them lasted longer than three hours).
I should also say the code base was C, C++ and Lua, and had nothing to do with the web.
[1] Because my new manager hated the one daily standup with other teams, so he insisted on just our team having one.
Were the intense daily meetings any help? I can imagine that if there's a ticket to be solved, and I can talk about the problem for 40 minutes to more experienced coworkers, that actually speeds up the necessary dev time by quite a lot.
Of course, it will probably just devolve into a disengaged group of people that read emails or Slack in another window, so there's that.
Not really. It was mostly about tests. Officially, I was a developer for the team (longest one on the team). Unofficially I was QA, as our new manager shut out QA entirely [1] and I became the "go-to" person for tests. Never a question about how the system worked as a whole, just test test tests testing tests tests write the new tests did you write the new tests how do we run the tests aaaaaaaaaaah! Never mind that I thought I had a simple test harness set up, nope. They were completely baffled by the thought of automation it seems.
[1] "Because I don't want them to be biased by knowing the implementation when testing" but in reality, quality went to hell.
Who hasn't accidentally thrown away a day's worth of work with the wrong rm or git command? It is indeed significantly quicker to recreate a piece of work, and usually the code quality improves for me.
In the software engineering literature, there is something known as "second system effect": the second time a system is designed it will be bloated, over-engineered and ultimately fail, because people want to do it all better, too much so for anyone's good.
But it seems this is only true for complete system designs from scratch after a first system has already been deployed, not for the small "deleted some code and now I'm rewriting it quickly" incidents (for which there is no special term yet?).
I think this was the reasoning behind the adage "make it work, make it right, make it fast" (or something along those lines).
You'd do a fairly rough draft of the project first, just trying to make it do what you intend it to. Then you'd rewrite it so it works without glaring bugs or issues, then optimise it to make it better/more performant/more clearly organised after that.
Absolutely. Parallel to thinking LOC is a good metric comes "we have to reuse code", because lots of people think writing the code is very expensive. It is not!
writing it is not expensive. however, fixing the same bug in all the redundant reimplementations, adding the same feature to all of them, and keeping straight the minor differences between them, is expensive
Not only fixing the same bug twice, but also fixing bugs that happen because of using the same functionality in different places. For example, possible inconsistency that results from maintaining state in multiple different locations can be a nightmare, esp. in hard-to-debug systems like highly parallelized or distributed architecture.
I see "code that looks the same" being treated as "code that means the same" resulting in problems much more often than "code that means the same" being duplicated.
can you clarify your comment with some examples, because i'm not sure what you mean, even whether you intend to disagree with the parent comment or agree with it
Let's take pending orders and paid-for orders as an example.
For both, there is a function to remove an order line that looks the same. Many people would be tempted at that point to abstract over pending and paid orders so that both reference the same function, via adding a base class of order, for example, because the code looks the same.
But for a pending order, it means removing an item from the basket, while for the paid-for order, it means removing an item due to unavailability. So the code means different things.
Let's then take the system to have evolved further, where now on the paid-for order some additional logic should be kicked off when an item is removed. If both pending and paid-for orders reference the same function, you have to add conditionals or something, while if each has its own, they can evolve independently.
And it definitely is disagreement with the parent comment. Sorry for not elaborating on it in the first place.
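A minimal Python sketch of the order example above (hypothetical names): the two removals look identical today, but keeping them separate lets each one evolve on its own.

    class PendingOrder:
        def __init__(self, lines):
            self.lines = list(lines)

        def remove_line(self, sku):
            # Means: the customer takes an item out of the basket.
            self.lines = [line for line in self.lines if line != sku]

    class PaidOrder:
        def __init__(self, lines):
            self.lines = list(lines)

        def remove_line(self, sku):
            # Means: item unavailable after payment; this one can later grow
            # refund/notification logic without touching PendingOrder.
            self.lines = [line for line in self.lines if line != sku]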
i've never lost work to a wrong git command because i know how to use `git reflog` and `gitk`. it's possible to lose work with git (by not checking it in, `rm -r`g the work tree with the reflog in it, or having a catastrophic hardware failure) but it is rare enough i haven't had it happen yet
Famously described as the tale of the two programmers: the one who spends their time thinking will produce exponentially better work, though because good work looks obvious, they will often not receive credit.
Sometimes you really wonder where your time went. You can spend 1 hour writing a perfect function and then the rest of the day figuring out why your import works in dev and not in prod.
I also once butchered the result of 40 hours of work through a sloppy git history rewrite.
I spent a good hour trying different recovery options (to no avail) and then 2 hours typing everything back in from memory. Maybe it turned out even better than before, because all kinds of debugging clutter were removed.
Sometimes, I spend an hour writing a perfect function, and then spend another hour re-reading the beauty of it, just to be pointed out how imperfect the function is in the PR review :))
This is also relevant in the context of using LLMs to help you code.
The way I see it, programming is about “20% syntax and 80% wisdom”. At least as it stands today.
LLMs are good (perhaps even great) for the 20% that is syntax related but much less useful for the 80% that is wisdom related.
I can certainly see how those ratios might change over time as LLMs (or their progeny) get more and more capable. But for now, that ratio feels about right.
Well, exactly. That’s kinda my point. Programming is so much more than just writing code. And sadly, some of that other stuff involves dealing with humans and their manifest idiosyncrasies.
Programming is mostly thinking just like many other knowledge related jobs, it's nothing special. This analogy can be used for most design related jobs, not just software.
If I had my client report deleted today, I could rewrite that report in nearly half the time the next day, because I remember the way in which I structured it, the sentences that flowed and those that didn't, and the way I conveyed the findings in a neutral way.
No different to designing a physical product. You might see the outcome, but you don't see the numerous design iterations and compromises that came along the way which speak to the design that came out the end.
Furthermore, there are going to be programmers that can write more code with less thought, because they might have many years of pre-thought knowledge and understanding that allows them to execute faster.
This article sounds like it was written for a manager that doesn't see the value in the work performed, or simply doesn't understand design related work.
This is why I just don't care about my keyboard, mouse, monitor etc beyond a baseline of minimum comfort.
Typing at an extra 15 wpm won't make a lick of difference in how quickly I produce a product, nor will how often my fingers leave the keyboard or how often I look at the screen. Once I've ingested the problem space and parameters, it all happens in my head.
I often feel that having a "comfortable" keyboard/mouse/monitor is more important than a fast CPU or a fancy graphics card - just because of that slight extra feeling of pleasure/ease that lasts all day long :-).
The advantage of them is that my monitors and keyboards usually last a long time so putting money into them is not as wasteful as putting it into some other components.
One thing that surprised me though is that I recently bought a KVM to switch from desktop to laptop instead of a second monitor and this turned out to be both better and much cheaper. I gave away an older monitor to a relative and found that not having to turn to look at a 2nd monitor was actually nicer. Initially I really didn't want to do this and really wanted another screen but I had to admit afterwards that 1 screen + KVM was better for me.
RAM and disc space just matter up to the point of having enough so that I'm not wasting time trying to manage them to get work done.
It was a very cheap thing off Amazon. I'm in the UK so you might not have it - the brand name is "VPFET KVM Switch 2x1" and it has 4 usb, 1 HDMI outputs and 2x(1 HDMI,1 usb) inputs.
It's the cheapest in their range, I think (about £30) - they have better ones.
Not massively flexible. Has a clicker switch which you could put on the floor if you wanted to let you flip displays. Not super fast at switching.....but it does the job for me. YMMV!
I use a 32-inch ViewSonic monitor with this. It's the most expensive monitor I've ever bought, but it's nothing special when you look at what's out there. It's just lovely to use. :-) I think a purist would complain bitterly about refresh rates or whatever, but I just love it, and I spend my time reading web pages or code or watching the odd video.
When I'm writing something from scratch in a few months I can bash it all out on a small laptop - it is (as you say) all in my head, I just need to turn it into working code.
If I'm faced with some complicated debugging of a big existing system, or I've inherited someone elses project, that gets much easier with a couple of giant monitors to look at numerous files side by side - plus a beefier machine to reduce compile/run times as I'll need to do that every few mins.
You may care more about picking a keyboard & mouse/trackpad/trackball/etc if/when you start to experience pain in your wrists/hands and realise the potential impact on your career if it worsens! Similar situation with seating and back pain.
I’ve faced this viscerally with copilot. I picked it up to type for me after a sport injury. Last week, though, I had an experience; I was falling ill and “not all there.” None of the AI tools were of any help. Copilot is like a magical thought finisher, but if I don’t have the thought in the first place, everything stops.
In my experience quite a lot of programming is thinking, though quite a lot is also communication, if you work in a team
Typing out comments on PRs, reproducing issues, modifying existing code to test hypotheses, discussing engineering options with colleagues, finding edge cases, helping QA other tickets, and so on. I guess this is all "thinking," too, but it often involves a keyboard, typing, and other people
A lower but still significant portion is "overhead." Switching branches, syncing branches, rebasing changes, fixing merge conflicts, setting up feature flags, fixing CI workflows, and so on
Depending on the size of the team, my gut feel for time consumption is:
Communication > Thinking > Writing Code > Overhead
When you work for companies that take programming seriously (e.g., banks, governments, automotive, medical equipment, etc.), a huge development cycle occurs before a single line of code is written.
Here are some key development phases (not all companies use all of them):
1. high level definition, use cases, dependencies; traceability to customer needs; previous correction (aka failures!) alignment
2. key algorithms, state machines, flow charts (especially when modeling fail-safety in medical devices)
3. API, error handling, unit test plan, functional test plan, performance test plan
4. Alignment with compliance rules; attack modeling; environmental catastrophe and state actor planning (my experience with banks)
After all of this is reviewed and signed-off, THEN you start writing code.
This is what code development looks like when there are people's, business's, and government's lives/money/security on the line.
And it's a terrible way to make anything, much less software. It's more forgivable when the cost of outer iteration is high because you're making, say, a train, but even then you design around various levels of simulation, iterating in the virtual world. The idea that you can nail down all the requirements and interfaces before you even begin is why so many projects of the type you describe often have huge cost overruns as reality highlights all the changes that need to be made.
So you think it is smarter to, say, tell the carpenters “just start building a house” without giving them plans?
What you described is not how successful products are built and maintained; what you described is why we have a world full of shitty tech from "move fast and break things", ADHD-like management and young programmers who think they know everything and cry about having to do thinky work first. Literally the worst kind of programmers to have on a project.
I'm talking about engineering, not manufacturing. I'm not suggesting not thinking, I'm saying that you cannot design every aspect of a system without learning more about the design. It's just a restatement of "Gall's law" (in scare quotes because it's obviously not an actual law). Alternatively it's the obvious way of working given the principles espoused in the agile manifesto.
Your absolutism is at odds with reality. Engineering and manufacturing are literally joined at the hip, and both require a significant amount of planning. If you disagree with this, well, good luck with your engineering career, is all I can say!
Precisely! The idea that you can set down the requirements and all the design decisions before you even start thinking about the implementation is completely flawed.
The engineering/manufacturing dichotomy was just with respect to your statement about carpenters. Even then, expediency on the part of the manufacturer will often result in design changes.
I see this so often. It's how terrible software is written because people are afraid to change direction or learn anything new mid project.
I rewrite most of my code 2-3 times before I'm done and I'm still 5x faster than anyone else, with significantly higher quality and maintainability as well. People spend twice as long writing the ugliest, hackiest code as it would have taken to just learn to do it right.
> companies that take programming seriously (e.g., banks, governments, automotive, medical equipment, etc.)
(Some) banks (sometimes) hire armies of juniors from Accenture. I wouldn't say they take programming seriously.
My government had some very botched attempts at creating computer systems. They're doing better these days, creating relatively simple systems. But I wouldn't say they're particularly excellent at programming.
Somehow the post seemed to miss the amount of time I seem to spend on "busy work" in coding. Not meetings and that sort of thing, but just boring, repetitive, non-thinking coding.
Stubbing in a routine is fast, but then adding param checking, propagating errors, clean up, adding comments, refactoring sections of code into their own files as the code balloons, etc.
There's no thinking involved in most of the time I am coding. And thank god. If 11/12 of my time was spent twisting my brain into knots to understand the nuances of concurrency or thread-locking I would have lost my hair decades earlier.
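As a rough illustration of that stub-to-finished gap (a hypothetical Python example, not from the comment): the stub takes seconds; the rest is the param checking, error propagation and logging the parent comment calls busy work.

    import json
    import logging

    logger = logging.getLogger(__name__)

    # The stub: written in seconds.
    def load_config_stub(path):
        return json.loads(open(path).read())

    # The "finished" routine: same idea, plus the non-thinking elaboration.
    def load_config(path):
        if not isinstance(path, str) or not path:
            raise ValueError("path must be a non-empty string")
        try:
            with open(path, encoding="utf-8") as f:
                return json.load(f)
        except FileNotFoundError:
            logger.error("config file missing: %s", path)
            raise
        except json.JSONDecodeError as exc:
            raise ValueError(f"invalid JSON in {path}: {exc}") from exc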
In my experience, the main issue with learning to code are all the contexts you have to remember.
There is the static type level and the runtime level, all the scopes (class, object, function, nested functions, closures, etc.), the different machines (client, server, or even multiple servers, etc.)
That's probably the reason why most devs start with dynamic languages and frontend/mobile. It can get just as complex as backend development, but at least you can eliminate a bunch of contexts when starting to code, learn what's left and then add contexts later.
In my experience (slightly tweaked towards data science and AI research), programming is also experimenting, be it playing with the data, testing a new idea or even debugging.
There is a reason why LISPers (I'm not one of them though) praise so much REPL-driven development: the REPL is a laboratory.
Also, how much time did you spend testing a library, looking on the internet for how to install it, or debugging weird compiler errors? In the end you found the one option that made everything work, but it took you hours of research and experimentation.
That's also why it is difficult for me to say in the standup meeting what I did the previous day. If I tell people I just spent time on "thinking", they will say it is too vague.
If you say "I was analysing the problem, and evaluating different solutions, and weighting the pros and cons, for example, I thought about [insert option A] but I see [insert contra here]. So I also thought about [option B]... I'm still researching in [insert source]". I do not think that should be a problem. Is what I say constantly in our daily's. And of course, eventually I deliver something.
I was introduced, by Leslie Lamport, to a quote from the cartoonist Guindon: "Writing is nature's way of letting you know how sloppy your thinking is."
Although in my experience, thinking is something that is not always highly valued among programmers. There are plenty of Very Senior developers who will recommend, "just write the code and don't think about it too hard, you'll probably be wrong anyway." The whole, "working code over documentation and plans," part of the Agile Manifesto probably had an outsized influence on this line of reasoning. And honestly it's sometimes the best way to go: the task is trivial and too much thought would be wasted effort. By writing the code you will make clear the problem you're trying to solve.
The problem with this is when it becomes dogma. Humans have a tendency to avoid thinking. If there's a catchy maxim or a "rule" from authority that we can use to avoid thinking we'll tend to follow it. It becomes easy for a programmer to sink into this since we have to make so many decisions that we can become fatigued by the overwhelming number of them we have to make. Shortcuts are useful.
Add to this conflict the friction we have with capital owners and the managerial class. They have no idea how to value our work and manage what we do. Salaries are games of negotiation in much of the world. The interview process is... varied and largely ineffective. And unless you produce value you'll be pushed out. But what do they value? Money? Time? Correctness? Maintainability?
And then there's my own bone to pick with the industry. We gather requirements and we write specifications... sometimes. And what do we get when we ask to see the specifications? A bunch of prose text and some arrows and boxes? Sure, someone spent a lot of time thinking about those arrows and boxes, but they've used very little rigor. Imagine applying to build a skyscraper and, instead of handing in a blueprint, you give someone a sketch you drew in an afternoon. That works for building sheds, but we do this and expect to build skyscrapers! The software industry is largely allergic to formalism and rigor.
So... while I agree, I think the amount of thinking that goes into it varies quite a lot based on what you're doing. 11/12? Maybe for some projects. Sometimes it's 1/2. Sometimes less.
I once did a week's worth of 'debugging/looking at code, trying stuff and testing' before committing my two lines of changed code. About 40 hours of work to produce what would take less than a minute to type.
It wasn't even the worst code base I've inherited but it is the worst lines of code per hours worked I've ever had.
Well, of course it is. Before progress in knowledge there comes a stage we can call learning, and however you bend it, learning involves thinking at its base.
So thinking does not merely make you a better programmer; it makes you a better programmer at every given moment. Your progression never stops, and like I just said, it's simply called learning.
Someone said success is 1/10 genius and 9/10 hard work (and a big supply of pencils, paper and erasers). To -create- something (software too) is to understand how to do it right, not only how to make it work; it is to know the 'rules' (often standing on the shoulders of giants) so that maybe one day you can update (or break) those rules.
> typing and tools are not the most important aid to quick code production
I think they are in an indirect way. Being able to touch type reasonably fast prevents the action of typing distracting you from your thoughts, or lagging so far behind your thoughts that it makes it harder to keep your thoughts clear in your mind.
I think the OP means that problems are just too hard to solve without thinking before starting to type. So basically the typing part is decoupled from the real thinking, and is just about picking up half-ready ideas from the queue and deserializing them, so to say, as code.
IMHO Programming is mostly iterating (which involves thinking).
The title, read alone, with only a brief skim of the article, might give PHBs the notion that you can just sit developers in a room, let them "think" for 5.5 hours, and then hand them their keyboards for the last 30 minutes and get the product you want.
My architect introduced Architecture Decision Records to my company and it's made a world of difference. Are there other tools and mechanisms people have found helpful to facilitate and scale group thinking?
Programming is indeed thinking but thinking through coding. Think -> Code -> Evaluate. If you remove the thinking you have a code monkey. If you remove the coding you have analysis paralysis.
Code is more flexible than text for drafting the idea. Later on it will expand into a fully working solution. Defining non functional interface definitions and data objects for "scribbling" the use case works best for me.
This really tracks for me. Personally, I have abysmal typing skills and speed. Not only has this never limited my productivity as a software developer, in some ways it has forced me to be a better developer.
I love this type of interview. I code with the goal of showing how it's done, not how much I can do, and they very soon realize that they won't see finished code on "this" call. It's an opportunity to teach non-coders that hidden intricacies exist in even the smallest ticket. Usually just a few minutes in they're fatigued, and they know who I am.
> Sadly, overnight the version control system crashes and they have to recover from the previous day's backup. You have lost an entire day's work.
If I give you the diff, how long will it take you to type the changes back into the code base and recover your six-hours' work?
I accidentally deleted about 2 weeks of my work once. I was a junior dev and didn't really understand how svn worked :). It took about 2 days to redo, but I had no diff and the work hadn't been 100% ready, so after recreating what I had I still had to spend about a day to finish it.
Now with Github Copilot it's even worse/better – whether you like to type or not. Now it's 1) think about the problem, 2) sketch types and functions (leave the body empty), 3) supervise Copilot as it's generating the rest, 4) profit.
I think this is true for certain categories of coding more so than others.
Copilot is above all fantastic for outputting routine tasks and boilerplate. If you can, through the power of thinking, transform your problem into a sequence of such well-formed already-solved tasks, it can basically be convinced to do all the key-pressing for you.
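Here is a hedged sketch of that "sketch types and functions" step in Python (hypothetical names; the point is the shape, not the specific domain): you write the data types and empty, typed signatures yourself, then supervise the assistant as it proposes the bodies.

    from dataclasses import dataclass

    @dataclass
    class Invoice:
        customer_id: int
        amount_cents: int
        paid: bool = False

    def load_invoices(path: str) -> list[Invoice]:
        """Parse invoices from a CSV file at `path`."""
        raise NotImplementedError  # body left for the assistant to propose

    def total_outstanding(invoices: list[Invoice]) -> int:
        """Sum of amount_cents over unpaid invoices."""
        raise NotImplementedError  # review the generated body before accepting

The thinking lives in the types and signatures; the key-pressing for the well-formed, already-solved parts is what gets delegated.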
An iterative model which has been up-front loaded with a firm architecture, feature elaboration, a rough development and testing plan, resources allocated and some basic milestones to hit so that upper mgmt. can get an idea when useful stuff might land.
The development _process_ can be as agile-y as you like, so long as the development of features moves incrementally with each iteration towards the desired end-goal.
I'm not sure that follows. If most programming is thinking, then it makes sense to minimise the amount of thinking that is wasted when circumstances later change.
In my experience of some decades, with a wide variety of programming, in recent decades "Programming Is Mostly":
(1) making sense out of missing or (as technical writing) badly written documentation, needed to do
(2) the mud wrestling of system management.
Example 1: Had everything else working then needed a connection string or some such between my Visual Basic .NET code and SQL Server. Struggled with (1) and (2) full time for over a week, with frustration rising like a South Pacific volcano getting ready to erupt, finally reached by phone some relatively high executive who was surprised that anything so routine could be so difficult and quickly gave me the secret. Then my work raced ahead, no problems, and all my code worked fine.
Example 2: Wanted a way for occasional, simple, routine file copying between two computers on a LAN (local area network), with both connected to the same cable modem. Looked at the old "NET SHARE" and "NET USE", encountered mostly unclear, obscure, undefined technical terminology, tried to make sense out of the obscure syntax diagrams, observed that apparently nearly none of the many combinations of claimed options actually worked, and eventually, from two weeks of full time work, like throwing mud against a wall to see what would stick, underhand, back spin, side arm, etc., did a lot of guessing about what could be wrong and why (mostly involving computer security), discovered that likely computer A might be able to initiate a read from computer B but no way could computer B initiate a write to computer A, etc., got some things working, but still need to document the effort.
More problems on the horizon ahead:
Problem 1: How to have on one computer at least two instances of the operating system on separate (bootable) hard disks and be able routinely (a) to back-up to additional internal and/or external hard disks all of the installed instances or restore from those backups and (b) recover quickly in case any hard disk fails or any bootable instance becomes corrupted.
Problem 2: How to install PostgreSQL on Windows 7 Professional and/or Windows Server 2019 and get code in Visual Basic .NET to use PostgreSQL. Is there some good documentation?????? Will be thrilled to learn that in this case there is no problem!!!
Once (1) and (2) are out of the way and, as results, the basic tools are working, the APIs (application programming interfaces) are well documented,
then the syntax and semantics of the programming languages are easy and the programming itself is the easy, short, sweet dessert after all the nonsense before. The heap data structure, B+-trees, key-value stores, the .NET platform invoke, matrix inversion and eigenvalues, the Gleason bound, .NET managed memory, semaphores for controlling multiple threads, base64 encoding, HTML, etc. -- all really simple, fast, easy. (1) and (2) -- well over 70% of the effort.
I think it was meant as a parallel; historically light cavalry (explorers) and heavy cavalry (exploiters) had different ideals*, different command structures, and when possible even used different breeds of horses.
Compare the sorts of teams that do prototyping and the sorts of teams that work to a Gantt chart.
* the ideal light cav trooper was jockey-sized, highly observant, and was already a good horseman before enlisting; the ideal heavy cav trooper was imposing, obedient, and was taught just enough horsemanship to carry out orders but not so much that he could go AWOL.
You could see the recon as the thinking from the article. The enemy and terrain are the code and various risks and effects associated with changing it.
I certainly notice folks who code about 30 minutes a day line-wise, but that's just because they're distracted, or don't care.
Also, very very rarely is someone just sitting around and pondering the best solution. It happens, and yes it's necessary, but that's forgetting that for so much work the solution is already there, because one has already solved it a thousand times!
This article is straight gibberish except for perhaps a small corner of the industry, or beginners.
To me, it's the exact opposite. It's beginners who spend a lot of time coding, because of inexperience, and bad planning. The first thing they do when they have a problem, is to open their editor and start coding. [1]
I have been in this career for 20 years, I'm running my solo company now, and I'd say I spend on average 2 hours coding a day. I spend 10 hours a day just thinking, strategizing, but also planning major features and how to implement them efficiently. Every time I sit down to code something without having planned it, played with it or left it to simmer in my subconscious for a couple of days, I over-engineer or spend time trying an incorrect approach that I will have to delete and start again. When I was an employee, the best code was created when I was allowed to take a notepad and a cup of coffee and play with a problem away from my desk, for however long I needed.
One hour of thinking is worth ten hours of terrible code.
---
1: If our programming languages were better, I would do the same. But apart from niche languages like Lisp, modern languages are not made for exploratory programming, where you play and peel a problem like an onion, in a live and persistent environment. So planning and thinking are very important simply because our modern approach to computing is suboptimal and unrefined.
It's also: "Damn I just wrote this whole new set of functions while I could have added some stuff to this existing class and it would have been more elegant... Let me start over..."
"Really good developers do 90% or more of the work before they ever touch the keyboard;"
While that may be true sometimes, I think that ignores the fact that most people can't keep a whole lot of constraints and concepts in their head at the same time. So the amount of pure thinking you can do without writing anything at all is extremely limited.
My solution to this problem is to actually hit the keyboard almost immediately once I have one or more possible ways to go about a problem, without first fully developing them into a well specified design. And then, I try as many of those as I think necessary, by actually writing the code. With experience, I've found that many times, what I initially thought would be the best solution turned out to be much worse than what was initially a less promising one. Nothing makes problems more apparent than concrete, running code.
In other words, I think that rather than just thinking, you need to put your ideas to the test by actually materializing them into code. And only then you can truly appreciate all consequences your ideas have on the final code.
This is not an original idea, of course, I think it's just another way of describing the idea of software prototyping, or the idea that you should "throw away" your first iteration.
In yet different words: writing code should be actually seen as part of the "thinking process".