I've always found AoC to be added stress during a time of the year I'm not really running a stress-deficit. I find the competitive aspect a pretty big turn off. I usually get a few puzzles in and then burn out, because I can't keep up with the people who do it every day (I don't even use my computer every day).
I wish the scoring didn't prioritize timeliness. This is really the sort of thing I'd enjoy cranking out on a plane ride, not every day.
I enjoy doing these too and would prefer if there were no scoring at all. When I start right at release time, it stresses me out. So I try to find reasons to introduce small delays that guarantee I'm not in the top hundred, and the stress goes away.
If you're not using your computer every day, just think of yourself as having been removed from the competition. Then you can just solve fun puzzles that don't have a competitive aspect for you.
You could also make a browser extension to hide the leaderboard if that helps.
I just started a shared Slack channel with my co-workers and put the call out for any other interested hobbyists from other departments. We'll collaborate and discuss there, and ignore the public boards.
I found in previous years I gave up a few puzzles in, so I need some community to keep the interest up. But I have zero interest in the competitive aspect.
Not Slack, but this is what I did with colleagues last year (I first participated in 2018 and got a group together in 2019). It was fun and moderately competitive (between me and one other guy; the rest were just solving problems, but he is always competitive and I like to goad him), and we got to teach some of the young guys at the office things they should've learned in their CS education (but somehow didn't).
I suspect that they learned these things, but it didn't stick. From memory of 2018/2019, here are things that the younger colleagues didn't know (or didn't know well enough to get without prompting):
- When to use CFG versus regex for parsing. Many inputs could be parsed with regexes if you made certain assumptions and got lucky. But CFGs were much easier for some.
- Shunting-yard algorithm. This came up, I think, in 2018.
- Using fast/slow pointers (Floyd's tortoise and hare) to detect cycles and cycle length: advance one pointer a step at a time and the other two steps at a time; when they collide you know there is a cycle, and from there you can work out how long it is (see the sketch after this list). Most used a hash table/map, but that wasn't effective (due to RAM requirements) for some very large inputs. This actually comes up a few times: variants of Game of Life, or just numeric processes.
- Multi-threading. It was very useful to implement the intcode VM using threads in 2019. Everyone who tried to juggle state and run multiple VMs at a time struggled when we later had to have a large number of VMs communicating with each other.
- Maybe not CS proper, but typically covered as part of a CS curriculum in a discrete math course. Several times problems related to modular arithmetic and permutations came up. If you understood them, the problem was tractable. If not, you struggled and maybe solved it but it wasn't efficient.
- Various graph and search algorithms. Particularly A* and Dijkstra's. Looking back, it seems there were a couple maze ones in 2019 and several more maze/path finding ones in 2018.
- Sorting/ordering of graph nodes, specifically topological sort. If you know what it is, it makes several of the problems much simpler over the years. I think it was specifically used in 2018.
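To make the fast/slow bullet concrete, here is a minimal Python sketch of Floyd's tortoise-and-hare cycle detection. The names find_cycle and step are just placeholders for whatever state transition a given puzzle defines; this isn't code from any specific AoC problem:

    def find_cycle(step, start):
        # Floyd's tortoise and hare: for the sequence start, step(start),
        # step(step(start)), ... return (mu, lam), where mu is the index at
        # which the cycle begins and lam is the cycle length.
        # Uses O(1) extra memory, unlike storing every state in a hash map.
        tortoise, hare = step(start), step(step(start))
        while tortoise != hare:           # phase 1: hare moves twice as fast
            tortoise = step(tortoise)
            hare = step(step(hare))

        mu, tortoise = 0, start           # phase 2: find where the cycle starts
        while tortoise != hare:
            tortoise, hare = step(tortoise), step(hare)
            mu += 1

        lam, hare = 1, step(tortoise)     # phase 3: measure the cycle length
        while tortoise != hare:
            hare = step(hare)
            lam += 1
        return mu, lam

    # Toy example: the sequence goes 0,1,2,3,4 and then 5..9 repeats forever.
    print(find_cycle(lambda x: x + 1 if x < 9 else 5, 0))  # -> (5, 5)

Once you have mu and lam, the state after some huge step count N (with N >= mu) is the same as the state after mu + (N - mu) % lam steps, which is usually all a "run this for a billion iterations" puzzle actually needs.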
Excellent comment. I did both parts last year up to day 17 and then just had to stop, but learning about things like A* was really what made it worthwhile.
I have no formal CS training, but I've done a lot of online courses, so I use events like this to help plug gaps and learn new languages. It would have been great to have a mentor like you through some of these puzzles last year, as it can be quite hard to fill algorithm knowledge gaps while also quickly trying to roll your own solutions.
Still, really a cool thing. I couldn't resist starting again today.
I'm in the same boat as you, having no formal college/university CS training. You've given me a good idea for a blog series: for each day of AoC, write up the solution I came up with without the benefit of knowing the useful algorithms and data-structure concepts, then read the best solutions I can find on reddit etc. and write a post on how they work, what underlying concepts they use, and how those concepts work. If I did that for all days/years of AoC, I could finish with a summary of the most useful concepts I learned and which problems they applied to.
Same. The beginning is fun, but then it gets too hard for my taste. I don't have enough time or motivation. It looks like many people appreciate the difficulty, though.
Even though I enjoy competing for the leaderboard, the creator is pretty up-front about the leaderboard being Not The Point and advises you to ignore it, and from that perspective the midnight release time makes more sense.
You can't make a leaderboard and then say it's not important. Especially when you don't even opt into it, you just...get on the leaderboard.
If the creator were genuine about it not being important, they'd explicitly limit the leaderboard to an opt-in process. You shouldn't even be able to see the leaderboard unless you have opted into it. It shouldn't be visible or clickable.
Did you know HN has one? https://news.ycombinator.com/leaders It used to be a top link across the site, but they got rid of it for similar "not the point" reasons.
Admittedly, I did use to spend a lot more time commenting and posting here when it existed, and have since slipped way down :-)
Last year almost 100,000 people solved the first part of the first puzzle. Almost 3000 people solved the last part of the 25th and last puzzle. The leaderboard has only 100 entries.
It's important to a lot of people. I get that I'm in the minority here, but I can make it on the leaderboard a lot of the time if I try. But trying to get on the leaderboard stresses me out and ruins the fun, so I instead do things like start a bit late to sabotage myself.
I still make it on the leaderboard sometimes if I get too excited about starting right away or if it's a particularly hard problem and I get lucky figuring it out. That stresses me out a bit. I'd be happier if the leaderboard weren't there.
You could always solve this problem by setting yourself an extra "day 0" challenge: write a userscript that runs on the contest website and hides all references to the leaderboard.
I also find the scoring stressful, so I just ignore it and do the puzzles at my leisure. I still burn out/run out of time eventually, but it’s much more peaceful
Last year I did just that: I did it without worrying about time. The year before, I did it as fast as I could (as did a bunch of other HNers), got a fair few top-100 placements, and finished the entire thing... super stressful, but fun. Last year, doing it in a relaxed manner, I never finished; I'd just keep putting off the next puzzle.
This year I think I'll just watch from a distance. I think it is really worth doing it as fast as you can at least one year, it really highlighted a bunch of areas I was quite rusty in.
If you must look at the scores, you can now make your own private leaderboards. If you're the only member of the leaderboard, then you'll win every challenge.
This is dismissive and unnecessarily condescending. Is it possible to opt out of the public leaderboard entirely? Because if not - perhaps things have changed - the stressful part about AoC is having a publicly viewable permanent score based on arbitrary and unfair criteria. Simply “not looking at it” isn’t enough.
I did AoC a few years ago and realized that I had my GitHub profile associated with “middle-of-the-road” scores on a public list. It isn’t really a problem, and I realize I may sound too sensitive, but it is a bit stressful to be ranked in a competition you have no interest in competing in. Think of an ignorant recruiter deciding to look at a “better” programmer higher on the list, etc. And in a community/society that places a huge premium on arbitrary scores like the SAT and IQ, there’s a small twinge of unavoidable stress when you see “you completed 75,435th” simply because you decided to do a puzzle on a lunch break. I’d rather just avoid it entirely.
Likewise, there are many people desperately competing for a “top spot” that measures their ability to... drink coffee late at night, mostly. It’s just kind of gross and not something I feel like encouraging. Having an opt-in leaderboard would be fine, but AoC pushes it way too hard. The emphasis should be on fun daily puzzles, not an arbitrary competition based on who is most alert at a given hour.
I don't really understand this perspective at all. I've done a few of the previous AoCs to various levels of completion.
I don't think I've ever been quick enough to figure on the leaderboard. I'm not a competitive person so that aspect of it doesn't really appeal to me and I've been able to totally ignore it every year without it really intruding on me or causing me any stress related problems.
You're imagining a scenario with this "ignorant recruiter" that doesn't really exist. Why not just worry about that if / when it really does become a problem...?
We can point to a myriad of potential stress factors that come out of the design of so many everyday items. We'd be overrun with worries if we let them dominate our minds.
Just chill out and try to ignore the competitive aspect; it's not that difficult.
I used a cutesy synecdoche to illustrate the overall point that “AoC leaderboard ranking” is essentially uncorrelated with programming ability, despite how heavily the leaderboard is pushed on participants. Otherwise I think the rest of the comment is well-argued and I don’t think a full reading of my comment suggests that “drink coffee late at night” was literally the main deciding factor. In a literal sense, geographic location/time zone is a much stronger variable than coffee consumption. I realize the internet in general (and HN specifically) isn’t a great place for rhetoric like this.
Regardless, I feel like you’re being excessively pedantic as a means to... police my tone without addressing any of the actual arguments.
You literally have to actively choose to look at the leaderboard. And you have to make your account link to your profile. So if you are capable of ignoring a thing that has no impact, and of not willfully publishing your connection to the site, you're in the clear.
Since only the first 100 get points, it's really easy to ignore the leaderboard. Odds are against you when (at least in the early days) 30-50k people are competing.
I just find the idea that anybody would judge you on the scores absolutely crazy. Have you looked at people's code?! Do I want to hire people who can create unreadable one liners, then rewrite them into another unreadable one liner for the second part of each puzzle? Do I want to make it impossible to hire people with jobs, kids, or people in different timezones? Would you want to work somewhere that used this sort of thing to decide how to build teams?
Advent of Code for me has always been about creativity, learning new languages, some fun with co-workers. Occasionally after finding the simplest solution, I'll try and work out algorithms with better performance, or even read papers on the underlying fundamental computer science problem involved. But I try to write the code as I would in the day job, and so I always use the example inputs to write unit tests and I always try to refactor stuff to be readable. I'd much rather see code I could imagine working with in real life on someone's GitHub profile than something absurdly terse.
All that said, the puzzles this year have been pretty simple and do seem slightly more optimised to people on speedruns than some years previously, so it's not been that stimulating so far.
Unless you are actively trying to compete, it's pretty unlikely you'd end up on any of the leaderboards. As far as I can tell, it only tracks the top 100.
It is a big turn-off that the puzzles are always released at a time that is convenient for Americans, instead of rotating across the globe or some other scheme.
The midnight release in the Eastern timezone doesn't help either. I'd assume this is convenient for West coasters. And probably excludes Europeans altogether.
They unlock at 6 am Central European Time (5 am in the UK).
This is the only month of the year when I will consistently get out of bed at 5:55, give the puzzles up to 2.5 hours, then have breakfast and get ready for work.
Personally I find it super convenient since I and many of my close colleagues begin work so late, but I also know a lot of people who would be on their commute to work at that time.
Well, for me it's a completely different time zone, so I don't have the option of starting exactly at release time; even if I wanted to, it's the middle of the night here (5 in the morning).
I do it when I have time for it, and when I get to it in the evening I have fun solving it. Last year I remembered it far too late, made it to (I think) the 10th challenge, and then forgot about it again until yesterday. Now I try to follow along each day if I have the time and feel like it, and I don't feel pressured by it; it's for fun and maybe learning something new! I actually loved writing half an interpreter last year, and I hope for something similar this year!
I usually just do the most recent year's puzzles whenever I have an itch to learn a new programming language, or to use one I already know differently. I like the problem format and the stars tracking progress.
Yeah - it would be much cooler without leaderboards. I can see myself working on them late in the evening (after work and after I've had my rest), but as it is I would need to get up at 6am to "stay in the game" :D.
Obviously, nobody makes me do that or compete, but if I don't, it feels like I'm just a passive bystander instead of a participant.
I agree with this. I've participated since 2016 but haven't yet completed all 25 puzzles; I usually burn out after about day 8-10 (partly due to the difficulty curve).
I'm based in the UK, so the puzzles get unlocked at ~6AM which gives an hour or two before work if I wake up earlier than usual.
I think I'll still take part this year, but at a more relaxed pace.
This is correct. I am a PhD student and I know several undergraduates who get up at 5am (or stay up until 5am) to get started on the AoC problems ASAP.
I'm in central Europe and so puzzles unlock at 6 here. I've often thought that this time is nearly ideal for me. But at the same time, had I lived in the UK, it would have been absolute hell to try and be competitive. So an hour can make a huge difference.
I have had days where I've just caved in to the lack of sleep and decided to sleep in the next day, sliding on leaderboards in the process.
I hadn't even realised it was a competition - I'm not really interested in that... I just want to improve my coding-exercise ability should I ever interview again, and I'm looking forward to it getting absurdly hard and to 100% CHEATING. But I will still learn something, so I'm happy to be taking part this year!
I did it in 2018 in Elixir, when José Valim, the language's creator, publicly streamed his solutions [1] at midday (in Europe), so you had quite some time to solve them yourself and then later watch him go through them.
I will be forever grateful to him for this opportunity.
It is extremely competitive. Stress is indeed motivational, and it is part of what makes it fun. Once the difficulty starts to increase I can spend an insane amount of time on a simple task. I stay away from these competitions simply because I don't have time to dig into the material, and it makes me feel terrible when I have to abandon a task because of real life or whatever else comes up.
I do have a tremendous amount of respect for the developers who complete these as if it were nothing.
Right, whether or not I suck at algorithms really isn't germane in December: I know I suck at preparing for Christmas, and I can think of few poorer remedies for that than taking on a daily algorithm challenge.
So far, I've used it as a way to learn new languages - I've done it in D, C#, Swift so far. I don't bother with the competition aspect, but I do have a few people that I bounce solutions off of.
This year I'm taking a different approach though, I'm going to use it to re-learn an old language - UniVerse[0] Basic[1]. In my first IT job, I supported an in-house system that ran on UniVerse. I then moved on to working with a commercial system built on UniData[2] (a close cousin to UniVerse). These products mainly exist to allow Pick-style MultiValue applications to run on modern systems. They are closed-source commercial products, but there is a limited-use personal edition available.
One nice thing about these is that they don't just emulate a Pick environment, they also give it features that Pick systems never had. For example, UniVerse Basic is capable of making HTTP/HTTPS requests, parsing XML and JSON, and at some point UniVerse Basic even gained the ability to interact with Python objects. One of the first things I built in preparation for this was a subroutine to retrieve the input data, downloading it and caching it if required.
It's been about 10 years since I've worked with this technology, so I'm really looking forward to re-learning it.
I do the same thing! One year, it gave me a reason to really understand Javascript. Another time I learned Go. Last year, it was the 50th anniversary of the release of the first PDP-11, so I'm still going through and writing the solutions in bare-metal PDP-11 assembly.
How are you finding the challenges in PDP-11 assembly? Are there any unique things about the PDP-11 instruction set or architecture that help with the challenges?
Interestingly, the space of potential programs was small enough (105 bits) that you could smash an SMT solver into it and solve for a correct, minimal program, given the shape of your obstacle course:
https://www.mattkeeter.com/projects/synthesis/
I had this plugin installed for a long time. The idea is really awesome, but it's just like with debugging frameworks: most of the time I just use console.log, or in this case the plain debugger, instead of the powerful tools.
Also many times I don't actually debug loops and list mutations.
But I also think that it's a missed opportunity to measure productivity and runtime performance of programming languages.
In "Benchmarks Game" you get extremely optimized programs which is not usually achievable by an average developer. On AoC you would get a wide distribution of submissions and you could get the right estimate of how performant are the programs submitted by PHP, Python, or Rust developers.
I guess we wouldn't get to see people abusing mpz_add on Python or PHP that often.
You get what people choose to contribute — if you want to see Python or PHP programs that don't use GMP then please contribute those programs.
The benchmarks game shows up to 10 programs for each language implementation: some will be all about performance, some will be about the language, some will be about program size.
This is a great opportunity to use new languages and get a feel for how communities take on newcomers! I tried Clojure a few years back and got a warm welcome, so I'd like to extend an invitation to anyone considering using Clojure for this year's puzzles: head over to the Clojurians Slack and the #adventofcode channel and hang out. There's no competitiveness; people just like to compare notes and discuss approaches. A lot of European representation too!
I've always struggled with AoC in the past due to work and family commitments, but this year is different. I now have a work from home, flexitime job and thus so much more time to devote to it. Hope to make it beyond day 2 at least!
Anyone else planning on rotating languages? I think that using a different language every day is too much of a stretch for me, but I plan to switch between C++, Rust and Julia.
I did 20 languages last year and I'm gonna stick to 1-2 this year :P It wasn't exactly stressful, but it didn't help with being quick or having nice solutions. Wrote a summary: https://f5n.org/blog/2019/advent-of-code-2019/
Julia, Common Lisp, and maybe Haskell or something like that for me. I use Julia to get the correct solutions in a naive way, then I optimise the approach and compare the efficiency of the languages. It's pretty fun. Yesterday's puzzle was 0.0002 in Julia and 0.002 in CL for the exact same approach.
That damn intcode computer from last year is still giving me PTSD. It had a bug that somehow didn't show up in the first three challenges; it took me forever to find.
I had one of the funniest bugs of my whole programming life with intcode. It manifested itself in the last puzzle (iirc), the one with the maze. I couldn't find a single item, because the one instruction that part relied on was buggy, while the rest of the implementation was 100% fine.
Is Advent of Code used mainly for learning new languages? I find it very different from Topcoder-style questions. I tried 3 puzzles, and having a part 1 and part 2 in each question makes this a good test of how to design code so that it's extensible for future changes. Can someone who has done Advent of Code in the past confirm whether learning new languages and making your code extensible are the primary objectives of AoC questions?
It's themed and timely. I'm not a great coder and don't have a CS background, but I have found it very useful for stretching my abilities to the limit. It's much more friendly than any code competition site I've been on, and it's also useful being able to go to reddit to get tips or occasionally help others (very occasionally in my case!).
The quality of the puzzles. They're 2-stagers, where the first stage is generally to ensure you've understood the problem, and the second is a "twist" like a vastly expanded problem space.
The factor that makes it stand out for me is the community. The r/adventofcode subreddit is very active every year and not only can you learn programming tips and tricks from others but you can also see people doing crazy things like solving problems in tools like Excel, Git, etc. and cool visualizations of their solutions.
My experience of other coding challenges is: here's the problem, and you submit source code which is run against many test cases to confirm correctness. Advent of Code only asks you to submit an answer, not source code.