I've been thinking for a while now about building a system like this. There is one particular set of traffic lights on my commute which causes 45-60 minute delays at peak times. I've wondered how some small changes to timings of the light might reduce some of that, and was thinking about building something to simulate it and test my theory.
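As a rough sketch of the kind of model I mean (all rates and timings below are invented for illustration), even a toy fixed-cycle queue shows how sensitive peak queue length is to the green/red split:

```go
package main

import "fmt"

// simulate models a single fixed-cycle traffic light: cars arrive at a
// constant rate, and the queue drains at a constant rate while the light
// is green. It returns the peak queue length over the simulated period.
func simulate(arrivalPerSec, dischargePerSec float64, greenSec, redSec, totalSec int) float64 {
	queue, maxQueue := 0.0, 0.0
	cycle := greenSec + redSec
	for t := 0; t < totalSec; t++ {
		queue += arrivalPerSec
		if t%cycle < greenSec { // light is green: drain the queue
			queue -= dischargePerSec
			if queue < 0 {
				queue = 0
			}
		}
		if queue > maxQueue {
			maxQueue = queue
		}
	}
	return maxQueue
}

func main() {
	// Compare two green phases against a fixed 60s red over one hour.
	for _, green := range []int{30, 45} {
		peak := simulate(0.4, 1.0, green, 60, 3600)
		fmt.Printf("green=%ds red=60s -> peak queue ~%.0f cars\n", green, peak)
	}
}
```

With these made-up numbers, a 30s green can't keep up with arrivals and the queue grows for the whole hour, while 45s keeps it bounded; that kind of tipping point is exactly what I'd want to find for my junction.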
The other thing that is interesting about traffic patterns is how truly connected the entire network of roads is. You can have a collision on a major road that has a big impact on roads several miles away. I've always thought it would be fun to visualize that somehow, to determine not just the immediate effects of a change to a road system, but how it affects other commuter routes.
Definitely going to play around with this. Kudos to the author.
Before you get too excited about possible performance wins, note that gccgo turns out to be much slower than gc (the standard Go compiler) on a lot of real workloads. Here's one such benchmark, taken recently, though it's admittedly a microbenchmark [0]. Dave Cheney found similar results, though quite a while ago [1].
gccgo didn't support escape analysis for quite a long time, which meant that the performance cost of the increased heap allocations absolutely dwarfed whatever performance gain you got from GCC's smarter optimizations. I think it's recently gained support for escape analysis—it's actually quite difficult to find information about this, and whether you need a compiler flag to enable it, etc.—but I don't think that's going to tip the performance scales in gccgo's favor. EDIT: See below: 8.1 ships on-by-default escape analysis!
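For the curious, here's a minimal sketch of the kind of decision escape analysis makes, using the standard gc toolchain (build with `go build -gcflags=-m` to see the compiler's choices; I genuinely don't know the equivalent gccgo flag, which rather proves the documentation point):

```go
package main

import "fmt"

// addOne takes the address of a local, but the pointer never leaves
// the function, so escape analysis keeps n on the stack. Without
// escape analysis, taking &n would force a heap allocation.
func addOne() int {
	n := 41
	p := &n
	*p = *p + 1
	return n
}

// newCounter genuinely escapes: &n outlives the frame, so even with
// escape analysis the gc compiler reports "moved to heap: n".
func newCounter() *int {
	n := 0
	return &n
}

func main() {
	fmt.Println(addOne())
	fmt.Println(*newCounter())
}
```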
The primary motivations for gccgo, at least to my understanding, were a) having a second reference implementation that could find bugs in gc and the language specification, and b) support for esoteric platforms that will likely never make it into gc. The announcement of gccgo has more information about its motivation, though it makes some claims about performance that haven't stood the test of time [2].
> GCC 8 provides a complete implementation of the Go 1.10.1 user packages.
> The garbage collector is now fully concurrent. As before, values stored on the stack are scanned conservatively, but values stored in the heap are scanned precisely.
> Escape analysis is fully implemented and enabled by default in the Go frontend. This significantly reduces the number of heap allocations by allocating values on the stack instead.
I read this as "if it is on the stack and looks like a reference when you squint your eyes, then we treat it as a reference" compared to objects on the heap where the GC seems to know the exact location of all references.
Right, that's the standard conservative garbage collection. However, new results show that it's significantly worse than a precise collector (reference pending).
I'd love to see that reference, as I've not yet seen an example where this was true in practice, but I know of several examples where it is definitely false (e.g. we use a modification of the Julia GC which uses conservative stack scanning to allow interop with another system, and it seems to perform very well).
I think that scanning only the stack conservatively shouldn't differ much from a completely precise GC. The chances of misidentifying a potential reference to an enormous collectable data structure are pretty slim in a memory region as small as a typical stack.
Scanning the entire heap is different because of its generally much bigger size. It's far more likely then to see references where there are none.
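To make the conservative-scanning failure mode concrete, here's a contrived Go-flavoured sketch (the bit pattern is invented for illustration):

```go
package main

import "fmt"

func main() {
	// Ordinary integer data in a stack slot. If this value happened to
	// coincide with the address of a dead heap object, a conservative
	// stack scan would have to treat it as a possible reference and
	// retain the object; a precise scan ignores it, because the
	// compiler's stack maps record that this slot holds an integer.
	looksLikeAPointer := uint64(0x0000c00001000000) // invented, heap-ish bit pattern
	fmt.Printf("just data, but pointer-shaped: %#x\n", looksLikeAPointer)
}
```

The odds of such a coincidence in a few kilobytes of stack are small, which is why conservative stack scanning plus precise heap scanning tends to be a reasonable trade.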
Sadly, those benchmarks (at the bottom of the thread) were done with a GCC that already included all the GCC 8.1 bells and whistles - results were unchanged under 8.1.
I can see why this would be beneficial to a small number of people, but I can't imagine many people needing this feature. Isn't Intel Quark no longer supported, or at least no longer a product line that Intel is putting much effort into?
As Google has been paying Ian Taylor to write the GCC Go support since the very early days of Go, I'd say you could consider GCC to also be a "standard" Go compiler.
While I also find the surge of Electron-based applications somewhat amusing, you can't deny that it has made it easier for more developers to build cross-platform applications without having to develop and maintain several codebases in different languages.
I'm working on a side project at the moment, a large part of which is a desktop application. Using Electron allows me to get a cross-platform app out the door pretty quickly. If I had a team of more than one, perhaps I would consider something else, but as long as it's just me, Electron makes sense.
ActiveRecord. Referencing it is a double-edged sword; Diesel is very much not ActiveRecord, so by saying it up front, people tend to get the wrong impression.
That's cool, didn't realise it was the same author.
I understand. The other side of that coin though I guess is that knowing this might give some more confidence in using diesel. ActiveRecord was the first ORM I used, and knowing now that diesel is "from the same stock" (for want of a better phrase) gives some extra confidence in what it can do / where it is going.
Indeed, the flood of Rails people who went over to Rust has definitely given some people pause about the direction it was taking.
I don't see this particular case, someone who spent a significant amount of time coding a core Rails OSS library, being seen as a negative though; quite the opposite. If anything, it would be an excellent measure of the language for that particular use case, assuming any competent developer would seek to work within the constraints of the language rather than shoehorning an existing implementation into something that doesn't fit.
The bigger question is whether the ORM label itself is limiting or can be understood to fit into a broader scope of implementations.
I think people read way too much into the ORM label. Diesel is very much a query builder first, not an ORM. As a random data point, I've been fixing bugs in Rails lately by basically copying over code from Diesel and converting it to Ruby. That code has all been in Arel, not Active Record.
I agree that the "I make Rails" thing is a double-edged sword. I would hope people would see it as an indicator that I have a lot of experience and lessons to learn from, but I think many people see the opposite.
> What's your advice for indie hackers who are just starting out?
> Honestly my single piece of advice would probably be to stop looking for so much advice. Shut the fuck up and go and build something.
I like this. I see a lot of people, and I fall victim to this myself, over-analysing the best way to do X rather than just trying it and learning/adapting as you go. I think there is some value in figuring things out up front, but not at the cost of never taking the plunge.
The little success I have in life is largely due to this exact model. I have home-built so many big, grandiose apps and solved many problems along the way by learning them myself. I didn't make any hits, hell, I didn't even finish half of them, but I solved a lot of problems myself, figuring things out and learning.
Now (for better or worse) I think I'm at the stage where I need to actually improve and deploy them fully. Learn maintenance, learn upkeep, maybe even learn some minor advertising and metrics.
Self-taught experience has been so insanely valuable. Value derived from pain... but still value.
Yeah I think at the start there is a lot of value in just starting, perhaps without even finishing. It's that stage of just getting over the first hurdle and getting into it.
That's fine for a while, but then, as you say, you get to the place where you have to finish. I think this can be almost as big a hurdle as starting, since it can involve a whole host of new challenges, some of which you've mentioned above.
Finishing is also, in my opinion, typically not as exciting. You've likely already solved all the interesting problems and are left with the "lesser" tasks involved in getting it over the line. This is something I've been learning myself over the past year or two.
Nice to hear you improved your technical skills. But your next step is probably not extending those skills further by learning maintenance etc. Your next step is going to market.
Re-read the article: only a tiny part of it explains the technical challenges/mistakes. Most of it revolves around finding the problem people face, presenting them with the right solution, and checking whether people are willing to pay for that solution.
As said in the article:
> Market-specific experience is probably the biggest and most under-valued asset that you can bring to any project. Few people ever talk about it.
Solving the technical problems is the easy part. Knowing which problem to solve, which solution to offer, how to get the right people to notice you, how to get them to give you money. That is the hard part. Start learning that part now.
One of my favorite hobbies is writing. But it's so easy to get stuck into overthinking it, reading about writing, reading how other people write, etc etc etc...
Instead, I've noticed the best feeling comes from staring at a blank page and just letting the mind work things out in real-time. Sometimes it sucks (maybe most of the time), but it seems like there's plenty of times where it also goes really well.
Totally agree. What I've found too is that once you actually start and get /something/ down on the page (or whatever the medium), you find yourself getting into a bit of a productivity flow. It might start off bad, but I quickly slip into getting stuff done once I'm over that initial paralysis.
Yeah! I was surprised by the amount. When I read things about compiler speed now, I look through a different lens, because I have an idea of the sheer amount of work a compiler is doing. Pretty impressive.
> I'm curious how someone made this outlandish claim sound plausible.
From the article:
> Soviet television had, up to that point, been regarded by its audience as conservative in style and content. As a result, a large number of Soviet citizens (one estimate puts the number at 11,250,000 audience members) took the deadpan "interview" at face value, in spite of the absurd claims presented.
I guess this had something to do with it? If people were used to TV being a certain way, and this "interview" was presented in the same way, then it seems reasonable that it would be taken up in the same way as every other show. Nothing (aside from the content) made it stand out from other shows.
Saying that, the claim is crazy, and it does seem strange that it was seemingly accepted as truth. I wonder how a similar stunt could be attempted today, and what the reaction would be...