
Bitdiana Jones. The 16th page of Google, that's dark, humid, and full of snakes. Better bring a torch!


"Everyone, chill"

then proceeds to drop emphasis, F-words and other strong words.

Maybe take a page out of your own book.


I read “everyone chill” to mean “there’s a lot of bad takes here”, not literally “remain calm”. In that context, I think the comment makes sense — they’re illustrating just how wrong the common take is.


I've literally never heard that expression before


I have, now what do we do? Which of us controls the language?


Whoever is able to use it to communicate ideas.


Alas, we are no further. Remember how we got here: you opened with what barely qualifies as an anecdote. Not remotely an idea. I'm trying to be fair.

How about we go with the rock-paper-scissors suggestion?


Rock paper scissors for it


I really don't like this medium for it, when/where? :)


Someone learns something new every day!


The substance of his comment is still well-intentioned though. I agree with him.


Random person writes a completely unsourced negative comment about someone HN dislikes for no reason. HN cheers before launching into a pointless discussion about tipping culture that has already been rehashed a thousand times.

I don’t know if it’s well-intentioned, but it’s peak HN.


Chill because you all are giving me anxiety that I don't need.

Idk how to tell you this, but this is a pretty common pattern for human speech. You see it in plenty of countries, plenty of cultures. Welcome to Earth. Words mean more than the literal dictionary interpretation of them.

Edit:

I only have sass in me tonight.


The workweek just started, you gotta save some sass for the rest of the week too.


I'm a grad student and ABD (all but dissertation). My work week has neither beginning nor end. But don't worry, I've still got plenty in me.


Aren't we all just tired of arguing the same points?


My only experience with Rust has been mostly synchronous, with little to show in terms of async. And I liked Rust. When it compiled, I was damn sure it was going to run. There is comfort in that: "Tell me what to fix". The clippy stuff etc. was great too.

I read the three parts of this website and 'Wowsa'... I'm definitely not going in that direction with Rust. I'll stick to dumb Go code, Swift, or Python if I do async-heavy stuff.

It's hard enough to write good code, I don't see the point of finding walls to smash my head into.

Think about it, if you write a lot of async code, chances are you have a ton of latency, waiting on I/O, disk, network etc. Using Rust for this in the first place isn't the wisest, since performance isn't as important: most of your performance is wasted 'waiting' on things. Besides, Rust wants purity, and I/O is gritty little dirt.

Sorry my comment turned into a bit of a hot take; feel free to disagree. This async stuff doesn't look fun at all.


> My only experience with Rust has been synchronous

It is a shame that the dominance of the "async/await" paradigm has made us think in terms of "synchronous" or "async/await".

> Think about it, if you write a lot of async code, chances are you have a ton of latency, waiting on I/O, disk, network etc

Yes. For almost all code anyone writes, blocking code and threads are perfectly OK.

Asynchronous programming is more trouble, but when dealing with a lot of access to those high-latency resources, asynchronous code really shines.

The trouble is that "async/await" is a really bad fit for Rust. Every time you say `await` you start invisible magic happening. (A state machine starts spinning, I believe, in Rust - I may be mistaken.)

"No invisible magic" was a promise that Rust made to us. What you say is what you mean, and what you mean is what you get.

No more, if you use async/await in Rust.

I really do not understand why people who are comfortable with "invisible magic" are not using a language with a garbage collector - that *really* useful invisible magic.

Asynchronous programming is the bee's knees. It lets you get so much more from your hardware. I learnt to do it implementing telephone switching systems on MS-DOS. We could run seven telephone lines on a 486, with DOS, in (?) about 1991.

Async/await has so poisoned the well in Rust that many Rust people do not understand there is more to asynchronous programming than that.


I notice this as well; there is a false dichotomy of "async/await" or "blocking". I see this in embedded too. I think a lot of Rust embedded programmers learned on Embassy, and to them, not using async means blocking.


Is the alternative the more traditional spawning threads and using channels, or is there another paradigm? That's definitely something I'd be interested in learning more about.


I think they mean that there is more than one asynchronous paradigm. The actor model is one alternative I can think of.


Where an actor has its own thread and communicates with a channel, right?


Not necessarily, a lot of actors can be sharing the same OS thread and be preemptively switched away from when they hit a certain limit of instructions or start an I/O call. Erlang demonstrates this very well because it works there without a glitch 99.99% of the time (and the remaining 0.01% is because of native code that does not yield control).
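For what it's worth, here's the simplest thread-per-actor variant sketched with std threads and channels (all names are hypothetical); schedulers like BEAM's instead multiplex many such actors onto a few OS threads:

  use std::sync::mpsc;
  use std::thread;

  // Messages this toy actor understands.
  enum Msg {
      Add(u64),
      Get(mpsc::Sender<u64>),
  }

  // Spawn an actor that owns its state; the only way in is a message.
  fn spawn_counter() -> mpsc::Sender<Msg> {
      let (tx, rx) = mpsc::channel();
      thread::spawn(move || {
          let mut total = 0u64;
          for msg in rx {
              match msg {
                  Msg::Add(n) => total += n,
                  Msg::Get(reply) => {
                      let _ = reply.send(total);
                  }
              }
          }
      });
      tx
  }

  fn main() {
      let counter = spawn_counter();
      counter.send(Msg::Add(2)).unwrap();
      counter.send(Msg::Add(3)).unwrap();
      let (reply_tx, reply_rx) = mpsc::channel();
      counter.send(Msg::Get(reply_tx)).unwrap();
      assert_eq!(reply_rx.recv().unwrap(), 5);
  }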


There is also readiness-based polling in a loop, without the async/await sugar.


Can you elaborate on this?


Writing programs that perform multiple tasks asynchronously can be done in many ways. It is a broad and important tool in software, and the async/await language feature is only one way to do it. Examples (a sketch of the readiness-polling/event-loop style follows the list):

  - Multiple cores
  - DMA or other dedicated hardware
  - GPU programming
  - Distributed systems (e.g. the CAN network in your car)
  - Threads
  - Interrupts
  - Event loops
  - Coroutines
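To make the readiness-based polling point concrete, here is a minimal sketch using the mio crate (the crate choice and the port are my own assumptions, not something the commenters endorsed); the loop asks the OS which sockets are ready and reacts, with no async/await anywhere:

  use mio::net::TcpListener;
  use mio::{Events, Interest, Poll, Token};

  const SERVER: Token = Token(0);

  fn main() -> std::io::Result<()> {
      let mut poll = Poll::new()?;
      let mut events = Events::with_capacity(64);

      let mut listener = TcpListener::bind("127.0.0.1:9000".parse().unwrap())?;
      poll.registry().register(&mut listener, SERVER, Interest::READABLE)?;

      loop {
          // Block until the OS reports readiness; no async/await involved.
          poll.poll(&mut events, None)?;
          for event in events.iter() {
              if event.token() == SERVER {
                  // Drain pending connections; accept() is non-blocking.
                  while let Ok((conn, addr)) = listener.accept() {
                      println!("accepted {addr}");
                      drop(conn); // real code would register conn here too
                  }
              }
          }
      }
  }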


> Every time you say `await` you start invisible magic happening. (A state machine starts spinning, I believe, in Rust - I may be mistaken.)

It's more that an async function in Rust is compiled completely differently: it's turned into a state machine at that point, with the code between 'awaits' being the transitions. In and of itself, it's not actually particularly difficult to grok (I'd say you have about as much an idea of what the resulting machine code looks like as with an optimized non-async function), the headaches are all in the edges of what the language can currently support when compiling under this model.
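To make that concrete, here is a hand-written sketch of the kind of state machine the compiler conceptually generates for a one-await async fn (illustrative only; the real output also handles pinning across self-references and has one state per await point):

  use std::future::Future;
  use std::pin::Pin;
  use std::task::{Context, Poll};

  // Roughly what `async fn double(fut) -> u32 { fut.await * 2 }` becomes:
  // an enum of states plus a poll() that drives the transitions.
  enum Double<F> {
      Waiting(F),
      Done,
  }

  impl<F: Future<Output = u32> + Unpin> Future for Double<F> {
      type Output = u32;

      fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
          match &mut *self {
              Double::Waiting(f) => match Pin::new(f).poll(cx) {
                  // The code "between awaits" (here: `* 2`) runs on the
                  // transition out of the Waiting state.
                  Poll::Ready(n) => {
                      *self = Double::Done;
                      Poll::Ready(n * 2)
                  }
                  Poll::Pending => Poll::Pending,
              },
              Double::Done => panic!("polled after completion"),
          }
      }
  }

  fn main() {
      // Any executor's block_on would drive this to 42.
      let _fut = Double::Waiting(std::future::ready(21u32));
  }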


> "No invisible magic" was a promise that Rust made to us.

Honest question, where did you get that promise from?

The 1.0 release didn't really emphasize that: https://blog.rust-lang.org/2015/05/15/Rust-1.0.html

The current rhetoric is more about empowering more people to have confidence in systems programming: https://doc.rust-lang.org/book/foreword.html

Some of graydon's ideas starting almost a decade before 1.0 might have included that https://github.com/graydon/rust-prehistory/blob/df8cc964772b...

but his recent posts on what he would have done differently if he was BDFL include a bunch of stuff that's arguably more magical, not less: https://graydon2.dreamwidth.org/307291.html


"No invisible magic" sounds more like something I've read about Zig; maybe they were confusing it with that?


However, if you need a state machine, async/await is a super-elegant way to express it. No other language provides as nice a way to do it.

I believe the best you can do in other languages is using continuations as the state.


C#'s async transforms into a state machine too.


State machines are the easy part.


Performance is not only wall clock time. With high latency, I/O bound tasks, the cost will be often determined by how much memory you need. And in the cloud, you can’t pay for memory alone. The more memory you need, the more vcores you have to reserve. You might end up in a situation your cpus are mostly idling, but you can’t use fewer cores because of RAM needed to keep the state of thousands of concurrent sessions. In this case Rust async can bring a tremendous advantage over other languages, particularly Java and Go.


> you can’t use fewer cores because of RAM needed to keep the state of thousands of concurrent sessions. In this case Rust async can bring a tremendous advantage over other languages, particularly Java and Go.

Can you elaborate on that? What about green threads?


Green threads can use more memory than async if you have a lot of I/O operations in flight since the stack is not precisely sized at pause points. Similarly, switching in new stacks is a bit more expensive than continuing a paused async task. Implementation details matter of course.


Right, that's interesting. Still curious to know if that is "slightly better" or if that qualifies for the "tremendous advantage" mentioned above!


It’s hard to say. If you want to use green threads, you may want to look at the may crate [1]. The reason Rust prioritized stackless async, just like C++ did, is that it fits systems programming needs better (i.e. it’s syntactic sugar for a state machine you could hand-implement) while not preventing things like stackful coroutines.

If Rust manages to solve the coloring problem of async (e.g. by adopting effect systems [2] or alternatives), then syntactic sugar for both stackful and stackless coroutines could conceivably exist within the language (perhaps leaving out stackful on no_std).

The reason you don’t see both stackless and stackful coroutines in a single language like Rust is that the coloring problem is made 50% worse.

[1] https://crates.io/crates/may

[2] https://blog.yoshuawuyts.com/extending-rusts-effect-system/


Note that May has soundness issues that the authors handwave away. You can get UB in safe code while using it.


You mean the TLS issue called out or something else?

I wasn't trying to recommend may specifically of course. Or are you saying that stackful coroutines must have soundness issues due to missing language features to make it safe?


The TLS thing, yeah.

I am unsure if it's inherent to stackful coroutines or not, it's been a minute since I've dug into that.


Yeah, for TLS to work safely I suspect the only way would be language support which knows to mediate TLS through a "context pointer" so that the green threads could restore the context when resuming. In C++ the story is even worse because someone could take the address of a TLS variable and use it elsewhere. I think in general it's very, very tricky to mix TLS with stackful coroutines that can be scheduled on arbitrary threads; languages like Go pick stackful coroutines and drop the ability to do TLS.

To be fair though, I think people generally just avoid TLS when running with green thread systems.


Having written a lot of asynchronous code in python and in rust, I’d take Rust any day. If it compiles, it works.

I also don’t think it’s hard to reason about in practice. Tutorials tend to get much deeper into the weeds than you typically need to go.


Maybe Rust has improved in the last 13-14 months, but the last time I needed to write a lot of async code I ended up with a browser session with at least 12 crates open on docs.rs, each explaining only a small part of the picture. It was absolutely terrible to track down what the problem was.

In the end, very helpful (and hardcore -- like the main author of Tokio) people unblocked me. I am not sure I was left very enlightened though; but I likely didn't stay for long enough for the whole thing to stick firmly into my memory. It's likely that.


It has improved a lot, yes. impl Trait in return position in traits makes writing async traits much, much easier, as you can avoid a lot of boxing and lifetime issues when you don't need a trait object (which is what the async_trait crate does under the hood).
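For example (hypothetical trait; async fn in traits stabilized in Rust 1.75 and desugars to impl Future in return position), something like this now compiles without the async_trait macro or any boxing:

  trait Storage {
      async fn load(&self, key: &str) -> Option<Vec<u8>>;
  }

  struct MemStore;

  impl Storage for MemStore {
      async fn load(&self, key: &str) -> Option<Vec<u8>> {
          // A real implementation would await some I/O here.
          if key == "hello" { Some(b"world".to_vec()) } else { None }
      }
  }

  fn main() {
      let store = MemStore;
      // Calling load() just builds an impl Future; await it in async code.
      let _fut = store.load("hello");
  }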

I also think you've really got to be willing to be pragmatic when writing async code. If you like to do functional chains, you've got to be willing to let go of that and go for simple imperative code with match statements.

Where I find it gets complicated is dealing with streams and stuff, but for most application authoring use-cases, you can just await stuff that other people have written, or throw it into a `join_all` or whatever.


> If it compiles, it works.

This slogan sucks. If it compiles, it type checks. Yes, Rust has a more sophisticated type system than Python with annotations, so it catches more type errors.

No, the type system cannot prevent logic bugs or design bugs. Stop pretending it does.


Well, for stuff like "does my async code actually run," or "will the program crash," the slogan is generally true. Or, versus C++, "are all of my references valid, or will my program segfault."

Obviously a type system cannot catch all your logic errors, but you can write code such that more of the logic is encapsulated in types, which _does_ help to catch a lot of logic errors.

There's a strong qualitative difference working with the Rust compiler versus working with Python or C++. Do you have a better suggestion for how to express that?


I made the suggestion! "if it compiles, it type checks". Rust's type system is much more sophisticated than that of C++ and Python, and that is the difference you're gesturing at.

Also, no, the Rust compiler will happily pass code which will crash your program; all it takes is an out-of-bounds array access. That's the kind of puffery many of us are tired of. The "if it compiles, it works" slogan is, bluntly, wrong.


I mean, "if it compiles it typechecks" is kind of tautological, and not particularly effective as a saying or slogan. I feel like you miss the point of a phrase like that, which, like any aphorism, slogan, saying, or rule of thumb, is not about representing reality with 100% accuracy but about conveying an idea.

The only two languages I’ve worked with that gave me the feeling that I could generally trust a compiling program to be approximately correct are Rust and Haskell. That difference relative to other languages is meaningful enough in practice that it seems to me to be worth a slogan. I believe it’s meant to be more of a “works, relative to what you might expect from other languages” kind of thing versus, “is a completely perfect program.”

And, if you care about maximizing the “if it compiles it works” feeling, it’s possible to use .get() for array access, to put as much logic in the type system as is feasible, etc. This is probably more idiomatic and is generally how I write code, so it does often feel that way to me, regardless of whether it is completely, objectively, literally true.
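A quick sketch of the indexing point (values are made up):

  fn main() {
      let xs = [10, 20, 30];
      let i = 7;

      // `xs[i]` compiles fine but panics at runtime.
      // `.get()` moves the failure into the type system instead:
      match xs.get(i) {
          Some(v) => println!("found {v}"),
          None => println!("index {i} is out of bounds, handled explicitly"),
      }
  }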


> if it compiles it typechecks is kind of tautological

It's not tautological at all, because the type system in Rust and Haskell is not a trivial condition of the language.

> not particularly effective as a saying or slogan

Neither is "if it compiles it runs", rather less so in fact, everyone is sick of hearing it, and rolls their eyes so hard it's actually audible.

Every one of these 764 bugs compiled and passed type checks:

https://github.com/tokio-rs/tokio/labels/C-bug

Not picking on tokio in particular, mind you, finding and fixing bugs is a sign of quality in a library or program.

> I believe it’s meant to be more of a “works, relative to what you might expect from other languages” kind of thing versus, “is a completely perfect program.”

Which is why I describe it as meaningless puffery. What you're saying here is that you know full well it isn't true, but want to keep saying it anyway. My reply is find a way to express yourself which is true, rather than false. I bet you can figure one out.


> the type system cannot prevent logic bugs or design bugs

^ your words, that statement is false. the type system _can_ prevent logic bugs or design bugs, exhaustive pattern matching is an obvious example.

I bet you can find a way to express yourself which is true, e.g. "the type system cannot prevent _all_ logic bugs or design bugs"
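A quick sketch of the exhaustiveness point (a made-up enum): with no catch-all arm, adding a variant is a compile error until every match handles it, which is precisely a class of logic bug the type system rules out:

  enum PaymentState {
      Pending,
      Settled,
      Refunded, // newly added variant
  }

  fn describe(state: PaymentState) -> &'static str {
      // No `_` arm: forgetting `Refunded` here would fail to compile.
      match state {
          PaymentState::Pending => "waiting",
          PaymentState::Settled => "done",
          PaymentState::Refunded => "money returned",
      }
  }

  fn main() {
      println!("{}", describe(PaymentState::Refunded));
  }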


asyncio is its own special kind of hell. I understand there are significantly better async libraries for Python, but being built in, it is what you end up reaching for.


What is bad about asyncio? Genuinely interested.


Off the top of my head: lack of structured concurrency, error-prone APIs, atrocious stack traces, and (this might be a skill issue on my part) confusing task cancellation semantics.


> Think about it, if you write a lot of async code, chances are you have a ton of latency, waiting on I/O, disk, network etc. Using Rust for this in the first place isn't the wisest, since performance isn't as important: most of your performance is wasted 'waiting' on things. Besides, Rust wants purity, and I/O is gritty little dirt.

But isn't most code going to perform some I/O at some point? Whether it's calling an API, writing to disk, or writing to a DB?


I am mostly in agreement -- I moved to Golang for not-super-complex async programs because I value my time and day-to-day productivity more than the super strictness of Rust (which I love, and I wish more languages had it, though in a more approachable manner).

Rust was a legendary pain to untangle when learning to do async, though as I admitted in a comment down-thread this was also because I didn't stay for long enough for everything to cement itself in my head. It was still an absolute hell to get into. I needed help from Tokio's author to have some pieces of code even compile because I couldn't for the life of me understand why they didn't.

...BUT, with that being said, Rust has a much smaller memory footprint, and that is an actual and measurable advantage in cloud deployments. It can be painful to make it compile and run, but then it'll give you your money's worth and then some. So it's worth it, even if only for that (and "that" is a lot!), if you are optimizing for those values. I plan to do that in the future. In the meantime Golang is an amazing compromise between productivity and machine performance.


Performance isn't wasted. More waiting and less CPU time also means longer battery life / energy efficiency.


That is nice of you to offer, but neither your profile nor this message provides any way of contacting you.


True, but googling soufron brings up a popular lawyer with that last name, whose social media and website use the same handle, and who even has a Wikipedia page. So I'm assuming he knows he's easy to find.


Sounds good, I should rename myself to JackNicholson so you can google my authenticity and take it at face value.


Unlike linking to Nicholson's socials from your HN account, which would totally count as proof...

We simply aren't discussing how to prove his identity, just how to contact him (or whoever he could be impersonating) when he didn't provide contact info.


So make sure you introduce this in your company; become the subject expert in it, invite everyone to very cool sessions demonstrating how cool it is, and secure your job until the next rewrite!

Looking at the examples, a fifth of it looked neat; that part would probably be best submitted as a proposal for |> to the TypeScript committee rather than written up as yet another alienating language.

This is a solution looking for a problem to solve. It introduces an alternative and does so as a superset, which is not only dangerous to existing code bases but also silly.


> It introduces an alternative and does so as a superset; which is not only dangerous to existing code bases...

How so?


An alternative way to do something in a code base, for no reason other than style, is a recipe for technical debt. It builds up a larger cognitive load on the developers navigating and working in the code base, and increases the chance of bugs via half-refactors or half-considerations.

As for the superset comment, I meant that if you introduce a completely different language, you probably have a valid reason; e.g. it does something different. Adding to an existing language with a superset without any need for it is also dangerous. It's not like it's a DSL at a higher level helping people get repetitive or scriptable things done faster. It's only an alternative, leading people down a rabbit hole of second-guessing, with a lot more to remember.


I see. I misinterpreted your meaning of the word "existing" - I thought you were implying that, by virtue of the introduction of the alternative, any existing codebases would _immediately and instantly_ be worsened, whereas it seems that you're saying that continued (future) development _on_ existing codebases would deteriorate due to those alternatives. That's a fair criticism! I don't fall on the same point in the "alternatives provide better-focused tools <-> alternatives create cognitive load" spectrum as you do, but I see your point.

> It's not like it's a DSL at a higher level helping people get repetitive or scriptable things done faster.

I mean, I have to disagree with you there, because several of the comparisons on the main site page show >50% line-count reduction, and many of those that don't have (subjectivity-alert!) representations that more-clearly match my mental model of how the code works - so not only would I be executing fewer keystrokes to encode my idea (not the main bottleneck, but not irrelevant!), but also I'd be carrying out fewer mental translations to get from idea to code.


This is interesting. I wonder if this will get into hot water with Nintendo. They're not the gentle kind when it comes to IP.

I'm guessing they'll wait for them to start shipping products before they invest in the legal showstoppers.


As long as it is not called Nintendo 64, there is no way Nintendo can do anything.

The cartridge slot and the hardware architecture cannot be protected under copyright and any patent filed at the time would have expired anyway.


They already sell a Game Boy-like and a SNES-like.


This is great for the folks running serverless compute! You get to start a process and let it hang until your credit card is maxed out. /s


That was before DBOS -- the serverless platform that bills you only for CPU time, not wall clock time ;) see https://www.dbos.dev/blog/aws-lambda-hidden-wait-costs


I don't see how this pricing (or product in general) is any better than cloudflare workers.

To be clear, I am not trying to be mean, I'm just curious to hear why I would pick this over cf.


So... do they not charge for sitting idle and consuming memory?


Finally I can write code faster than LLM! /s


Now we just gotta combine Hacker Typer with GPT streaming a token of code on every key press.


Pure genius. Finally, an opportunity to parlay my effusive laziness into a promising career as a “coder!”

BTW I hate that term with a passion.

As someone who has crafted software for the last 45 years, I find it’s like implying that learning the alphabet makes you a Tom Robbins.

Writing software is about understanding the problem and the information involved in such a way that you can craft algorithms and data structures to efficiently and reliably solve those problems. Putting that into code is the smallest, least important part of that skill.

That’s why software engineering is mostly language agnostic. Sure, there are paradigms that fundamentally change the way the problem space is handled, but at least within a paradigm, languages are pretty much interchangeable. Some just have fewer hidden footguns.

Interface design is another thing altogether, and is either fruitless drudgery or fine art, depending on your predisposition.

There is definitely room for a subclass of brilliant interface designers who do not need deep data-manipulation intuition... But they do need to have a deep understanding of human nature, aesthetic principles, color theory, some understanding of eye mechanics and functional / perception limitations, accessibility/disability engineering, and a good dose of intuition and imagination.

In short, nothing about producing quality software is something you gain by writing code snippets to solve simple problems. But you do have to learn the alphabet to write, and “coding” is still a prerequisite to learning to write software. It just shouldn’t be sold to people as if it were some kind of degree lol.

Give me some kid who’s been building things with Arduino in her basement for a few years over a coding bootcamp graduate any day. I can teach her how to write good code. I’ll just pull out more of my non-existent hair trying to teach the “coder” to actually solve problems, unless I get lucky.


The fact that someone entered software engineering through either building things with Arduino or through a coding boot-camp does not indicate their potential when it comes to software engineering.

I've seen people who are really great at combining "recipes" off the web for anything (including hobby electronics and programming), but who never really get to the bottom of things or develop a clear understanding of how things work and tie together.

I imagine you'd only get more out of that kid toying with Arduino because of persistence ("few years"), and not because of the type of things they did, but I ultimately believe you'll have similar chances of developing a great software engineer out of any of them in general.


You’re right on about the time part, that was definitely a big part of what I meant.

You can start anywhere, and coding boot camps are useful, just as following YouTube tutorials is. But until you learn to identify, quantify, and characterise the problem and data space, you aren’t really doing the job of software engineering.

My experience is that many people are deceived into thinking that language fluency is the core skill of software engineering, and coding bootcamps tend to foster that misrepresentation.

That doesn’t make them bad. It just means that often, thrashing around with no real knowledge of the tools and solving a problem with the tiny set of syntax you can get to work is much, much more educational towards the goal of becoming a software engineer than getting a good grasp of the language as it pertains to solving toy problems that require little effort to characterise.

Anyone that is willing to hack around a problem long enough that the solution emerges is doing the real job.

It doesn’t matter where they start, or how much they “know” about “coding”. The real task is to fully characterise the problem and data structures, and sometimes that emerges from the horrific beast that rises from the tangled mess of a hacking session, a malformed but complete caricature of the actual solution.

Then, all you have left to do is to code the actual solution, the structure of the algorithm and data has been laid bare for you to see by your efforts.

That, I believe, is the essence of software engineering.


No disagreement there, but one other thing I'd throw in for learning coding is that it increases confidence for someone embarking on this journey, and sometimes that's all the motivation they need to dive deep and persist.

I've obviously seen people who misjudge this (they can code, hire them), but ultimately, developing someone requires amenable minds from both mentor and mentee, on top of talent and persistence.


You’ve got a good point. I honestly hadn’t considered the self confidence angle. I’ve always just been stupid enough to expect to be good at whatever I am willing to put time in to learn. Sometimes, it doesn’t work out, but I can usually write it off to “lost interest” even if I suspect intrinsic incompetence lol. I mean, demonstrated intrinsic incompetence is a good reason to lose interest, right?


To the untrained eye: "Coding is to a software engineer, as cutting is to a heart surgeon."


That is a fantastic analogy.


Genius, here comes the funding!


Maybe to highlight the fact that it already made the homepage.


Oh okay, thanks!

