@JohnAtQNX can you remove the "everybody needs an account" restriction from your website downloads, and also remove the license mechanism from qcc? That would go a long way towards encouraging hobbyists to try it out.
Your money comes from high-volume licensing deals, where you presumably get a per-unit license fee. That revenue stream is jeopardized by pissing off the CI/CD teams that have to support QNX product lines.
Hiya! We've discussed how to make that work. It's good to hear the same thing from an outside voice. I'll bring it back to the table to inch it along. Thanks!
I will echo this. Back around 2000 I recall getting a QNX live CD with a magazine. That was the ultimate in low friction, I didn't even know QNX was a thing but a copy of it landed in my lap so I gave it a shot.
At the time it didn't support my sound card or modem so it was dead in the water for me but it was an interesting experience that earned the platform a near mythical spot in my mind for decades following.
Keep friction low if you want people to try your thing.
This is in the same category as VPN vendors that require a login to download their VPN client.
When you're a consultant or a random sysops person in a huge enterprise, that's infuriating. There is zero benefit to the vendor doing this, nobody can ever benefit from "pirating" a free component that can't be used without another paid component.
I've permanently blacklisted vendors for pulling this kind of thing.
Most recently Crowdstrike: I was up all weekend doing emergency server rollbacks and they had the nerve to publish critical information behind a login!
Tip: If your company ever uses the word "you" in any communication of any kind, then that company has made a serious error of understanding. There is no "you" at an enterprise customer! There's teams of many people each, and not all of them are even aware of who has an account and for what.
Or at the very least, allow people to log in using other services, like Google, et al, that plenty of other sites are using.
I'm loath to keep adding new logins every time something new comes along. Same with news feeds, where nearly half of all news sites want you to log in, even if it is "free." At least in that case, it's easy to just go elsewhere for the same story.
Yes, for sure. FPGAs are a great way to move around and crunch large amounts of digitized data. But there are other times where that doesn't help much because you're doing something like a submillimeter analog front end or something.
It's true that most FPGAs have limited built-in analog capabilities. But good DACs and ADCs aren't too expensive, and an FPGA can control them with exceptional precision. Does your process have some kind of input / output that can't be handled that way?
Dedicated DACs/ADCs will almost always offer better performance than the ones you'd find on a microcontroller or even an ASIC.
You don't even need DACs or ADCs, you just need an SRAM cell leaking current through an electrode. The process is entirely separate from that. By analog, I don't mean the signal, I mean the chip is interacting with the world physically. I need the electrodes physically touching chemicals. Those chemicals are incompatible with the aluminum or other normal metals on those connections - you pretty much need platinum or the metal from the chip will screw up the chemical reaction.
You also need a large amount of input/output - a good start on a chip would be about 1,000 to 10,000 electrodes. I think it is going to be difficult to put that many on an FPGA.
I use ChatGPT for coding / API questions pretty frequently. It's bad at writing code with any kind of non-trivial design complexity.
There have been a bunch of times where I've asked it to write me a snippet of code, and it cheerfully gave me back something that doesn't work for one reason or another. Hallucinated methods are common. Then I ask it to check its code, and it'll find the error and give me back code with a different error. I'll repeat the process a few times before it eventually gets back to code that resembles its first attempt. Then I'll give up and write it myself.
As an example of a task that it failed to do: I asked it to write me an example Python function that runs a subprocess, prints its stdout transparently (so that I can use it for running interactive applications), but also records the process's stdout so that I can use it later. I wanted something that used non-blocking I/O methods, so that I didn't have to explicitly poll every N milliseconds or something.
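For what it's worth, that task is doable with the stdlib `selectors` module. Here's a rough sketch (function name is mine; stderr handling and error paths left out):

```python
import os
import selectors
import subprocess
import sys

def run_and_capture(cmd):
    """Run cmd, echoing its stdout live while also recording it.

    Uses selectors to block until data is actually available,
    rather than polling on a fixed timer.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    sel = selectors.DefaultSelector()
    sel.register(proc.stdout, selectors.EVENT_READ)
    captured = []
    while True:
        for key, _ in sel.select():
            chunk = os.read(key.fileobj.fileno(), 4096)
            if not chunk:  # EOF: child closed its stdout
                sel.unregister(key.fileobj)
                sel.close()
                proc.wait()
                return b"".join(captured)
            # Echo transparently and keep a copy for later.
            sys.stdout.write(chunk.decode(errors="replace"))
            sys.stdout.flush()
            captured.append(chunk)
```

For a truly interactive child process you'd also want a pty (the `pty` module) so the child doesn't switch to block buffering, but the select-on-readable structure is the non-polling part.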
Honestly I find that when GPT starts to lose the plot it's a good time to refactor and then keep on moving. "Break this into separate headers or modules and give me some YAML like markup with function names, return type, etc for each file." Or just use stubs instead of dumping every line of code in.
In my experience taking both, Greyhounds are significantly faster and far less comfortable. Maybe it's different in other places in the country, but this was my experience in coastal California:
Greyhound buses run more-or-less on time. The routes are direct, they're predictable, and they will take you where you want to go. They also don't have a dining car, spacious bathrooms, room to walk around and stretch your legs, etc. If somebody has diarrhea, everybody on that bus is going to know it and smell it.
Amtrak is more comfortable in every way. It's also usually late, and subject to delays because of low rail priority. You can't really count on it for anything other than "it'll leave eventually" and "it'll get there eventually". While on the train, it's really quite pleasant - as long as you don't care about arriving when the schedule said you would.
I had essentially the same experience–I did SJC to SBA a few times a year when I was in college (which for the non-Californians here, is a destination that does not really have an airport you would want to use). I took Greyhound the first few times and then switched to Amtrak, even though it took 2-3 hours longer. I'll take that every time just by being able to get up and walk around. And the views coming in around Point Conception as the sun sets are priceless :)
I would love to but it was (is?) quite expensive to fly out of. I think if you book far enough in advance the prices are within a factor of two or so but I generally didn't bother. The one time I did use it was when Google flew me out for an onsite on two day's notice–so I can confirm that it is an excellent airport, with security that takes all of two minutes and a really pretty 45 minute flight to SFO–but the final bill for that roundtrip was something like $400. I remember the recruiter being surprised at the cost.
I've attempted to take the Greyhound four times. Twice they gave me the wrong address to take it from. Once from Irvine, where I was told to go to a new transit center that wasn't open yet. The other time from San Luis Obispo, which has no actual transit center, although it has a train station, and county buses leave from city hall. I was told the Greyhound left from a Texaco station right off the highway. I made it out of Irvine the same day (hitchhiked), but I was a day late out of San Luis Obispo (took the train), and had to reschedule a connecting flight.
I can easily believe that trains are more spacious and the seating is more insulated from odors.
But my experience riding trains is that they sway/shake obnoxiously while going around curves. Whereas buses like Megabus are a very smooth ride the entire way.
> room to walk around and stretch your legs
My experience with buses is that there are somewhat frequent stops where people are encouraged to walk and use the bathroom. Which isn't a complete fix, but it does serve to substantially mitigate those problems.
In my limited experience with Greyhound, the buses are notoriously late. I’ve taken it only twice. The first time it was around an hour late and the second it was running about 5-6 hours late.
Tangential, but does anybody else get real npm vibes from the rust ecosystem?
Something about the “every productive project depends on this one external package” situation really makes me uneasy. And there are language features like async that can’t even really be used without going to crates.io for a bunch of stuff that really ought to be in the stdlib.
withoutboats mentioned trying to get something like https://github.com/zesterer/pollster into the standard library. I think that's a great idea. They don't want to put an executor into the standard library because that would "pick a winner" before the ideas are settled. But pollster allows async crates to be useful to sync code, is obviously not a "winner" and its inclusion in the standard library would force crates to be properly executor-agnostic.
The problem with this is that pollster doesn't, by itself, provide a signalling mechanism by which bottom-level futures (such as OS-provided IO primitives) can signal the executor. It isn't a "light-weight replacement" for tokio; tokio does a lot of stuff that pollster simply can't, because tokio is a reactor + a runtime, whereas pollster is just a runtime. [This page](https://rust-lang.github.io/async-book/08_ecosystem/00_chapt...) gives some nice details about the difference, and hopefully some sense of the sheer difficulty that comes with trying to integrate a default reactor into `std` without it falling well short when it comes to extensibility.
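The "just a runtime" half really is tiny, which is part of why putting it in std seems plausible. A pollster-style `block_on` can be sketched with nothing but std (this is an illustrative reimplementation, not pollster's actual source):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A Waker that unparks the thread blocked inside `block_on`.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Drive a single future to completion on the current thread,
// parking between polls until something calls the waker.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    println!("{}", block_on(async { 1 + 1 }));
}
```

Note what's missing: nothing here knows how to wait on a socket or a timer. The waker only gets called if some *other* source (a reactor thread, an OS completion callback) calls it, which is exactly the reactor half that std would have to pick a design for.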
npm, being one of the most successful package management systems in history, was an express inspiration for Cargo, sure. It’s also not without flaw, and so Cargo and npm do differ in some key ways.
Software engineers love to talk about how code re-use is good, and keeping code small and simple is good, and then somehow get upset when a lot of modular, small, reusable code is produced and widely shared.
I think that's where the issue lies. It's either so small it would take 5 seconds to actually type, or it is neither modular nor small... Very rarely is it one of those things, never mind both of them at the same time.
And sadly it shows, given the amount of crates some projects depend on, plus having to wait for the same crate to be re-compiled multiple times, due to how it is referenced across the whole dependency graph.
That's true, but grandparent's advice is a good way to pick standardized parts. If there are 4 versions of a switch with identical footprint on digikey, you can be pretty sure that you'll also be able to source it from Taobao or LCSC or something.
I use pexpect frequently for automating serial-port communications. It’s very useful for implementing test/flashing automation that needs some kind of a serial-port step (such as asking u-boot to run a few commands).
I do something very similar; how do you deal with random serial port data loss (dropped bytes)? Maybe it is specific to my environment, but I get mangled commands about 1% of the time, which really sucks for automated workflows. I mostly countered this by writing commands to a tmp file, verifying md5sum and then executing the file on the device side.
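That checksum round-trip is easy to wrap in a retry loop. A sketch, where `send` and `query_md5` are stand-ins for whatever transport calls you actually use (e.g. pexpect sendline/expect against the device):

```python
import hashlib

def send_with_checksum(payload: bytes, send, query_md5, retries: int = 3) -> bool:
    """Send payload, then ask the device for the md5 of what it received.

    Retries on mismatch; returns True once the device-side hash matches
    the locally computed one, False if every attempt was corrupted.
    """
    expected = hashlib.md5(payload).hexdigest()
    for _ in range(retries):
        send(payload)
        if query_md5() == expected:
            return True
    return False
```

The nice property is that a flipped byte anywhere in the transfer just costs you a retry instead of silently executing a mangled command.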
Serial is slow but typically very reliable. If you’re seeing random dropped bytes, I’d take a look at your wiring and PC interface UART which might be low quality. FTDI UARTs in particular are often counterfeit (the real ones are good).
Make is designed to take a bunch of little shell scripts, give each one an arbitrary name (which can be the output file if you want, but doesn't have to be), and run them. Dependencies are run first, if your rule declares any. Files can satisfy dependencies unless you tell Make (via .PHONY) that they don't.
It's really not different from a shell script with a bunch of functions that you can call by name, except that Make has already provided the scaffolding for you (including dependency-awareness, tree walking, parallel execution, etc)
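A minimal sketch of that "named shell scripts" style (target and script names are made up):

```make
# "deps" runs before "build"; neither name is a real file,
# so .PHONY tells Make not to look for them on disk.
build: deps
	./compile.sh

deps:
	./fetch_deps.sh

.PHONY: build deps
```

Running `make build` walks the dependency tree, runs `deps` first, and with `make -j` independent targets run in parallel.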