I honestly don't think anyone can remember bash array syntax after a two-week break. It's the kind of arcane nonsense that LLMs are perfect for. The only downside is that if the fancy autocomplete model messes it up, we're gonna be in bad shape when Steve retires, because half the internet will be an ouroboros of AI-generated garbage.
Is there a difference between the global setattr(obj, name, value) and object.__setattr__(obj, name, value)? I've seen setattr() in the wild all over, but I've never encountered the dunder one.
Note that `object` here is not a placeholder variable but actually refers to the global object type (the base class of pretty much every other type in Python). It lets you bypass the class's own __setattr__ and set the attribute regardless (the setattr() builtin can't do that, because it just routes through the class's __setattr__):
    In [1]: from dataclasses import dataclass

    In [2]: @dataclass(frozen=True)
       ...: class Foo:
       ...:     a: int
       ...:

    In [3]: foo = Foo(5)

    In [4]: foo.a = 10
    FrozenInstanceError: cannot assign to field 'a'

    In [5]: setattr(foo, "a", 10)
    FrozenInstanceError: cannot assign to field 'a'

    In [6]: object.__setattr__(foo, "a", 10)

    In [7]: foo.a
    Out[7]: 10
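Incidentally, this escape hatch is also how frozen dataclasses set their own fields: the generated __init__ goes through object.__setattr__ internally. A minimal sketch of the one common legitimate use, setting a derived field from __post_init__ (the class and field names here are made up for illustration):

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Point:
        x: float
        y: float
        norm: float = field(init=False)  # derived, not passed to __init__

        def __post_init__(self):
            # A plain `self.norm = ...` would raise FrozenInstanceError here
            # too, so we use the same bypass the generated __init__ uses.
            object.__setattr__(self, "norm", (self.x**2 + self.y**2) ** 0.5)

    print(Point(3.0, 4.0).norm)  # 5.0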
Have you tried using ChatGPT etc. as a starting point when you're unfamiliar with something? That's where it really excels for me; I can go crazy fast from 0 to ~30 (if we call 60 MVP). For example, the other day I was trying to stream some PCM audio using WebAudio, and it spit out a mostly functional prototype for me within a few minutes. For me to read through MDN and get to that point would've taken an hour or two, and using the crappy prototype as a starting point to read up on WebAudio let me get to an MVP in ~15 minutes. I rarely touch frontend web code, so for me these tools are super helpful.
On the other hand, I find it just wastes my time on more typical tasks like implementing business logic in a familiar language, because it makes up stdlib APIs too often.
This is about the only use case I've found it helpful for: saving me time in research, not in coding.

I needed to compare compression ratios of a certain text in a given language, and it actually came up with something nice and almost workable. It didn't compile, but I forget why now; I just remember it needed a small tweak. That saved me having to track down the libraries, their APIs, etc. (a sketch of that kind of comparison is below).

However, when it comes to actually writing data structures or logic, I find it quicker to just do it myself than to type out what I want and then double-check its work.
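For reference, the original experiment was in another language, but a stdlib-only Python version of the same idea might look like this (the input file name is hypothetical):

    # Compare how well different stdlib codecs compress the same text.
    import bz2, lzma, zlib

    def ratios(text: str) -> dict[str, float]:
        raw = text.encode("utf-8")
        compressors = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}
        # ratio = compressed size / original size, so lower is better
        return {name: len(fn(raw)) / len(raw) for name, fn in compressors.items()}

    sample = open("sample.txt", encoding="utf-8").read()  # hypothetical input
    for name, r in sorted(ratios(sample).items(), key=lambda kv: kv[1]):
        print(f"{name}: {r:.3f}")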
Coil whine (or capacitor whine) from the GPU rendering at an excessively high frame rate. The easiest fix would be to use the NVIDIA control panel to add an FPS cap for the browser (or globally), at something like 2x your monitor's max refresh rate. It's pretty common with any workload past roughly 600 FPS.
I think it just predicts 2 branches per cycle instead of 1, so it can evaluate the result of branch n+2 ahead of time instead of only n+1 (typical branch prediction). How this works without wrecking the L1 cache, I'm not sure. It seems like looking ahead past n+1 would make cache evictions much more likely, so maybe I'm missing something here.
> Zen 5 can look farther forward in the instruction stream beyond the 2nd taken branch and as a result Zen 5 can have 3 prediction windows where all 3 windows are useful in producing instructions for decoding.
> It seems like the lookahead past n+1 would make cache evictions much more likely, so maybe I'm missing something here.
The frontend is already predicting dozens of branches ahead of what the backend can actually confirm. Looking one extra branch ahead doesn't really hurt.

Also, modern TAGE branch predictors are scarily accurate, well above 99% on most code (including hard-to-predict indirect jumps). Besides, the majority of branch prediction targets are already in the L1 cache; it only predicts them because it saw them recently.

The branch predictor in Apple's M1 actually takes advantage of the latter fact. It doesn't predict what address to fetch next; it predicts which cacheline in L1 holds the target. So you only ever get branch predictions for targets that are in L1.
I didn't even notice that advantage. I was just thinking about how it minimised each branch target entry to about 16 bits.
I suspect Apple are also using it as a way predictor. If the BTB points directly to the correct cacheline and each cacheline points to the next way, then the only time you need to do a search is on a branch mispredict.
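To make the cacheline-pointer idea concrete, here's a toy Python model of such a BTB (entirely hypothetical geometry and names, with dicts standing in for what hardware does with parallel lookup arrays):

    # Toy model: BTB entries store an (L1I set, way) slot instead of a full
    # 64-bit target address, so an entry is ~16 bits and only targets that
    # are currently resident in L1 can be predicted at all.
    NUM_SETS, NUM_WAYS, LINE = 256, 8, 64  # hypothetical L1I geometry

    btb: dict[int, tuple[int, int]] = {}  # branch PC -> (set, way)
    l1i: dict[tuple[int, int], int] = {}  # (set, way) -> cached line base addr

    def on_branch_resolved(pc: int, target: int, way: int) -> None:
        # Once the real target is known, record which L1I slot holds it.
        s = (target // LINE) % NUM_SETS
        l1i[(s, way)] = target - (target % LINE)
        btb[pc] = (s, way)  # 8 set bits + 3 way bits, not a 64-bit address

    def predict(pc: int) -> int | None:
        slot = btb.get(pc)
        if slot is None or slot not in l1i:
            return None  # no prediction: target isn't (known to be) in L1
        return l1i[slot]  # fetch can start from this line immediately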
What do you mean by "relatively universal"? This is Cuda only [0] with a promise of a rocm backend eventually. There's only one project I'm aware of that seriously tries to address the Cuda issue in ml [1].
What do you mean by "for anything serious it isn't open source"? I didn't see any red flags in the apache variant of timescale, just constant pleading to try their hosted option.
And as I understand that license, you are allowed to use Timescale for anything that doesn’t involve offering Timescale itself as a service. If you were using Timescale to process lots of time series transactions in your backend, it doesn’t seem to me like that would break the license.
(Which is to say that if, like Tembo, you’re offering Postgres as a service you do indeed have a problem. But for other use, should be fine)
The tricky thing with these licenses (BSL, SSPL, etc.) is that you can use them freely for internal stuff, but if you later make your product public (assuming it uses, e.g., TimescaleDB), things can get muddy. Everyone wants the flexibility to either open-source or commercialize a successful internal product in the future.
The problem is that, even if your app is not a mere frontend for TimescaleDB/Mongo/Redis, you can get sued, and you'll have to spend unnecessary time and money proving things in court. No one wants this, especially a startup owner whose money and time are tight. Also, even if your startup/company uses some of these techs, potential company buyers will be very wary of the purchase if they know they'll have to deal with this later.
I would assume TimescaleDB only sues if you have money, in which case you can also afford a commercial license. If you hit it big, just contact them, tell them there was a problem with having the correct license earlier, and say you want to fix the situation.
There is a 0% chance Timescale would sue a mom'n'pop operation for breaking their license.
Your assumptions are based on good faith, and businesses are not run on that :) Imagine Oracle or IBM bought Timescale; that 0% chance suddenly increases dramatically.
I imagine they would legally need a lawsuit to set a precedent, and if a license owner sets an over-reaching precedent for what counts as a wrapper, they risk losing customer trust and having companies avoid them like the plague.
E.g., TimescaleDB going after a TSDB-as-a-service company offering TSDB behind a GraphQL wrapper vs. TimescaleDB going after a financial company offering time-series data collection and viewing.
I think a good borderline test would be: would TimescaleDB allow you to offer a metrics and logging service? Technically you're offering time-series database functionality, but it's in a constrained domain and very clearly a different product, while still effectively CRUDing time-series data.
That's the internal use restriction. There is also the restriction on Value Added Products, more relevant to the use cases I'm talking about, which is that “the customer is prohibited, either contractually or technically, from defining, redefining, or modifying the database schema or other structural aspects of database objects”.
Which is basically saying that you can do anything that doesn't give your customers the ability to redefine or modify the database schema, as long as you are creating a product that adds value on top of Timescale. Is any of this 100% clear? Not any more than legalese generally is, and of course it's probably wise to talk to a lawyer if you're concerned about it. Timescale has made their intent with the license clear in the past with blog posts and such, though.
Does LLVM have good RISC-V support for cross compiling? In my experience, even AArch64 isn't universally supported as a cross-compilation target, and building on embedded devices is an exercise in patience. I never even considered trying RISC-V chips in production since I assumed libs would be a hassle.
Even on the RPi 4, it's near impossible for a normal person to use the GPU, because the VideoCore VI is so locked down that you get zero library support or documentation. And I'm pretty sure the only assembler still comes from reverse engineering rather than from the manufacturer.