At this point, we have to assume anything that becomes a published benchmark is specifically targeted during training. That's not unique to LLMs or OpenAI. Compiler companies have done the same thing for decades, detecting common benchmark programs and inserting hand-crafted optimizations. Similarly, the shader compilers in GPU drivers have special cases for common games and benchmarks.
Apples and oranges. VW actually cheated on regulatory testing to bypass legal requirements. For the comparison to hold, the government would first need to pass laws saying, e.g., that only compilers passing a certain benchmark may be used for purchasable products, and then the developers would need to manipulate behaviour during those benchmarks.
There's a sliding scale of badness here. The emissions cheating (it wasn't just VW, incidentally; they were just the first uncovered. Fiat-Chrysler, Mercedes, GM and BMW were also caught doing it, with suspicions about others) was straight-up fraud.
It used to be common for graphics drivers to outright cheat on benchmarks (the actual image produced would not be the same as it would have been if a benchmark had not been detected); this was, arguably, fraud.
It used to be common for mobile phone manufacturers to allow the SoC to operate in a thermal mode that was never available to real users when it detected a benchmark was being used. This is still, IMO, kinda fraud-y.
Optimisation for common benchmark cases where the thing still actually _works_, and where the optimisation is available to normal users where applicable, is less egregious, though still, IMO, Not Great.
I think breaking a law is more unethical than not breaking a law.
Also, legality isn't the only difference in the VW case. VW had a "good emissions" mode. They enabled it during the test and disabled it during regular driving, even though it would have worked just fine on the road. With compilers, there's no "good performance" mode that would work during regular usage but gets switched off outside the benchmark.
> I think breaking a law is more unethical than not breaking a law.
It sounds like a mismatch of definitions, but I doubt you're ambivalent about a behavior right up until the moment it becomes illegal, after which you think it unethical. Law is the codification and enforcement of a social contract, not the creation of it.
But following the law is itself a load-bearing aspect of the social contract. Violating building codes, for example, might not cause immediate harm if the work is competent but unusual, yet it's important that people follow them simply because you don't want arbitrariness in matters of safety. The objective ruleset is a value beyond the individual rules themselves, provided the rules are sensible and in accordance with deeper values, which of course they sometimes aren't, in which case we value civil disobedience and activism.
Also, while laws ideally are inspired by an ethical social contract, the codification process is long, complex and far from perfect. And even in the best of cases, rules concerning permissible behavior are enforced extremely sparingly, simply because it's neither possible nor desirable to detect and deal with all infractions. Nor are they applied blindly and equally. As actually applied, a law is definitely not close to some ethical ideal; sometimes it's outright opposed to it, even.
Law and ethics are barely related, in practice.
For example, in the vehicle emissions context, it's worth noting that even well before VW was caught, the actions of likely all carmakers affected by the regulations (not necessarily to the same extent) were clearly unethical. The rules had been subject to intense, clearly unethical lobbying for years, and so even the legal lab results bore little resemblance to practical on-the-road results through systematic (yet legal) abuse. I wouldn't be surprised to learn that even what was measured intentionally diverged from what is actually harmful, in a profitable way. It's a good thing VW was made an example of - but clearly that didn't resolve the general problem of harmful vehicle emissions. Optimistically, it might have signaled to the rest of the industry, and VW in particular, to stretch the rules less in the future.
>I doubt you're ambivalent about a behavior right until the moment it becomes illegal, after which you think it unethical.
There are many cases where I think that. Examples:
* Underage drinking. If it's legal for someone to drink, I think it's in general ethical. If it's illegal, I think it's in general unethical.
* Tax avoidance strategies. If the IRS says a strategy is allowed, I think it's ethical. If the IRS says a strategy is not allowed, I think it's unethical.
* Right on red. If the government says right on red is allowed, I think it's ethical. If the government (e.g. NYC) says right on red is not allowed, I think it's unethical.
The VW case was about emissions regulations. I think they have an ethical obligation to obey emissions regulations. In the absence of regulations, it's not an obvious ethical problem to prioritize fuel efficiency over emissions (which is, I believe, what VW was doing).
Drinking and right turns are unethical if they’re negligent. They’re not unethical if they’re not negligent. The government is trying to reduce negligence by enacting preventative measures that stop ALL right turns and ALL drinking in certain contexts that are more likely to yield negligence, or where the negligence would be particularly harmful, but that doesn’t change whether or not the behavior itself is negligent.
You might consider disregarding the government’s preventative measures unethical, and doing those things might be how someone disregards the government’s protective guidelines, but that doesn’t make those actions unethical any more than governments explicitly legalizing something makes it ethical.
To use a clearer example, the ethicality of abortion— regardless of what you think of it— is not changed by its legal status. You might consider violating the law unethical, so breaking abortion laws would constitute the same ethical violation as underage drinking, but those laws don’t change the ethics of abortion itself. People who consider it unethical still consider it unethical where it’s legal, and those that consider it ethical still consider it ethical where it’s not legal.
It's not so simple. An analogy is the Rust formatter that has no options so everyone just uses the same style. It's minimally "unethical" to use idiosyncratic Rust style just because it goes against the convention so people will wonder why you're so special, etc.
If the rules themselves are bad and go against deeper morality, then it's a different situation; violating laws out of civil disobedience, emergent need, or with a principled stance is different from wanton, arbitrary, selfish cheating.
If a law is particularly unjust, violating the law might itself be virtuous. If the law is adequate and sensible, violating it is usually wrong even if the violating action could be legal in another sensible jurisdiction.
the right on red example is interesting because in that case, the law changes how other drivers and pedestrians will behave in ways that make it pretty much always unsafe
That just changes the parameters of negligence. On a country road in the middle of a bunch of farm land where you can see for miles, it doesn’t change a thing.
> but that doesn’t make those actions unethical any more than governments explicitly legalizing something makes it ethical
That is, sometimes, sufficient.
If the government says ‘seller of a house must disclose issues’, then I rely on the law being followed; if you sell and leave the country, you have defrauded me.
However if I live in a ‘buyer beware’ jurisdiction, then I know I cannot trust the seller and I hire a surveyor and take insurance.
There is a degree of setting expectations: if there is a rule, even if it’s a terrible rule, I as an individual can at least take some countermeasures.
You can’t take countermeasures against all forms of illegal behaviour, because there is an infinite number of them. And a truly insane person can’t be predicted at all.
Ethics are only morality if you spend your entire time in human social contexts. Otherwise morality is a bit larger, and ethics are a special case of group recognized good and bad behaviors.
What if I make sure to have a drink once a week for the summer with my 18 year old before they go to college because I want them to understand what it's like before they go binge with friends? Is that not ethical?
Speeding to the hospital in an emergency? Lying to Nazis to save a Jew?
Law and ethics are more correlated than some are saying here, but the map is not the territory, and it never will be.
If following an unethical law would in itself be unethical, then breaking that law would be the only ethical choice. In this case, cheating emissions, which I see as unethical but also advantageous for the consumer, should have been done openly if VW saw following the law as unethical. Ethics and morality are subjective to understanding, and law only a crude approximation of divinity. Though I would argue that each person on the earth, through a shared common experience, has a rough and general idea of right from wrong... though I'm not always certain they pay attention to it.
I disagree- presumably if an algorithm or hardware is optimized for a certain class of problem it really is good at it and always will be- which is still useful if you are actually using it for that. It’s just “studying for the test”- something I would expect to happen even if it is a bit misleading.
VW cheated such that the low emissions were only active during the test- it’s not that it was optimized for low emissions under the conditions they test for, but that you could not get those low emissions under any conditions in the real world. That's "cheating on the test" not "studying for the test."
How so? VW intentionally changed the operation of the vehicle so that its emissions met the test requirements during the test and then went back to typical operation conditions afterwards.
VW was breaking the law in a way that harmed society but arguably helped the individual driver of the VW car, who gets better performance yet still passes the emissions test.
That is not true. Even ChatGPT understands how they are different. I won’t paste the whole response, but here are the differences it highlights:
Key differences:
1. Intent and harm: VW’s actions directly violated laws and had environmental and health consequences. Optimizing LLMs for chess benchmarks, while arguably misleading, doesn’t have immediate real-world harms.
2. Scope: Chess-specific optimization is generally a transparent choice within AI research. It’s not a hidden “defeat device” but rather an explicit design goal.
3. Broader impact: LLMs fine-tuned for benchmarks often still retain general-purpose capabilities. They aren’t necessarily “broken” outside chess, whereas VW cars fundamentally failed to meet emissions standards.
Tesla cheats by using electric motors and deferring emissions standards to somebody else :D Wait, I really think that's a good thing, but once Hulk Hogan is confirmed administrator of the EPA, he might actually use this argument against Teslas and other electric vehicles.
This is the apples-to-apples version. Perhaps it would be more accurate to say that when the model detects a benchmark attempt it tries the prompt 3 times with different seeds and picks the best answer, whereas in everyday use it just zero-shots the prompt.
I say this because the test still used the same hardware (model) but changed the way it behaved by running emissions-friendly parameters (a different execution framework) that wouldn’t have been used in everyday driving, where fuel-efficiency- and performance-optimized parameters were used instead.
What I’d like to know is if it actually was unethical or not. The overall carbon footprint of the lower fuel consumption setting, with fuel manufacturing and distribution factored in, might easily have been more impactful than the emissions model, which typically does not factor in fuel consumed.
Most of the time these days, compiler writers are not cheating like VW did. In the 1980s compiler writers would insert code to recognize performance tests and then cheat: outputting values hard-coded into the compiler instead of running the algorithm. Which is the type of thing that VW got in trouble for.
These days most compilers are trying to make the general case of code fast and they rarely look for benchmarks. I won't say they never do this - just that it is much less common - if only because magazine reviews/benchmarks are not nearly as important as they used to be and so the incentive is gone.
I think the OP's point is that GPT-3.5 may have a chess engine baked into its (closed and unavailable) code for PR purposes. So it "realizes" that "hey, I'm playing a game of chess" and then, rather than doing whatever it normally does, it just acts as a front-end for a quite good chess engine.
Not quite. VW got in trouble for running _different_ software in test vs prod. These optimizations are all going to "prod" but are only useful for specific targets (a specific game in this case).
> VW got in trouble for running _different_ software in test vs prod.
Not quite. They programmed their "prod" software to recognise the circumstances of a laboratory test and behave differently. Namely during laboratory emissions testing they would activate emission control features they would not activate otherwise.
The software was the same they flash on production cars. They were production cars. You could take a random car from a random dealership and it would have done the same trickery in the lab.
I disagree with your distinction on the environments but understand your argument. Production for VW, to me, is "on the road when a customer is using your product as intended". Using the same artifact for those different environments isn't the same as "running that in production".
“Test” environment is the domain of prototype cars driving at the proving ground. It is an internal affair, only for employees and contractors. The software is compiled on some engineer’s laptop and uploaded to the ECU by an engineer manually. No two cars are ever the same, everything is in flux. The number of cars is small.
“Production” is a factory line producing cars. The software is uploaded to the ECUs by some factory machine automatically. Each car is exactly the same, with the exact same software version across thousands and thousands of cars. The cars are sold to customers.
Some small number of these production cars are sent for regulatory compliance checks to third parties. But those cars don’t suddenly become non-production cars just because someone sticks a probe up their exhausts. The same way gmail’s production servers don’t suddenly turn into test environments just because a user opens the network tab in their browser’s dev tools to see what kind of requests fly on the wire.
Only because what VW did is illegal, was super large scale, and could be linked to a lot of indirect deaths through the additional pollution.
Benchmark optimizations are slightly embarrassing at worst, and an "optimization for a specific use case" at best. There's no regulation against optimizing for a particular task, everyone does it all the time, in some cases it's just not communicated transparently.
Phone manufacturers were caught "optimizing" for benchmarks again and again, removing power limits to boost scores. Hard to name an example without searching the net because it's at most a faux pas.
It’s supposedly a “fallacy” and “arbitrary” and yet, 26 years after this prediction, the growing languages right now are Rust (compiled, statically typed systems language) and Python (dynamically typed glue code written by non-CS mathematics people). Maybe the distinction is not so arbitrary after all.
Though Python compiles to bytecode (pyc) and recently added optional typing.
On the other side, Rust can be compiled to WASM, which can be considered a form of bytecode.
If you look at Clojure (app lang w/o types, very dynamic) or Elm (strongly typed app lang that transpiles to a scripting lang), you'll find the situation today is even more whimsical than Ousterhout could have predicted.
There is "Is identical", "looks identical" and "has lost sufficient detail to clearly not be the original." - being able to differentiate between these three states is useful.
Importantly, the first one is parameterless, but the second and third are parameterized by the audience. For example, humans don't see colour very well; some animals have a much better colour gamut, while some can't distinguish colour at all.
Calling one of them "perceptually lossless" is cheating, to the disadvantage of algorithms that honestly advertise themselves as lossy while still achieving "looks identical" compression.
It's a well established term, though. It's been used in academic works for a long time (since at least 1970), and it's basically another term for the notion of "transparency" as it relates to data compression.
I honestly don't notice this anymore. Advertisers have been using such language since time immemorial, to the point it's pretty much a rule that an adjective with a qualifier means "not actually ${adjective}, but kind of like it in ${specific circumstances}". So "perceptually lossless" just means "not actually lossless, except you couldn't tell it from truly lossless just by looking".
It is in no way the definition of lossy. It is a subset of lossy. Most lossy image/video compression has visible artifacting, putting it outside the subset.
> Returning to an alternate address shouldn’t be significantly more expensive than returning to the default address, so this has to be cheap.
Modern CPUs add complications to arguments like this. Branches stall the execution pipeline, so branch prediction was invented to keep the pipeline flowing. Return instructions are perfectly predicted, which makes them literally free. At the very least, any alternate return scheme has to pay for a full misprediction. That can be expensive.
This doesn't really make sense. The branch predictor relies on a history of previous executions of that branch, or on explicit hints, to decide if a branch will be taken or not. Based on this prediction, the speculative execution hardware then sees the jump (return/panic) and loads the code from that address into the icache. There is 0 difference between `if (condition) jump $panic_recover_address` and `if (condition) jump $function_return_address` in terms of how easy or hard it is to predict or speculatively load based on the prediction.
Not that you’re wrong, but returns aren’t predicted using the branch predictor; they use the RSB (return stack buffer), which stores the return addresses of the current call stack. The x86 optimization manual (starting quite a few years ago) explicitly mentions that calls and rets should match for best performance.
On x86, ret and call are explicit instructions. Ret always predicts the address of the last call, which is (usually) 100% accurate. Your example of `if (condition) jump $panic_recover_address` contains two branches, either of which can be mispredicted.
Intel (and most CPUs really) have had a "shadow" call stack for ret+call prediction (the Return Stack Buffer) decades before they had the control-flow integrity shadow stack. It is possible that the two stacks are now unified, but I'm not 100% sure.
The RSB has 10 or so entries and it is in the critical path, while the integrity stack might be larger and have less strict performance characteristics, so they might be separate objects.
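For intuition, here's a toy model of what that return stack buffer does (a hypothetical Python sketch, not how any real CPU implements it): push the predicted return address on every call, pop it on every ret, and count a misprediction whenever the popped prediction doesn't match where the code actually returns. As long as calls and rets pair up and the small buffer doesn't overflow, the pop always matches, which is why matched call/ret is effectively free.

```python
# Toy return-stack-buffer model: calls push a predicted return address,
# rets pop one; a mismatch with the real return target counts as a misprediction.
class ReturnStackBuffer:
    def __init__(self, depth=16):
        self.depth = depth          # real RSBs are small and fixed-size
        self.stack = []
        self.mispredicts = 0

    def on_call(self, return_address):
        if len(self.stack) == self.depth:
            self.stack.pop(0)       # overflow: the oldest entry is lost
        self.stack.append(return_address)

    def on_ret(self, actual_target):
        predicted = self.stack.pop() if self.stack else None
        if predicted != actual_target:   # e.g. longjmp, stack switching, overflow
            self.mispredicts += 1

rsb = ReturnStackBuffer()
rsb.on_call(0x401005)    # call f: the instruction after the call is 0x401005
rsb.on_ret(0x401005)     # f returns where the call said it would: predicted
rsb.on_call(0x401010)
rsb.on_ret(0x402000)     # returning somewhere else (unmatched call/ret): mispredicted
print(rsb.mispredicts)   # 1
```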
Yes it can, prediction begins before decode. In general, literally any instruction can be mispredicted, even if it isn't a branch at all, even if there isn't even a instruction there (on x86 where instructions are variable length).
For the uninitiated, branch prediction roughly works like this:
The CPU fetches instructions from memory well ahead of actually decoding what the instructions are. In case of variable length instruction sets such as x86 that also means the cpu has no ability to "peek ahead" in the instruction stream to find out if there is a branch.
But don't despair, there is a trick:
Each instruction (obviously) has an address. So if you had an associative memory (think of it as a hash map) that stored a pair of (address of a branch; target address) then you can consult this memory while you're fetching instructions to feed to the decoder stage of the pipeline.
When the address of the next instruction you're about to fetch is found in that associative memory you get the address of where the rest of the instruction stream lives. I.e. instead of fetching the next word sequentially you continue fetching from whatever address was found by the lookup.
Now, when you actually end up executing the instructions it may turn out that the target address suggested by that lookup memory was wrong. In that case you just flush the pipeline and start fetching again from the actual target (and you update the branch predictor associative memory).
This basic model works for conditional branches, unconditional and indirect branches too but in practice there are more tricks. Some indirect jumps like returns exploit the natural call/ret pairing as described elsewhere in this thread. Conditional branch entries may contain an extra bit taken/not-taken etc.
But the main idea is more or less this.
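To make the main idea concrete, here's a hypothetical Python sketch of that associative memory, a branch target buffer: the fetcher consults it with the current fetch address, follows the stored target on a hit, and the execute stage fixes things up and updates the table when the prediction turns out to be wrong. (Illustrative only; real BTBs are small set-associative hardware caches, not dictionaries.)

```python
# Toy branch-target-buffer model: maps branch address -> predicted target address.
class BranchTargetBuffer:
    def __init__(self):
        self.table = {}

    def predict(self, fetch_address):
        # Returns the predicted next fetch address on a hit,
        # or None meaning "just fall through to the next sequential word".
        return self.table.get(fetch_address)

    def update(self, branch_address, actual_target):
        # Called at execute time, once the real target is known.
        self.table[branch_address] = actual_target

btb = BranchTargetBuffer()

# First time we see the branch at 0x1000 there is no entry: predict fall-through,
# discover at execute time that it went to 0x2000, flush, and learn the target.
assert btb.predict(0x1000) is None
btb.update(0x1000, 0x2000)

# Next time around the fetcher is steered straight to 0x2000: no stall.
assert btb.predict(0x1000) == 0x2000
```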
To "mispredict" an unconditional jump for example all it takes is to modify the code so that the instruction points to a different target.
If the branch predictor's target address still points to the old target, that address will be prefetched and a stall will be caused. No big deal in practice.
Jumping to a destination via pointer that changed value is a misprediction of an indirect jump and that's common.
More uncommon but technically possible is to mispredict a unconditional direct jump.
For that to happen the code itself has to change.
Indeed JIT is a common cause of mutable code at runtime.
But unmapping a library and remapping another library into the same memory range can also effectively cause the same address to contain a different instruction than the one predicted by the branch prediction logic (likely not even a branch instruction).
This is a good point that I hadn't considered - these are indirect jumps, and the return instruction has special handling from the processor to compute the address specifically, which a jump to a recover address can't have.
I thought the Branch Target Predictor on x64 was global, not local, and it has to kick in before decode, so even direct branches can be mispredicted. Branch prediction has 2 parts: the conditional predictor and the target predictor. The conditional predictor is actually per 64-byte instruction block (so if you have a few branches consecutively, they share branch predictor entries and can step on each other). The target predictor uses a global history and needs to happen very early to keep the front end fed.
Predicting the branch predictor is extremely difficult because it is complex; afaik it is best to test.
All of the interaction between a million caches, the predictor, instruction parallelism, different CPUs, different code, etc. feels like it is impossible to reason about.
Modern CPUs have a "return address stack" which basically mirrors the real stack and allows them to perfectly predict returns (for normal code anyway).
First explanation I found on Google. Haven't read it:
Returns, conditional jumps and indirect jumps each have fairly different prediction profiles. In particular, paired call+ret are predicted perfectly, at least as long as the dedicated internal stack doesn't overflow; indirect jumps are, as a general rule, less predictable than conditional jumps, as more state (the jump target) needs to be stored.
But in the given context, returning a Return type will almost by necessity involve a conditional at the caller site, so for an apples-to-apples comparison that is what should be compared, not a plain return and nothing else.
I think quantization is a red herring. If there's any way to undo the unlearning, this means that the knowledge is still in the weights -- that's basic information theory. I'm sure there are a million other ways to recover the lost knowledge that don't involve quantization.
Okay, but narrowly focusing on a "quantization-robust unlearning strategy" as per the abstract might be a red herring, if that strategy doesn't incidentally also address other ways to undo the unlearning.
I think it's useful because many people consume quantized models (most models that fit in your laptop will be quantized and not because people want to uncensor or un-unlearn anything). If you're training a model it makes sense to make the unlearning at least robust to this very common procedure.
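A crude illustration of why quantization robustness matters (a hypothetical sketch only, not how any actual unlearning or quantization scheme works): if "unlearning" only nudges weights by amounts smaller than the quantization step, round-to-nearest quantization tends to snap most of them straight back to their original values.

```python
import numpy as np

def quantize(w, step=1/16):
    # Toy round-to-nearest quantizer with a fixed step size.
    return np.round(w / step) * step

rng = np.random.default_rng(0)
original = rng.normal(size=1000).astype(np.float32)

# Pretend "unlearning" perturbed each weight by less than half a quant step.
unlearned = original + rng.uniform(-0.02, 0.02, size=original.shape)

same = quantize(original) == quantize(unlearned)
# Typically well over half the quantized "unlearned" weights are identical to the
# quantized originals, so behaviour hidden by the small perturbation can reappear.
print(f"{same.mean():.0%} of quantized weights unchanged by the perturbation")
```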
This reminds me of this very interesting paper [1] that finds that it's fairly "easy" to uncensor a model (modify its refusal thingy).
Yeah, exactly this. You would really want to pursue orthogonal methods for robust unlearning, so that you can still use quantization to check that the other methods worked.
That’s like saying that encryption is a red herring. Yes, the information is there, but recovering it is a different matter. In this case, quantisation allows you to recover the information without knowing the “cypher” used to “forget” it - that’s the important distinction.
If there is any way to undo the unlearning, there is also a way to use that method to identify the weights carrying the information to stop them from conveying that information. At the heart of training is detection.
The information may still be in there, but undetectable by any known means. You can certainly remove the information; setting every weight in the model to zero will do that. Identifying when you have achieved the goal of completely removing one piece of information while not destroying other information might not be possible.
I'm not sure if that will mean there might in the future be something analogous to zero-day unlearning reversal exploits.
It's not a bad question, these units of measurements are always a bit confusing. You can similarly ask why for humans, rubbing a balloon is harmless, although that builds up 30 kV of static electricity, while touching a 230 V power socket can kill you.
Voltage is merely the "pressure" that charged particles experience. Voltage alone tells you nothing about how much charge is actually available once electricity is allowed to flow. And that's where the harm comes from. For static electricity, when you touch something, you get maybe a microcoulomb, once, and it's gone. For a power socket, you get up to 16 coulombs per second continuously.
But can e.g. 3V DC kill? Perhaps, depending on the body's resistance, but my impression is that the effect would be different from, say, 220V AC, which affects the nerves.
Not generally; remember Ohm's Law, I = V/R. Internally the body has a resistance of ~300 Ohms as a rough rule, while our skin is 1,000-10,000 Ohms depending on its condition and the contact area involved.
So 3V isn't going to pose any real risk unless it's applied internally and right across a critical nerve leading to your heart or a muscle directly on the heart. For reference pacemakers are generally set to 2-3 volts. Applied externally up to ~12V is generally considered low enough voltage there's a low risk of truly adverse effects.
10 milliamps across your heart can kill, and using Ohm's law we can calculate that 3 V / 0.010 A gives a resistance of 300 ohms. This means you're probably still going to have a bad time if you apply it directly across your heart during open-heart surgery, but other than that, 3 volts just isn't enough to drive a lethal current through your skin.
Which is why if caught in a lightning storm you should crouch with feet together and why I try very hard to only use one hand when doing something that might have the potential to shock me.
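Plugging in the same assumed numbers from above (roughly 10 mA across the heart as a danger threshold, ~300 Ohms internal resistance, around a kilohm best-case through skin):

```python
# Ohm's law sanity check: I = V / R, expressed in milliamps.
def current_ma(volts, ohms):
    return volts / ohms * 1000

print(current_ma(3, 300))      # ~10 mA: only plausible with electrodes inside the body
print(current_ma(3, 1000))     # 3 mA even through very low skin resistance
print(current_ma(230, 1300))   # ~177 mA through skin plus body from a mains socket
```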
Depends how it's applied and what's sourcing it. 3v does basically nothing to dry skin, but would be quite bad on wires implanted in your chest across your heart.
That's all theory. Thing is, I mess with large two-volt (nominal) storage cells; the largest are over 250 lbs and sit like dumb beasts, waiting to oblige anyone's low-voltage requests, hundreds of amps on tap. It would be nothing to bolt some nice shiny copper handles to the terminals and mist them down with some warm salt water.
I also mess around with microscopes, and compared to bugs, humans are very poorly made. So many tiny things are flawless living perfection, and some, like wolf (jumping) spiders, are smart, smart enough that they see us seeing them and are OK with that.
One thing that I have observed that plays into the static electricity thing is that many of the tiny critters I watch are impeccably clean, no dust or dirt on them at all, perfectly clean, unlike a human finger, which is one zillabutt ugly thing under magnification.
I'm not informed enough to rebut this, and don't want to be quoted in the follow-up article that suggests HN is still too dumb to get the genius of Ballmer, but here's my take.
It's only the footnote of the article that mentions Ballmer's "stage persona". I think that's the important point, and I would add that his "interview persona" might have been even worse. Back then, he was quoted as saying insanely dumb shit all the time. Like when he literally publicly laughed about the iPhone. Or when he called a Zune feature to share files between devices "squirting".
Maybe he did make all kinds of brilliant decisions internally. I wouldn't know, but neither would the stock market. If the CEO comes across as not understanding tech, it's likely the market will price that in.
I think a better way of understanding Ballmer is that he really struggled to relate to end consumers, but he understood their business partners very well.
He was very much an '80s/'90s exec. Back then, execs were not "visionary rockstars"; they were wisecracking boomers in suits. Microsoft walked a very thin line between very different worlds, and Ballmer was closer to the old-world type than the new.
I've been involved with computer networking since approximately 1982, and I've never once heard someone use "squirts" outside talking about Zunes. I don't doubt that it was jargon inside very specific niches, but it has never been common elsewhere.
Before the Ballmer/Zune use of the term I remember my father talking about data being squirted to A2A missiles (he was military) prior to launch, so perhaps that is one of the niches.
I take that back. An old issue of Wired had a jargon watch mention of “squirt the bird” as bouncing something off a satellite, which I remembered only because they misspelled it and I wondered what “quirt” meant.
So yeah, maybe that’s a military or adjacent thing.
Showing a picture of Notre Dame photoshopped against unsettling clouds to make a point about the psychological effect of its architecture is borderline fraud. Any actual photo makes the building look a lot more majestic rather than scary: https://upload.wikimedia.org/wikipedia/commons/f/f7/Notre-Da...
Also, I wonder to what extent this is an American perspective. Of course, American culture is omnipresent in Europe, so the association of Gothic buildings with horror movies has been hammered into our minds as well. But still, I don't think any European would look at Cologne Cathedral and be reminded of Ghostbusters of all things. I think unfamiliarity plays a role here.
Where's your evidence it's photoshopped? It's credited to "Pete Douglass/Getty Images" and Getty has a policy against photoshopped images.
It's just a photo on a day and time with particularly dramatic clouds. There's no "borderline fraud" here.
And of course it does have a lot to do with weather and lighting. Gothic horror is set in these environments at dusk and at night, in moonlight and in storms. Gothic horror doesn't generally utilize bright sunny days, so your photo isn't helping to illustrate the concept.
A building can be simultaneously majestic and inspiring during a warm sunny day, and become spooky and creepy in low light amidst the fog and cold damp.
My first thought when I read the article was that that image must have been run through something like a contrast-limited adaptive histogram equalization (CLAHE) process.
Even if I agree that Gothic architecture is the most appropriate setting for horror action, and I also agree with many of the arguments of the Italian Renaissance against what it called "Gothic", I still consider the great Gothic cathedrals the most beautiful buildings that have ever been built.
The Duomo is a weird kind of Gothic, most notably missing the tall proportions of most Gothic cathedrals. I've never seen it described as scary, but it has its creepy details, like the statue depicting Saint Bartholomew after being skinned, wearing his own skin.
As a non-English speaker, I do not consider Gothic architecture to be spooky. I saw Ghostbusters a long time ago and have only vague memories. The house in The Addams Family and similar revivals are not really Gothic to me.
On the other hand, there is not much Gothic left here except for a few cathedrals. Everything has been rebuilt in Baroque.
You’d probably be surprised how often people have to create config files programmatically. With properly specified formats, you can serialize the configuration, which means nesting and string escapes are taken care of automatically. With half-baked custom formats, you have to resort to string replacement and praying.
Having said that, there’s an official RFC for CSV, and INI files are de-facto specified by Microsoft’s implementation.
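A minimal illustration of the difference (Python, purely for illustration; the specific format doesn't matter): serializing through a real, specified format handles escaping and nesting and round-trips cleanly, while ad-hoc string templating breaks as soon as a value contains the delimiter or a newline.

```python
import json

settings = {
    "name": 'He said "hi"\nand left',     # quotes and a newline: the awkward cases
    "retries": 3,
    "servers": ["a.example", "b.example"],
}

# Proper format: escaping and nesting are handled by the serializer, and it round-trips.
text = json.dumps(settings, indent=2)
assert json.loads(text) == settings

# Ad-hoc "key = value" templating: no escaping, no nesting, no round-trip guarantee.
lines = [f"{key} = {value}" for key, value in settings.items()]
print("\n".join(lines))   # the embedded newline already corrupts the output
```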
Yeah, half-baked anything is going to screw you over. INI files are certainly programmatically serialisable, otherwise they wouldn't exist. It does move the datatypes into the program rather than encoding them in the config, though, which adds to program overhead. Horses for courses, though; with greater flexibility comes greater potential to f*ck it up.