> You make sure to use a Japanese specific font (not just a CJK one) if the language setting is JA.
What "language setting", and how do you check it? Do your testers know that they have to test this Japan-specific thing, and how to even tell whether it's working or not? And what about users who need to read Japanese, but don't want their UI to be in Japanese - or, worse, users who need to read both Japanese and Chinese?
You have to render strings in a (Chinese|Japanese) font if you believe the string is meant to be (Chinese|Japanese). Literally that. That's the official, Consortium-sanctioned way to handle Chinese/Japanese/Korean characters.
There are no particularly good ways or ready-made frameworks for that, as it wasn't a huge issue pre-Internet because most people are monolingual in these languages: you picked a language in the OS (or bought a computer with it baked into ROM) and everything the user saw was in the user's language.
It's a giant pain today - there's no "Arial in Chinese", no easy way to mesh multiple fonts together in a UI, no good way to determine the intended language of a string, and the fallback default is the least common denominator of Simplified Chinese (PRC) for some reason - but not much is being done on any of those fronts.
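The closest thing to a workaround I know of is a heuristic: if a string contains kana it's almost certainly Japanese, if it contains Hangul it's Korean, and if it's Han characters only you genuinely can't tell from the code points alone. A rough Python sketch of that idea (the function is mine, purely illustrative):

```python
def guess_cjk_language(text: str) -> str:
    """Very rough guess at the intended language of a CJK string.

    Kana implies Japanese, Hangul implies Korean; bare Han characters
    are genuinely ambiguous between Chinese and Japanese.
    """
    has_han = False
    for ch in text:
        cp = ord(ch)
        if 0x3040 <= cp <= 0x30FF:      # Hiragana or Katakana
            return "ja"
        if 0xAC00 <= cp <= 0xD7A3:      # Hangul syllables
            return "ko"
        if 0x4E00 <= cp <= 0x9FFF:      # CJK Unified Ideographs
            has_han = True
    return "zh-or-ja (ambiguous)" if has_han else "unknown"

print(guess_cjk_language("こんにちは"))   # ja
print(guess_cjk_language("你好"))         # ambiguous: Han only
```

And it breaks exactly on the hard case: short Han-only strings, which is most names and labels.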
I realized I can add a bit more here in the hope it'll be useful to someone at some point, all of it [citation needed]:
It seems that there is a Chinese regulation(?) requiring Chinese text to _always_ be displayed in an _appropriate_ font, which causes the "wrong font" problem for Japanese users. There is no equivalent requirement anywhere else, so software never faces a hard dilemma of two regulations colliding on the same code point: one side is law and the other is just unanimous customer complaints.
The rationale for that Chinese regulation(?), I think, is that Chinese computer users long had an exaggerated version of the same problem due to how Kanji was adopted into Unicode: the Kanji map was created by first enumerating common-use Japanese Kanji from reasonable existing tables, then merging in the additional Chinese common-use characters not found in Japanese texts, of which there are plenty. Duplicates were merged on loose, pragmatic-at-the-time judgements - some by shape, some by meaning(!), some left as duplicates.
It seems to me that this led to a situation, lasting at least some period of time, where Chinese UTF-8 strings on a computer displayed as a mishmash of Japanese and Chinese fonts, with the Chinese font filling the gaps of the Japanese one rather than the other way around. That was frustrating (imagine the top 10 most used letters of the alphabet in Comic Sans and the rest in Arial), and it was solved by that regulation (the regulatory setup is a Chinese national GB or GB/T standard plus enforcement of "applicable industrial standards", I believe).
There have been attempted solutions to this problem of Chinese and Japanese text coexisting on the same system, such as registering all Japanese Kanji as first choices in the Unicode IVS map alongside the legitimate Japanese variants, so that every Japanese character carrying the appropriate IVS suffix sequence would render in its Japanese form. Obviously there's no font this is actually going to work well with, and it's also unnecessary bloat and just a committee backstage influence war.
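For anyone who hasn't run into IVS: it's just a base ideograph followed by a variation selector in the U+E0100..U+E01EF range, and whether anything changes visually depends entirely on whether the font knows that registered sequence. A quick Python sketch (I believe 葛 U+845B is one of the characters with registered variants, but treat the specific sequence as illustrative):

```python
# An Ideographic Variation Sequence is a base character plus a
# variation selector from the supplementary range U+E0100..U+E01EF.
base = "\u845b"            # 葛, a character with regional glyph variants
vs17 = "\U000e0100"        # VARIATION SELECTOR-17
ivs = base + vs17

print(len(ivs))                       # 2 code points, one "character" to the reader
print([hex(ord(c)) for c in ivs])     # ['0x845b', '0xe0100']
# Whether the VS17 form renders any differently from the bare character
# depends entirely on the font supporting that registered sequence.
```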
The moral of the story is, the "it's all Kanji after all" approach never worked, simultaneous Chinese-and-Japanese support in Unicode and Unicode-based apps is a mess, and it has to be fixed higher up at some point in the future.
I don't disagree. The only practical strategy now is: if you know the text/user is Japanese, use a specifically Japanese font (Windows and macOS have a few built-in options), not just one that has the CJK codepoints.
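Something like this is what I mean, as a sketch - the mapping from language tag to font is yours to maintain, and the font names below are just the built-in ones I believe ship with recent Windows and macOS (verify against your target OS versions):

```python
# Hypothetical mapping from language tag to platform font stacks.
FONTS_BY_LANG = {
    "ja":      ["Yu Gothic", "Hiragino Sans"],           # Japanese glyph forms
    "zh-Hans": ["Microsoft YaHei", "PingFang SC"],        # Simplified Chinese
    "zh-Hant": ["Microsoft JhengHei", "PingFang TC"],     # Traditional Chinese
    "ko":      ["Malgun Gothic", "Apple SD Gothic Neo"],  # Korean
}

def font_stack(lang_tag: str) -> list[str]:
    """Pick a font stack from a BCP 47-ish language tag, with a generic fallback."""
    primary = lang_tag.split("-")[0]
    return FONTS_BY_LANG.get(lang_tag) or FONTS_BY_LANG.get(primary, ["sans-serif"])

print(font_stack("ja"))        # ['Yu Gothic', 'Hiragino Sans']
print(font_stack("zh-Hans"))   # ['Microsoft YaHei', 'PingFang SC']
```

On the web the equivalent trick is `lang` attributes plus `:lang()` CSS selectors, but the idea is the same: the decision has to be keyed off a language tag, not off the code points.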
> For example on an android phone, set language to Japanese and all kanji is in a Japanese font by default.
If it's an app that uses the native UI toolkit, sure. If it's using one of the package-it-up frameworks, you'd better hope the developer configured it correctly.
> For more niche uses you can usually set the font or language on a per app basis.
You very often can't, or it's impractically difficult for regular users. Try changing your locale but not your language and watch how many programs screw it up.
> You make sure to use a Japanese specific font (not just a CJK one) if the language setting is JA. It's not that hard...
I need to use the Japanese language in a setting outside of the local language setting. Even my wife needs that (she's a native Japanese speaker). Just switching everything to JA is simply not an option, and it shouldn't be necessary if UTF-8 could just do the right thing. Granted, it does, up to a point. But sometimes there are issues I haven't been able to work around.
What does "works" actually mean here? I don't know how that behaves in Google Sheets or Excel. Is it evaluated exactly once, the first time the formula is entered? Every time you focus the input? Are the dice rerolled when A1 or A2 is modified? What?
Hi 8n4vidtmkvmk, the algorithms for evaluating spreadsheets are surprisingly tricky, mainly because of the dependencies. The dependencies are only known at runtime, and in Excel they are lazily evaluated. So something like `IF(condition, value1, value2)` first evaluates the condition; if it is true, it evaluates value1 but not value2. So things that in other programming languages would be a circular dependency are not one in Excel.
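To illustrate the lazy IF point with a toy sketch (nothing to do with Excel's real engine):

```python
# Toy sketch of a lazily evaluated IF: the unused branch is never computed,
# so it can't create a "real" circular dependency.
def lazy_if(condition, then_thunk, else_thunk):
    return then_thunk() if condition else else_thunk()

cells = {"A1": 10}
# B1 = IF(A1 > 5, A1 * 2, B1 + 1)  -- the else branch mentions B1 itself,
# but it is never evaluated while the condition holds.
cells["B1"] = lazy_if(cells["A1"] > 5,
                      lambda: cells["A1"] * 2,
                      lambda: cells["B1"] + 1)
print(cells["B1"])   # 20
```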
The problem of computing the dependencies might be solved by a topological sort. The complication of runtime dependencies is made worse by functions whose dependencies change every time (or whose outputs do not depend solely on their inputs), like random functions or date functions. An optimization while evaluating a spreadsheet would be to only recompute the cells that depend on cells whose value changed; if you do that naively, you miss those volatile functions.
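As a deliberately simplified sketch of that recalculation idea (dependency order plus a volatile flag; the cells and structure are made up, and real engines also handle cycles, lazy IF and much more):

```python
from graphlib import TopologicalSorter
import random

# Each cell: (formula as a function of current values, dependencies, volatile?)
formulas = {
    "A1": (lambda v: 7,                    [],           False),
    "A2": (lambda v: v["A1"] * 2,          ["A1"],       False),
    "B1": (lambda v: random.randint(1, 6), [],           True),   # volatile, e.g. RANDBETWEEN
    "B2": (lambda v: v["A2"] + v["B1"],    ["A2", "B1"], False),
}

def recalc(changed, values):
    # Evaluate in dependency order; recompute volatile cells, changed cells,
    # and anything downstream of a recomputed cell.
    deps = {cell: set(d) for cell, (_, d, _) in formulas.items()}
    order = TopologicalSorter(deps).static_order()
    dirty = set(changed) | {c for c, (_, _, volatile) in formulas.items() if volatile}
    for cell in order:
        fn, d, _ = formulas[cell]
        if cell in dirty or any(x in dirty for x in d):
            values[cell] = fn(values)
            dirty.add(cell)
    return values

values = recalc(changed=set(formulas), values={})   # first full evaluation
values = recalc(changed={"A1"}, values=values)      # edit A1: A2, B1 (volatile), B2 recompute
print(values)
```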
I realize I am most likely babbling too much.
Yes, volatile functions like RANDBETWEEN get evaluated each time a cell changes. They don't get evaluated when you focus on them.
For $1000 per month you can get a c8g.12xlarge (assuming you use some kind of savings plan).[0] That's 48 cores, 96 GB of RAM and 22.5+ Gbps networking. Of course you still need to pay for storage, egress etc., but you seem to be exaggerating a bit....they do offer a 44 core Broadwell/128 GB RAM option for $229 per month, so AWS is more like a 4x markup[1]....the C8g would likely be much faster at single threaded tasks though[2][3]
Wouldn't a c8g.12xlarge with 500 GB of storage (only EBS is possible), plus 1 Gbps from/to the internet, be 5,700 USD per month? That's some discount you have.
If I try to match the actual machine: 16 GB RAM, and a rough estimate is that their Xeon E3-1240 would be ~2 AWS vCPUs, so an r6g.large is the instance that would roughly match it. Add a 500 GB disk + 1 Gbps to/from the internet and ... monthly cost: 3,700 USD.
Without any disk and without any data transfer (which would make it unusable) it's still ~80 USD. Maybe you could create a bootable image that calculates primes.
These are still not the same thing, I get it, but ... it's safe to say you cannot get anything remotely comparable on AWS. You can only get a different thing for way more money.
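If anyone wants to redo this kind of estimate themselves, the arithmetic is simple enough to sketch; every rate below is a placeholder assumption, so substitute the current numbers from the AWS pricing pages and your actual traffic:

```python
# Back-of-the-envelope AWS monthly cost. All rates are placeholder
# assumptions -- look up current on-demand, EBS and egress pricing.
HOURS_PER_MONTH = 730

instance_per_hour = 1.90     # assumed on-demand $/hour for the instance
ebs_gb = 500
ebs_per_gb_month = 0.08      # assumed gp3 $/GB-month
egress_tb = 10               # assumed outbound traffic per month, in TB
egress_per_gb = 0.09         # assumed $/GB out to the internet

instance = instance_per_hour * HOURS_PER_MONTH
storage = ebs_gb * ebs_per_gb_month
egress = egress_tb * 1000 * egress_per_gb

print(f"instance ≈ ${instance:,.0f}, storage ≈ ${storage:,.0f}, "
      f"egress ≈ ${egress:,.0f}, total ≈ ${instance + storage + egress:,.0f}")
# Even modest egress adds up fast next to a flat-fee dedicated box with
# bundled bandwidth, which is where most of the gap comes from.
```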
That's not 48 dedicated cores, it's 48 "vCPUs". There are probably 1,000 other EC2 instances running on those cores stealing all the CPU cycles. You might get 4 cores of actual compute throughput. Which is what I was saying.
That's not how it works, sorry (unless you use burstable instances, like T4g). You can run them at 100% for as long as you like, and you get the same performance (minus a small virtualization overhead).
Are you telling me that my virtualized EC2 server is the only thing running on the physical hardware/CPU? There are no other virtualized EC2 servers sharing time on that hardware/CPU?
If you are talking about regular EC2 (not the T series, or Lambda, or Fargate, etc.) you get the same performance (within, say, 5%) as the underlying hardware. If you're using a core, it's not shared with another user. The pricing validates this: the "metal" version of a server on AWS is the same price as the full regular EC2 version.
In fact, you can even get a small discount with the -flex series, if you're willing to compromise slightly (a small discount for 100% of the performance 95% of the time).
This seems pretty wild to me. Are you saying that I can submit instructions to the CPU and they will not be interleaved and the registers will not be swapped-out with instructions from other EC2 virtual server applications running on the same physical machine?
Yes, I am aware. M4 Max would need 4 LPCAMM2 modules, and a hypothetical M4 Ultra would need 8. This sounds unrealistic (especially 4 modules in a laptop), which is why I mentioned a proprietary connector instead. There is precedent for this type of thing in the Mac Studio, where the SSD NAND chips are on proprietary removable modules.
No it doesn't, it says "Cream". That's the issue.