Hacker News | mvdwoord's comments

Would it be reasonable to say that BMI is a poor measure at the individual level, especially at values close to the ideal range, but at the same time a useful measure at population scale? As stated on the NHS Scotland website:

"BMI is used to categorise people’s weight. BMI charts are mainly used for working out the health of populations rather than individuals.

Within a population there will always be people who are at the extremes (have a high BMI or low BMI).

A high or low BMI may be an indicator of poor diet, varying activity levels, or high stress. Just because someone has a ‘normal BMI’ does not mean that they are healthy.

BMI doesn’t take account of body composition, for example, muscle, fat, bone density. Sex and other factors which can impact your weight can also lead to an inaccurate reading. As such a BMI calculation is not a suitable measure for some people including children and young people under 18, pregnant women and athletes."
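
For reference, the calculation itself uses nothing beyond weight and height (kilograms divided by metres squared), which is exactly why body composition never enters into it. A minimal sketch, using the commonly quoted adult thresholds:

    def bmi(weight_kg: float, height_m: float) -> float:
        # Body mass index: weight divided by height squared.
        return weight_kg / height_m ** 2

    def category(value: float) -> str:
        # Commonly quoted adult cut-offs; not meaningful for children,
        # pregnant women or athletes, as the quote above notes.
        if value < 18.5:
            return "underweight"
        if value < 25:
            return "normal"
        if value < 30:
            return "overweight"
        return "obese"

    print(round(bmi(80, 1.80), 1), category(bmi(80, 1.80)))  # 24.7 normal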


It's only useful at scale if people at scale are unhealthy.

If the norm were to be like the OP, then BMI would not be useful at scale.

Given that you need to know the outcome to determine if the measure is valid, it rather defeats the purpose of using the measure at all.


Not really; it's one variable amongst many. If the general population also has a poor diet, we can assume a high BMI is not because everyone has an exemplary physique.

Many such offerings around, it seems. What I am looking for, however, is a tool I can use and script against. My use case is to produce (relatively straightforward) DB diagrams from some model descriptions I have (part of another process). I have table names, column names, and relationships in an in-memory structure, and want to draw an ER-like diagram. Currently I am looking at producing this with PlantUML: generating the .puml file from my data, then running PlantUML to generate the PNG/SVG.
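
To make that concrete, here is a minimal sketch of what I have in mind, assuming a made-up dict-based model (the table and column names are purely illustrative) and a plantuml executable on the PATH:

    import subprocess

    # Hypothetical in-memory model: table -> columns, plus (child, parent) FK relations.
    tables = {
        "customer": ["id", "name", "email"],
        "orders": ["id", "customer_id", "placed_at"],
    }
    relations = [("orders", "customer")]

    lines = ["@startuml"]
    for table, columns in tables.items():
        lines.append(f"entity {table} {{")
        lines.extend(f"  {column}" for column in columns)
        lines.append("}")
    for child, parent in relations:
        # Crow's-foot: many child rows reference exactly one parent row.
        lines.append(f"{child} }}o--|| {parent}")
    lines.append("@enduml")

    with open("schema.puml", "w") as f:
        f.write("\n".join(lines) + "\n")

    # Requires plantuml on the PATH; -tsvg writes schema.svg next to the .puml file.
    subprocess.run(["plantuml", "-tsvg", "schema.puml"], check=True)

The }o--|| arrow is PlantUML's crow's-foot notation; swapping -tsvg for -tpng produces a PNG instead.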

Looking around, I find most tools in this corner are either full-fledged DB design tools with their own editor but no API, or, like this one and dbdiagram.io, online-only, which is not an option for me.

Any suggestions greatly appreciated...


Have you tried schemaspy?


Looking at it, it seems to be geared towards connecting to an existing database. My need is to generate a DB-style schema image from a custom in-memory representation of tables, columns, and relations. Thanks for the tip though; it does seem like a useful tool, akin to PlantUML in that it functions somewhat as a Graphviz preprocessor, if I understand correctly.


Don't have my work laptop at hand, but I'm quite sure that you can manually order regular folders as well as favorites by simple drag and drop. In our current version at least.


It is indeed possible to reorder the regular folders as well. It's just not a favorites-specific feature.


It also comes across as a relatively useless metric. Even if the claim is right, which it likely is as far as I'm concerned, the statement says nothing about how much it deteriorates health (as opposed to improving it), and besides that, for most people optimised health is hardly the goal of life.

To what degree can we even measure second or nth order effects of human behavior? This just seems like debating a perfectly spherical cow.


I think there's some context around this. It used to be that every once in a while there would be a study or news story about how moderate drinking would actually improve health ("Drinking may be good for you!").

That has largely been debunked, though.


But do these studies incorporate the context and social effects of having an occasional drink, or just the alcohol itself? I’m guessing the latter.

It wouldn’t be surprising if an occasional glass of wine with friends leads to less stress and ultimately better health outcomes vs. avoiding the social occasion entirely.


Yeah, that's my line of thinking as well. All these isolated metrics on health are only useful to a degree imho.


Hi there neighbor!

Ended up there for more or less the same reason. Shame there aren't more hosted BSD options around.


The solution is cash.


I own a couple of Sonos speakers, and the idea is still great, imho. However, the horrendous way they treat the app, the dark patterns, the overall enshittification... I consider my speakers a write-off and occasionally use one or two of them as a makeshift internet radio. Other than that, I sincerely hope the company burns to the ground.


I tried this, but unfortunately you do not gain any vertical space, as the tab bar is still in use as the window title bar (or whatever the proper name for that is). The vertical tab bar is also fixed-width (!), and to top it off, after testing Nightly for an hour, I closed and reopened the browser and my pinned tabs were gone.

I will wait until they have figured this out properly. I know it's "Nightly" and "Labs", so this is not a complaint per se, just an observation.


Not an opinion on the topic, but technically they could check for copyright issues before inserting content into the training data? How that would work at those volumes, or whether it is feasible at all, I have no clue. Have they tried? I doubt it.


They could never economically achieve the required volume of training data from licensed data alone. The model requires plagiarizing the Internet.


What if they did a "clean-room approach" of sorts? One non-public LLM is trained on the copyrighted material and writes its own articles about the content. Then a public LLM is trained on those articles. It appears to work for people-based processes.


The problem with that approach is that any biases the original model has will be amplified during the training process, and cause it to have issues.


> It appears to work for people-based processes.

Machines aren't people, and workarounds based on the affordances given to people should not work for machines, because machines should not be given those affordances.


I see no reason to care what affordances someone gives to an (artificial) machine that I don't own.


There are two conditions for AI working at all.

1) Access to all the internet's data without restriction, including all books and research papers.

2) The model isn't controlled and owned by a for-profit company, it has to be an intra-country initiative run for the benefit of mankind.

Anything else is a wild misuse and bastardisation of AI. We're already seeing all the issues inherent in our current approaches.


There are some methods they could employ, but I don't see any of them being remotely foolproof. E.g. they could ask the NYT to provide a list of content hashes to check against, or the content itself to match for similarity, but it's really easy for plagiarisers to use existing open-source LLMs to slightly and automatically modify content in different places. And they would be incentivised to do so as well.
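
As a toy illustration of why exact hash checks are so brittle (this normalise-then-SHA-256 scheme is my own strawman, not anything the parties actually use): a single substituted word yields a completely different digest.

    import hashlib

    def content_hash(text: str) -> str:
        # Naive normalisation: lowercase and collapse whitespace before hashing.
        normalised = " ".join(text.lower().split())
        return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

    original = "The quick brown fox jumps over the lazy dog."
    paraphrase = "The quick brown fox leaps over the lazy dog."

    denylist = {content_hash(original)}
    print(content_hash(original) in denylist)    # True  -> exact copy is caught
    print(content_hash(paraphrase) in denylist)  # False -> light rewrite slips through

Near-duplicate techniques (shingling, MinHash and the like) close some of that gap, but a thorough LLM rewrite changes enough of the underlying n-grams that those degrade as well.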

I don't see right now how this battle could be won in theory.

There's going to be bunch of content farms, plagiarisers using their open source LLM automation tools, which essentially launder the information from other sources. Even if you denylist them, it's arbitrary for them to spawn further instances.

Eventually the only way would be not to use the Internet at all and only use allowlisted sources. But that seems quite ridiculous to me.


I found the texts and workbooks for my mathematics courses at the Open University in the Netherlands absolutely fantastic. They were created and supervised by a famous (educational) mathematician in the Netherlands, Jan van de Craats.

The material was designed for self-study and is the absolute best I have ever worked through. Perhaps the material from other, similar institutes is of similar quality?

https://nl.m.wikipedia.org/wiki/Jan_van_de_Craats

