
This explains everything:

"It’s not practical. Hybrid work may be technically the best route, but it’s also complicated to oversee."

Who cares if workers are productive, when the leadership is clearly less productive? And the leadership's time is extremely valuable. If remote work makes workers more productive, while the leadership is less productive, then remote work is bad for a company, full stop, no other conversation is needed.


If leadership cannot adjust to, let alone take advantage of, remote work in 2024, they're not good leaders.


If workers cannot do what the leadership needs, then they are bad workers and need to be fired.


How would that be possible? Novelty is a known weakness of the LLMs and ideally the only things published in peer-reviewed journals are novel.


Detecting images and data that's reused in different places has nothing to do with novelty.


Wouldn’t it be cool if people got credit for reproducing other people’s work instead of only novel things? It’s like having someone on your team who loves maintaining but not feature building.


Completely irrelevant. Humans cannot survive in the conditions of the Cambrian. Merely going back to the Cambrian is enough to ensure our extinction. No one cares that the CO2 levels were higher then. What matters is that we will all be dead if we go anywhere near those levels.


"The way we discover interesting websites needs innovating, why not let anyone contribute to any webpage?"

I remember there was a website that did this in 1999, using frames to allow people to post comments on any website. The courts shot this down as an illegal infringement of trademark. Does anyone remember the name of that website that did this?


Since the crisis of 2008 the USA has had several trillion dollars in stimulus spending. Very roughly speaking, Europe pursued policies of austerity, whereas the USA followed a more Keynesian route of big spending. On this matter, the Keynesians have been vindicated. In 2007 the GDP of the EU was slightly larger than the USA, but now it has fallen far behind: $16 trillion ($18 trillion if it still had the UK) versus $25 trillion for the USA. And the most important policy difference since 2008 has been large stimulus spending in the USA, versus relative austerity in Europe. More recently, President Biden was able to push through some big infrastructure bills, which should power the USA through the 2020s. (There are some qualifiers to be added about weaknesses in the USA system of funding and allowing construction, in particular the aggressive system of "substantive due process" that allows for any project to be stalled by lawsuits, but despite that, the USA has done better than Europe.)


Most of the difference in GDP was because the dollar was a lot cheaper compared to the euro before the recession. Also, the EU has just stagnated since 1990, and it is losing global share of GDP every year.


"Most of the difference in GDP was because Dollar was a lot cheaper compared to Euro pre Recession."

Yes, but why? There was a theory that massive stimulus spending would weaken the dollar because of the additional debt, but instead, the opposite happened. The USA did more stimulus spending and the dollar became stronger. Many people were surprised by this. Those people need to revisit their initial assumptions.


"hardware startups that have to raise a lot of money before a Series A are... considerably riskier, no?"

Why riskier? They have more of a moat, don't they? The large capitalization needed suggests they will face less competition: it's more difficult for competitors to gather the necessary capital to compete against them.

What I've written here was the conventional wisdom for most of 200 years. Much of the Industrial Revolution played out when merely concentrating capital was seen as an engine of growth. Rockefeller did not need to sell innovative gasoline, he only needed to use cash flow to buy up monopoly positions, one small region at a time, until he had the cash flow to buy up monopoly positions nationwide. Economies of scale meant that merely concentrating capital was a path to greater profitability.

The last 25 years were an aberration, during which time big companies could be built with small initial investments. But over the course of centuries, the opposite was more common.


Is the moat that good for HW? The more commodified your product is, the more you risk losing to undifferentiated foreign competitors who have lower input costs.


And hardware marches on while you are trying to get your widget manufactured and shipped.

I was at a hardware startup in 2013/14. We had our own board design that was very similar to the Raspberry Pi + Arduino that we'd prototyped with (we ended up using an i.MX233 and an ATmega328). While we were debugging a manufacturing fault (out-of-spec LED controllers), Espressif released the SDK for their at-the-time practically unknown ESP8266 - which meant 90% of our functionality could now be done with a BOM of around $15 instead of the $90 or so ours was costing us.

Our "moat" had been concreted over while we were pulling our hair out trying to ship in time for xmas. (And the business died in arguments, recriminations, fingerpointing, and a lack of ability to find investment to pay for a 2nd production run. And I needed up with another piece of paper saying I had a percentage or two of ownership in something now worth zero dollars...)


I was doing firmware in IoT/M2M for an engineering services company back then, and it was the tail end of "slap a radio on [common product]" projects for that company. I saw a few projects end with the arguments and fingerpointing, with an inevitable laser focus on what the SOW (statement of work) actually said, and it always sucked to end that way.


> But over the course of centuries, the opposite was more common.

I believe capital investors, like parents, invest for the shorter term.


On a slightly different topic, we now know that a surprisingly large number of people suffer daytime "hallucinations without delusion" meaning they see things but they know the things are not real. And I've often wondered if that is some kind of dysfunction of the sleep cycle. If we think of dreams as a kind of hallucination, then is daytime "hallucination without delusion" a kind of dreaming while awake? I'd like to see more research on that question.


I agree with the use of Hetzner. I use them: very cheap machines, very powerful, and very simple and straightforward. People need to think carefully about the possible expense and complexity of AWS.


The problem with the title is simply that the English language allows open compound words. I know many Germans wonder why we allow this. Germans push the words together, for clarity. I've suggested that we use hyphens. Hyphens feel natural in English, and could remove the ambiguity that exists whenever we use open compound words (that is, open-compound-words). In this case "not-comments" would have added clarity.

Likewise, the title "World's longest DJ set" was confusing, because most people will assume that the compound word is "DJ-set". But if you read the whole article, then you realize that a python snake fell on the mixing board and accidentally mixed some tunes. So the compound word was actually "longest-DJ" -- a 2.5 meter python.

We should all consider using hyphens for all compound words.

https://x.com/krubner/status/1828155852773113942


I think the "Longest DJ Set" was intentional wordplay.


As was the title of this article.


>I've suggested that we use hyphens. Hyphens feel natural in English, and could remove the ambiguity that exists whenever we use open compound words

We do use hyphens in English. Well, some of the time, and some of us. I could be wrong, but given my age and my readings of older texts, I feel that this use of hyphens to avoid ambiguity was much more common decades ago and has since become less common.


As always, there's a relevant xkcd: https://xkcd.com/37/


"longest DJ" isn't a compound, it's a noun modified by an adjective, and as such it would be written as two different words in German as well ("längster DJ").


Github won in part because git won. And git won because, for complex sociological factors, the software engineers were able to argue that their needs were more important than the needs of other parts of the companies for which they worked.

For a counter-point (which I've made many times before): from 2005 to 2012 we used Subversion. The important thing about Subversion was that it was fun to use, and it was simple, so everyone in the organization enjoyed using it: the graphic designers, the product visionaries, the financial controllers, the operations people, the artists and musicians, the CEO, the CMO, etc. And we threw everything into Subversion: docs about marketing, rough drafts of advertising copy, new artwork, new design ideas, todo lists, software code, etc.

The whole company lived in Subversion and Subversion unified every part of the company. Indeed, many products that grew up later, after 2010 and especially after 2014, grew up because companies turned away from Subversion. Google Sheets became a common way to share spreadsheets, but Google Sheets wasn't necessary back when all spreadsheets lived in Subversion and everyone in the company used Subversion. Likewise, Google Docs. Likewise some design tools. Arguably stuff like Miro would now have a smaller market niche if companies still used Subversion.

At some point between 2008 and 2015 most companies switched over to git. The thing about git is that it is complex and therefore only software engineers can use it. Using git shattered the idea of having a central version control for everything in the company.

Software engineers made several arguments in favor of git.

A somewhat silly argument was that software developers, at corporations, needed the ability to do decentralized development. I'm sure this actually happens somewhere, but I have not seen it. At every company that I've worked, the code is as centralized as it was when we used Subversion.

A stronger argument in favor of git was that branches were expensive in Subversion but cheap in git. I believe this is the main reason that software developers preferred git over Subversion. For my part, during the years that we used Subversion, we almost never used branches; mostly we just developed separate code and then merged it back to main. Our devops guy typically ran 20 or 30 test servers for us, so we could test our changes on some machine that we "owned". For work that would take several weeks, before being merged back to main, we sometimes did set up a branch, and other times we created a new Subversion repo. Starting new repos was reasonably cheap and easy with Subversion, so that was one way to go when some work would take weeks or months of effort. But as ever, with any version control system, merge conflicts become more serious the longer you are away from the main branch, so we tried to avoid the kind of side projects that would take several weeks. Instead, we thought carefully about how to do such work in smaller batches, or how to spin off the work into a separate app, with its own repo.

A few times we had a side project that lasted several months and so we would save it (every day, once a day) to the main branch in Subversion, just to have it in Subversion, and then we would immediately save the "real" main branch as the next version, so it was as if the main branch was still the same main branch as before, unchanged, but in-between versions 984 and 986 there was a version 985 that had the other project that was being worked on. This also worked for us perfectly well.
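For anyone who wants to picture the mechanics of that trick, here is a minimal sketch, written as a small Python script that shells out to the svn command-line client (assuming an up-to-date working copy of the main branch). The revision number 985 comes from the example above, and the commit messages are placeholders; this is one way the trick could be scripted, not necessarily what we actually ran.

    import subprocess

    def run(*cmd):
        # Print and execute a single command, stopping on any failure.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Commit the long-running side project so it is preserved in history
    #    (in the story above, this becomes version 985).
    run("svn", "commit", "-m", "snapshot: side project, not part of the main line")

    # 2. Reverse-merge that commit in the working copy, restoring the files
    #    to what the main branch looked like one version earlier (984).
    run("svn", "update")
    run("svn", "merge", "-c", "-985", ".")

    # 3. Commit the restored main branch (version 986 in the story), so the
    #    main line reads as unchanged while the snapshot stays on record.
    run("svn", "commit", "-m", "restore main line after side-project snapshot")

The reverse merge in step 2 is what makes version 986 look identical to 984: the main line is effectively unchanged, but the side project sits in between as version 985.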

The point is that the system worked reasonably well, and we built fairly complex software. We also deployed changes several times a day, something which is still rare now, in 2024, at most companies, despite extravagant investments in complex devops setups. I read a study last week that suggested only 18% of companies could deploy multiple times a day. But we were doing that back in 2009.

The non-technical people, the artists and product visionaries and CFOs and CMOs, would often use folders when they wanted to track variations of an idea. That was one of the advantages of having something as simple as Subversion: the whole team could work with idioms that they understood. Folders will always be popular with non-technical people.

But software developers preferred git, and they made the argument that they needed cheap branches, needed to run the software in whole, locally on their machines, with multiple variations and easy switching between branches, and needed a smooth path through the CI/CD tools towards deployment to production.

I have two criticisms of this argument:

1. Software developers never took seriously how much they were damaging the companies they worked for when they ended the era of unified version control.

2. When using Subversion, we still had reasonably good systems for deployment. For a while we used Capistrano scripts, and later (after 2010) I wrote some custom deployment code in Jenkins. The whole system could be made to work and it was much simpler than most CI/CD systems that I see now. Simplicity had benefits. In particular, it was possible to hire a junior-level engineer and within 6 months have them understand the whole system. That is no longer possible, as devops has become complex and has evolved into its own specialty. And while there are certainly a few large companies that need the complexity of modern devops, I've seen very few such cases myself. I mostly see just the opposite: small startups that get overwhelmed with the cost of implementing modern devops "best practices", small startups that would benefit if they went back to the simplicity that we had 10 to 15 years ago.

