rydgel's comments

Anyone know where to find that Blade Runner wallpaper?


This one? https://www.reddit.com/r/wallpapers/comments/ao25bk/blade_ru...

It was a simple google search away...


Thanks


it is


Because it's PHP I guess


It's not well known or advertised, but it can be used with other languages and frameworks too (a rough sketch of what an adapter actually does is below the list):

- Rails

- AdonisJs

- ASP.NET Core

- CakePHP

- CanJS

- Clojure

- CodeIgniter4

- ColdBox

- Django

- Go

- Masonite

- Mithril.js

- Node.js

- Phoenix

- PSR-15

- Statamic

- Symfony

- WordPress

- Yii2

- Flask

source: https://inertiajs.com/community-adapters
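
Under the hood it's just a small HTTP protocol, which is why adapters exist for all of these. Here's a very rough sketch of what an adapter does, using Flask as the example; the render_inertia helper is made up for illustration, real adapters wrap this for you:

    import json
    from flask import Flask, request, jsonify, make_response

    app = Flask(__name__)
    ASSET_VERSION = "1"  # bump when your compiled assets change

    def render_inertia(component, props):
        # The Inertia "page object": component name, props, current URL, asset version
        page = {"component": component, "props": props,
                "url": request.path, "version": ASSET_VERSION}
        if request.headers.get("X-Inertia"):
            # Subsequent visit: return the page object as JSON
            resp = make_response(jsonify(page))
            resp.headers["X-Inertia"] = "true"
            resp.headers["Vary"] = "X-Inertia"
            return resp
        # First visit: return a full HTML document with the page object embedded
        return f"<div id='app' data-page='{json.dumps(page)}'></div>"

    @app.route("/users")
    def users():
        return render_inertia("Users/Index", {"users": [{"id": 1, "name": "Ada"}]})

The client-side adapter (Vue/React/Svelte) reads that JSON and swaps the page component, so the server side can be anything that speaks HTTP.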


I remember making Flappy Bird in Haskell with FRP. It was a very different but pleasant experience.


That's called a shadow.


The author complains that Arch doesn't ship a properly tested kernel, citing the Intel flickering issue, but Fedora had the exact same issue. That's where LTS kernels can be useful.


I'm glad we have a self-hosted Gitlab. Sure, you need to do a bit of setup and configuration, but it's worth it in the long run.


It's a bit of a weird thing, as the whole world tries (or tried) to move away from on-prem stuff. Like Jira stopping support for non-cloud versions, etc.


Atlassian was probably sick of people never upgrading their old installations and getting hacked for it, and people didn't upgrade because it's quite a hassle in the first place, not to mention plugins breaking all the time.

Oh, and because cloud forces continuous payment, whereas previously many customers simply bought a one-year license and went on without renewing support.


We've been running a self-hosted Gitlab Premium since 2019. The only two issues in the last 3 years were artifacts not being deleted (causing nightly backups to balloon to 500 GB; it will be fixed in the next version) and some out-of-date apt certs that needed fixing before "apt update" would run. Otherwise, I update Gitlab every month without problems.


Gitlab is a breeze to upgrade when using the Docker distribution. Swap the version number in Kubernetes or the systemd unit file (if you're using naked Docker), restart the service, that's it...

Atlassian's Docker images are similarly easy to use, but with everything Atlassian you have a veritable ecosystem of plugins, almost none of which are open source, so you are out of luck if there are incompatibilities.


Doesn't it cause more headaches?

I've had experiences where devops-minded people I've worked with have wanted to self-host services such as Bitwarden. Sure, it will be cheaper and you will have no one to blame but yourself, but once things go bad they go really bad. It's also another thing to keep eyes on.

I guess a similar argument could also be extended to self-hosted clouds. It seems like it could take a lot of focus and energy away from working on the product itself.


> Doesn't it cause more headaches?

No. You update it on patch day (or when a big CVE comes out that you are actually impacted by) and know exactly what goes wrong and when. If you can't solve it, you roll back. When Github (or a part of it) goes down, you know nothing, and with persistent issues there's no way to solve them either.

A company of the size where "but we have to scale" is an actual issue should self-host. A SaaS solution is a risk that you cannot mitigate.


A lot of outages at large web services (such as GitHub, Azure Active Directory, Slack, etc.) are caused purely by those services having to scale to the entire world, with all the complexity and moving parts that entails.

Self-hosting inherently mitigates that problem because you now need to support less than 0.1% of the load of the worldwide service.

It also puts you in control of maintenance and updates: you can choose to make changes outside of business hours so that nobody is affected if you screw up. Developers at SaaS services can't easily do that, because it's always business hours in some part of the world, and they may not be motivated to do it anyway even if it were possible with some effort.


Both Spotify and Discord are not on AWS


Yet they're experiencing outages at the same time. If they don't share a platform then that actually makes it look less like a coincidence I'd say, no? (still not saying it means anything, maybe they have some joint dependencies that are down for completely unrelated reasons)


People are falsely reporting it; just look at how many comments on the Downdetector Cloudflare page are actually about Discord/Spotify.


Why would you consider microservices if you are only 12 developers?


Because the team might already be comfortable working in that way? Because certain parts of the application might require specialised implementations and very natural lines of separation fall out?

I'm in a team of 4, and the few APIs we expose would be considered microservices. We did that because it was the easiest and fastest way for us to build and maintain them, and the features we provide were all quite distinct.


Most of the conversation so far has focused on the development benefits of microservices (decoupling deployments, less coordination between teams, etc). Small teams don't really have this problem, but there are other benefits to microservices. One of the biggest is scaling heterogeneous compute resources.

Suppose, for example, your webapp backend has to do some very expensive ML GPU processing for 1% of your incoming traffic. If you deploy your backend as a monolith, every single one of your backend nodes has to be an expensive GPU node, and as your normal traffic increases, you have to scale using GPU nodes regardless of whether you actually need more GPU compute power for your ML traffic.

If you instead deploy the ML logic as a separate service, it can be hosted on GPU nodes, while the rest of your logic is hosted on much cheaper regular compute nodes, and both can be scaled separately.
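
Concretely, the split can be as simple as the monolith forwarding that 1% of requests to an internal HTTP service running on the GPU pool. A rough sketch (the service name and URL here are made up):

    import json
    import urllib.request

    ML_SERVICE_URL = "http://ml-inference.internal:8000/predict"  # hypothetical GPU-backed service

    def handle_request(payload):
        if not payload.get("needs_ml"):
            # ~99% of traffic: handled entirely on cheap general-purpose nodes
            return {"result": do_regular_work(payload)}
        # ~1% of traffic: delegate to the GPU service, which scales independently
        req = urllib.request.Request(
            ML_SERVICE_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            return json.loads(resp.read())

    def do_regular_work(payload):
        return "ok"  # placeholder for the normal request handling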

Availability is another good example. Suppose you have some API endpoints that are far more compute-intensive than the rest of your app but also less essential. If you deploy these as a separate service, a traffic surge to the expensive endpoints will slow them down due to resource starvation (at least until autoscaling catches up), but the rest of your app will be unaffected.


So sure, maybe make the ML model a separate service, but you don't really have the same driver for other services; stateless server processes tend to need the same type of resources, only in different amounts, and you don't really gain anything by splitting your workload based on the words used in your domain description.

Real-world monoliths often do have some supporting services or partner services that they interact with. That doesn't mean you need a "micro-service architecture" in order to scale your workload.


Well, no. You can deploy the same monolith to two different clusters with different resource configurations.

In fact, this is what you usually do with "worker" nodes that do background jobs.

And you can always have feature flags/environment variables to disable everything you don't need in a given cluster.
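
For example, the same image everywhere, with a couple of env vars deciding which parts actually start (flag names made up for illustration):

    import os

    RUN_WEB = os.environ.get("RUN_WEB", "true") == "true"
    RUN_WORKER = os.environ.get("RUN_WORKER", "false") == "true"

    def main():
        if RUN_WORKER:
            start_background_workers()  # only enabled on the "worker" cluster
        if RUN_WEB:
            start_http_server()         # only enabled on the "web" cluster

    def start_background_workers():
        print("worker loop started")

    def start_http_server():
        print("http server started")

    if __name__ == "__main__":
        main()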


I'm not saying you have to use microservices to solve these problems, just that they are potential reasons why you might want to, even with a small team of developers. I would also argue that if you're deploying the same codebase in two different places and having it execute completely different code paths, you're effectively running two separate services. Whether or not you decide to deploy a bunch of dead code (i.e. the parts of your monolith that belong to the "service" running on the other cluster) along with them doesn't change how they logically interact.


It can sometimes make sense even at that size, for example if the team spans different geographical locations and/or time zones.

And that, I think, is how you should approach microservices: use them to solve an organizational problem, not a technical one.


It's weird to write this. What's the point? Seems like someone got really mad and needed to be a bit passive-aggressive.

