LXC vs. Docker (earthly.dev)
217 points by lycopodiopsida on Feb 18, 2022 | 143 comments



LXC via Proxmox is great for stateful deployments on bare-metal servers. It's very easy to back up entire containers with their state (SQLite, Postgres dir) to e.g. a NAS (and with TrueNAS then on to S3/B2). Best used with a ZFS RAID; with quotas and lazy space allocation, backups stay small or capped.

Nothing stops one from running Docker inside LXC. For development I usually just make a dedicated privileged LXC container with nesting enabled to avoid some known issues and painful config. LXC containers can sit on a private network, and a reverse proxy on the host can map only the required ports, so you don't have to worry about which ports Docker (or you yourself) might have accidentally made public.
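
On Proxmox, enabling that for an existing container is roughly just (the container ID is a placeholder, and this is from memory):

    pct set 101 --features nesting=1,keyctl=1   # keyctl is often needed by Docker too
    pct stop 101 && pct start 101               # restart for the features to apply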


We do something similar with btrfs as the filesystem. There have been some issues with btrfs itself, but the LXC side of this has worked pretty well. Any significant storage (such as project directories) is done with a bind mount into the container, so that it is easy to separately snapshot the data or have multiple LXC containers on the same host access the same stuff. That was more important when we were going to run separate LXC containers for NFS and Samba fileservers, but we ended up combining those services into the same container.


Good comment. It was a revelation to me when I used Proxmox and played with LXCs. Getting an IP per container is really nice.


It's annoying that you can only take snapshots of a stopped container. With VMs it works while the VM is running.


That highly depends on the underlying storage. If it is something that supports snapshots (ZFS, Ceph, LVM thin) then it should work fine, also backups will be possible without any downtime as they will be read from a temporary snapshot.
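
E.g. on Proxmox with ZFS/LVM-thin backed container storage, something like this works (the ID and storage name are placeholders):

    pct snapshot 101 pre-upgrade               # instant snapshot of a running CT
    vzdump 101 --mode snapshot --storage nas   # backup taken from a temporary snapshot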


Even with ZFS you still have to wait for RAM to dump, don't you? And it will freeze at least for the duration of the dump write. Do they have CoW for container memory?

But even if they had, the RAM snapshot still needs to be written, and without freezing the container. I would appreciate an option where I could ignore everything that was not fsynced, e.g. the Postgres use case. In that case a normal ZFS snapshot should be enough.


RAM and other state can be part of a snapshot for VMs, in which case the VM will continue right where it was.

The state of a container is not part of the snapshot (just checked), as it is really hard to capture the state of the container (CPU, network, all kinds of file and socket handles) and restore it, because all an LXC container is, is local processes in their separate cgroups. This is also the reason why a live migration is not really possible right now, as all of that would need to be cut out of the current host machine and restored on the target machine.

This is much easier for VMs as Qemu offers a nice abstraction layer.


Also can't do live migrations or backups and moving storage around is a headache.

We've pretty much stopped using LXC containers in Proxmox because of all the little issues.


Dumb question, but what have you replaced it with? I’ve been working on setting up Proxmox/LXC on a box at home as we speak — but if it’s not worth the headache I’ll stick to VMs.


I'm not who you asked but I use them for different purposes. There are some quirks here and there. Never anything that won't run, but it could potentially add extra troubleshooting.

Things that got me recently, just off the top of my head:

- htop sees memory usage incorrectly

- Docker tries to use overlay2 on ZFS which fails (I think, I needed to create and mount an ext4 volume for reasons)

- Hashicorp Vault needed disable_mlock because I believe LXC blocks the syscall.

On the other hand, I like my Samba file server in a container, because it is much easier to share storage from the host into LXC than into a VM.

LXC and VMs both have pros and cons.


Weird, I just tested on my proxmox instance and I was able to create a snapshot of a running container (PVE 7.1-10)


Yeah, I haven't got these issues either.


> LXC via Proxmox is great for stateful deployments on baremetal

Reminds me of (now defunct?) flockport.com

They had some interesting demos up on YouTube, showcasing what looked like a sandstorm.io esque setup.


You've basically described my homelab setup here.

Proxmox, a few LXC, each with their own containerisation running.


Apples to oranges.

LXC can be directly compared with a small, and quite insignificant, part of Docker: container runtime. Docker became popular not because it can run containers, many tools before Docker could do that (LXC included).

Docker became popular because it allows one to build, publish and then consume containers.


I would say docker's killer feature is the Dockerfile. It makes it understandable, reproducible and available to a broad range of people.

At least they mentioned it in their apple:oranges comparison.

there's also the global namespace thing. "FROM ubuntu:18.04" is pretty powerful.

I run a proxmox server with LXC but if I could use a dockerfile or equivalent, my containers would be much much more organized. I wouldn't like to pull images from the internet however.


Hashicorp Packer is a nice way to declare container builds as code. The containers are reproducible as far as I can tell. The biggest advantage with Packer though, is that you can create several different types (dozens) of container and VM images (docker/OCI, LXC/LXD, qcow2, AWS, Google cloud, ...) from a single declaration.


Understandable? Yes.

Reproducible? No. Most Dockerfiles are incredibly unreproducible.


I think it's more reproducible like a recipe.

Yes, baking doesn't work the same at sea level vs., say, Denver, Colorado at one mile up, but you can pretty much share your recipe with friends and save them lots of time.


Most are not, that's true.

It is possible to get to reproducibility though, by involving lots of checksums. For example you can use the checksum of a base image in your initial FROM line. You can download libraries as source code and install them. You can use package managers with lock files to get to reproducible environments of the language you are using. You can check checksums of other downloaded files. It takes work, but sometimes it is worth it.
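
A rough sketch of what that looks like (the digest and checksum below are placeholders, not real values):

    FROM ubuntu:20.04@sha256:<base-image-digest>
    ADD https://example.com/tool-1.2.3.tar.gz /tmp/tool.tar.gz
    # fail the build if the download doesn't match the expected checksum
    RUN echo "<expected-sha256>  /tmp/tool.tar.gz" | sha256sum -c - \
     && tar -xzf /tmp/tool.tar.gz -C /opt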


That work is hard and the tooling is bad and it only takes one mistake, like forgetting to suppress the mtime of your files.

Docker was good enough, simple enough, and got the job done. It won for good reason, but the next generation should learn from its mistakes.


> Docker became popular because it allows one to build, publish and then consume containers.

True, but Docker is an awful choice for those things (builds are performed "inside out" and aren't reproducible, publishing produces unauditable binary-blobs, consumption bypasses cryptographic security by fetching "latest" tags, etc.)


It's still superior to everything that came before it (at least, that I'm aware of), and cleared the "good enough" bar. Actually, I'm still not aware of anything that solves those issues without making it way harder to use - ex. nix addresses your points but has an awful learning curve.


> builds are performed "inside out"

Docker supports multi-stage builds. They are quite powerful and allow you go beyond the "inside out" model (which still works fine for many use cases).

> ...and aren't reproducible

You can have reproducible builds with Docker. But Docker does not require your build to be reproducible. This allowed it to be widely adopted, because it meets users where they are. You can switch your imperfect build to Docker now, and gradually improve it over time.

This is a pragmatic approach which in the long run improves the state of the art more than a purist approach.


Can you expand on how builds aren't reproducible?

I thought Dockerfile ensured that builds are indeed reproducible?


Running the same script every time doesn't necessarily guarantee the same result.

Lots of docker build scripts have the equivalent of

  date > file.txt
or

  curl 'https://www.random.org/integers/?num=1&min=1&max=1000&col=1&base=10&format=plain&rnd=new' > file.txt
Buried deep somewhere in the code.

But yeah I don't see any reason why you couldn't theoretically make a reproducible build with Docker.


Docker images actually contain the timestamp each layer was built at, so are basically de-facto non-reproducible.

Buildah from Red Hat has an argument to set this programmatically instead of using the current date, but AFAIK there's no way to do that with plain old docker build.
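
If I remember the flag correctly, it's something like:

    # pin every layer/image timestamp to the epoch instead of "now"
    buildah bud --timestamp 0 -t myimage:1.0 .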


The nixpkgs workaround is to build images with build-date == 1970-01-01. Annoying but reproducible.


If you have dynamism like this in your build, doesn't that imply that no build system is reproducible?


My point was just that Docker doesn't ensure reproducibility. Whether a build is reproducible or not depends on the steps used, so it's possible but not "ensured".

If you wanted to guarantee reproducibility, a hypothetical purely-functional build system could do that. I personally don't think that's necessary though, or that it would be worth the requisite trade-offs.


Yes, but previous commenter said

> I thought Dockerfile ensured that builds are indeed reproducible?

The example straightforwardly disproves that.


Would that not be cached? It would not actually run that date command again, right? unless you change that date line itself, in the Dockerfile.


If you’re using a cached version, you haven’t really reproduced anything but the cache itself. Reproducibility means creating the same output given the same input, every time, regardless of some local cache.

That said, the above example isn’t specific to Docker.


Or if you change anything that comes before that line.


> builds are performed "inside out" and aren't reproducible

This is probably a good argument because of how hard it is to do anything in a reproducible manner, if you care even about timestamps and such matching up.

Yet I'd like to disagree that it's because of inherent flaws in Docker; it's merely how most people choose to build their software. Nobody wants to use their own Nexus instance as storage for a small set of audited dependencies, configure it to be the only source for all of them, build their own base images, seek out alternatives for all of the web-integrated build plugins, etc.

Most people just want to feed the machine a single Dockerfile (or the technology specific equivalent, e.g. pom.xml) and get something that works out and thus the concerns around reproducibility get neglected. Just look at how much effort the folks over at Debian have put into reproducibility: https://wiki.debian.org/ReproducibleBuilds

That said, a decent middle ground is to use a package cache for your app dependencies, pin specific versions of base images (or build your own on top of the common ones, e.g. a customized Alpine base image), and use multi-stage builds. With that you are probably 80% of the way there, since if need be you could just dump the image's file system and diff it against a known copy.

Nexus (some prefer Artifactory, some other solutions): https://www.sonatype.com/products/repository-oss

Multi stage builds: https://docs.docker.com/develop/develop-images/multistage-bu...

The remaining 20% might take a decade until reproducibility is as user-friendly as Docker currently is; just look at how slowly Nix is being adopted.
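
For illustration, a pinned multi-stage build looks roughly like this (the digests are placeholders):

    FROM golang:1.17-alpine@sha256:<digest> AS build
    WORKDIR /src
    COPY go.mod go.sum ./
    RUN go mod download              # dependency versions pinned by go.sum
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app .
    # --- runtime stage: only the built binary ends up in the final image ---
    FROM alpine:3.15@sha256:<digest>
    COPY --from=build /out/app /usr/local/bin/app
    ENTRYPOINT ["/usr/local/bin/app"]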

> publishing produces unauditable binary-blobs

It's just a file system that consists of a bunch of layers, isn't it? What prevents you from doing:

  docker run --name dump-test alpine:some-very-specific-version sh -c exit
  docker export -o alpine.tar dump-test
  docker rm dump-test
You get an archive that's the full file system of the container. Of course, you still need to check everything that's actually inside of it and where it came from (at least the image normally persists the information about how it was built), but to me it definitely seems doable.

> consumption bypasses cryptographic security by fetching "latest" tags

I'm not sure how security is bypassed if the user chooses to use whatever is the latest released version. That just seems like a bad default on Docker's part and a careless action on the user's part.

Actually, there's no reason why you should limit yourself to just using tags, since something like "my-image:2022-02-18" might be accidentally overwritten unless your repo specifically prevents it. If you want, you can actually run images by their hashes; for example, Harbor makes this easy by letting you copy those values from its UI, though you can also do so manually.

For example, let's say that we have two Dockerfiles:

  # testA.Dockerfile
  FROM alpine:some-very-specific-version
  RUN mkdir /test && echo "A" > /test/file
  CMD cat /test/file
  
  # testB.Dockerfile
  FROM alpine:some-very-specific-version
  RUN mkdir /test && echo "B" > /test/file
  CMD cat /test/file
If we use just tags to refer to the images, they can eventually be overwritten, which can be problematic:

  # Example of using version tags, possibly problematic
  docker build -t test-a -f testA.Dockerfile .
  docker run --rm test-a
  docker build -t test-a -f testB.Dockerfile .
  docker run --rm test-a
In the second case we get the "B" output even though the tag is "test-a" because of a typo, user error or something else. Yet, we can also use hashes:

  # Example of using hashes, more dependable
  docker build -t test-a -f testA.Dockerfile .
  docker image inspect test-a | grep "Id"
  docker run --rm "sha256:93ee9f8e3b373940e04411a370a909b586e2ef882eef937ca4d9e44083cece7c"
  docker build -t test-a -f testB.Dockerfile .
  docker image inspect test-a | grep "Id"
  docker run --rm "sha256:8dd9ba5f1544c327b55cbb75f314cea629cfb6bbfd563fe41f40e742e51348e2"
  docker build -t test-a -f testA.Dockerfile .
  docker image inspect test-a | grep "Id"
  docker run --rm "sha256:93ee9f8e3b373940e04411a370a909b586e2ef882eef937ca4d9e44083cece7c"
Here we see that if your underlying build is reproducible, then the resulting image hash for the same container will be stable. Furthermore, someone overwriting the test-a tag doesn't stop you from running the first correctly built image, because tags are just a convenience; you can still run the previous one.

Of course, that loops back to the reproducible build discussion if you care about hashes matching up, rather than just tags not being overwritten.
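
The same idea works when pulling from a registry instead of building locally (the digest is a placeholder):

    docker pull alpine@sha256:<manifest-digest>
    # or, in a Dockerfile:
    # FROM alpine@sha256:<manifest-digest>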


Re: reproducibility, you're right; yet pretty much all of those arguments can be applied to much simpler, well-established technologies; even something like Make (awful syntax notwithstanding!).

IMHO, the real 'trick' with Docker isn't really container runtimes, images, layers, etc. It's the willingness to avoid system dependencies in favour of doing everything inside a "container image" (AKA .tar.gz). That would seem crazy to a Makefile writer in the 80s, but once we become willing to do this, the actual technique can be implemented using something like Make (with appropriate use of `./configure --prefix` arguments, 'export PATH=...' commands, etc.).

Sure it would be leaky, inefficient, etc. but as you say, the majority of devs wouldn't mind (just like with Docker).

To be clear, I'm not advocating anyone actually try doing this with Make, or whatever. I'm just pointing out that many of the claimed advantages of containers (in general) and Docker (in particular), like being sort-of isolated, or sort-of cross-platform, etc. are actually completely orthogonal to the underlying technology. Instead, those advantages come from the way they tend to be used.

Unfortunately, some of the downsides also come from the way they tend to be used (e.g. putting an entire OS inside a container, rather than just the intended binary + its deps; or using 'latest' tags instead of hashes)

> It's just a file system that consists of a bunch of layers, isn't it?

Exactly. Hence it's hard to check whether, for example, the bin/foo executable contains a patch for CVE-1234, or whatever.

Compare this to e.g. Maven .poms, Nix .drvs, etc. which tell us what went into any particular artifact.

> Actually, there's no reason why you should limit yourself to just using tags, since something like "my-image:2022-02-18" might be accidentally overwritten unless your repo specifically prevents this from being allowed. If you want, you can actually run images by their hashes

Indeed, this is actually a really nice thing about Docker (which has since been incorporated into OCI). However, the tragedy is that it tends to get bypassed in favour of tags, and more specifically just 'latest'.


I agree. Docker added version control and an online repository with management and search, and they also fixed some of the networking headaches in LXC. Prior to developing containerd, Docker was based on LXC containers.


The confusion is because LXD is more comparable (build/publish/consume) to Docker, but the command you use to run it is called "lxc", so some people call LXD "LXC".


Isn't that where LXD is supposed to fit in?

https://linuxcontainers.org/lxd/


LXD has done all of that for a long time; e.g. here's a tutorial from 2015: https://ubuntu.com/blog/publishing-lxd-images


I agree on the apples to oranges, but LXC does not directly compare to a container runtime IMHO. It is a proper engine to be fair, even if it provides much less functionality than the Docker engine.


More like Apples to Apple Core, but sure.


LXC has been so stable and great to work with for many years. I have had services in production on LXC containers and it has been a joy. I can not say the same about things I have tried to maintain in production with Docker, in which I had similar experiences to [0], albeit around that time and therefore arguably not recently.

For a fantastic way to work with LXC containers I recommend the free and open Debian based hypervisor distribution Proxmox [1].

[0], https://thehftguy.com/2016/11/01/docker-in-production-an-his...

[1], https://www.proxmox.com/en/proxmox-ve


LXD (Canonical's daemon/API front end to lxc containers) is great -- as long as you aren't using the god awful snap package they insist on. The snap is probably fine for single dev machines, but it has zero place in anything production. This is because canonical insists on auto-updating and refreshing the snap at random intervals, even when you pin to a specific version channel. Three times I had to manually recover a cluster of lxd systems that broke during a snap refresh because the cluster couldn't cope with the snaps all refreshing at once.

Going forward we built and installed lxd from source.


I got so annoyed with snapd that I finally patched the auto-update functionality to provide control via environment variable. It's ridiculous that this is what I have to personally go through in order to maintain control of when updates are applied on my own systems.

If enough people were to ever decide to get together and properly fork snapd and maintain the patched version I'd totally dedicate time to helping out.

https://gist.github.com/alyandon/97813f577fe906497495439c37d...


We blocked the annoying snapd autoupdate behavior by setting an http proxy to a nonexistent server. Whenever we had a maintenance window we would unset the proxy, allow the update, then set the nonexistent proxy server again.
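
Roughly like this (the address is just something unreachable):

    sudo snap set system proxy.http="http://203.0.113.1:3128"
    sudo snap set system proxy.https="http://203.0.113.1:3128"
    # during a maintenance window:
    sudo snap unset system proxy.http proxy.https
    sudo snap refresh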

Very annoying.


this feels both clever and stupid at the same time - not you but the software games you have to play.


There's a lot of wasted effort in the games we play to make software work in prod.


From being a production engineer for many years I can only agree: some of the brightest people of our generation spend most of their time compensating for the poor design choices of others. Not sure if this is better or worse than advertising.


That certainly works too but with my approach you can run "snap refresh" manually whenever you feel like updating.


Agreed, it's the stupidest thing around. Linux is supposed to be about having control of your own systems, and then they force updates down your throat; it goes against a lot of the things I use Linux for.

I just avoid snap and Ubuntu wherever possible now.


> If enough people were to ever decide to get together and properly fork snapd and maintain the patched version I'd totally dedicate time to helping out.

Is that the gist of flatpak?


Makes you wonder whether Canonical has any idea about operating servers. Auto-updating packages is the last thing you want. Doing that for a container engine, without building in some jitter to avoid the scenario you described is absolutely insane.

Who even uses snap in production? If I squint my eyes I can see the use for desktops, but why insist on it for server technologies as well?


Canonical would gladly hand back full control of updates if you pay them for an "enterprise edition" snap store. https://ubuntu.com/core/docs/store-overview#:~:text=Brand%20....


And even then, controlling the package versions is only one of the problems. The bigger problem, which isn't solved by this (as far as I can tell), is stopping the machines from updating automatically at all, given how the snap software works.


I had a huge argument in 2015 with a guy who wanted to move every one of our custom .deb packages (100+) to Snap, because they had talked with Canonical and it was going to be the future; Docker would be obsolete. The main argument was to make distribution easier to worker/headless/server machines. Not that Docker is a direct replacement, but Snap is an abomination. Snaps are mostly out of date, most of them require system privileges, they're unstable, and the way they mount the compressed rootfs makes startup very slow, even on a good machine.

That all being said, LXD is a great way to run non-ephemeral containers that behave more like a VM. Also check out Multipass, by Canonical, which makes spinning up Ubuntu VMs as easy as Docker.


pro tip: you can use lxc/lxd to run VMs under the same infrastructure.

I believe it is something as easy as

    lxc launch --vm ubuntu/20.04 myvm


What VM software does this make use of? KVM?


Yes. LXD uses Qemu/KVM for VMs.


Wow, cannot wait to try it! Thanks for the tip.


On my Ubuntu 20 server, I tried setting up microk8s with juju using LXD and my god the experience was horrendous. One bug after another after another after another after another. Then I upgraded my memory and somehow snap/LXD got perma stuck in an invalid state. The only solution was to wipe and purge everything related to snap/LXD.

After that I setup minikube with a Docker backend. It all worked instantly, perfectly aligned with my mental model, zero bugs, zero hassle. Canonical builds a great OS, but their Snap/VM org is... not competitive.


That's strange. I've had no issues using microk8s. Super easy to set up on Ubuntu 20.04.


Note that Canonical has special tools for installing and configuring that stuff (conjureup or something)


That's what I was using, and is largely how Juju is used


Not even kidding, a huge part of what made me move to Arch was that it's one of the few distros that packages LXD. Apparently it's a pain, but I'm forever grateful!


Alpine is another distro that packages LXD. I have Arch on my workstation, but I'm not confident about securing Arch on a server. Alpine, on the other hand, is very much suited to be an LXD host. It's tiny, runs entirely from RAM and can tolerate disk access failures. Modifications to host filesystem won't persist after reboot, unless the admin 'commits' them. The modifications can be reviewed before a commit - so it's easy to notice any malicious modifications. I also heard a rumor that they are integrating ostree.

The only gripe I have with Alpine is its installation experience. Like Arch, Alpine has a DIY type installation (but a completely different style). But unlike Arch, it isn't easy to properly install Alpine without a lot of trial and error. Alpine documentation felt like it neglects a lot of important edge cases that trip you up during installation. Arch wiki is excellent on that aspect - they are likely to cover every misstep or unexpected problem you may encounter during installation.


I did not know there were so few distros packaging LXD [0]. It does not come as a surprise that Canonical would not endorse non-snapped LXD though. And it is sad.

[0] https://repology.org/project/lxd/versions


openSUSE packages lxd


> This is because canonical insists on auto-updating and refreshing the snap at random intervals, even when you pin to a specific version channel.

You can control snap updates to match your maintenance windows, or just defer them. Documentation here: https://snapcraft.io/docs/keeping-snaps-up-to-date#heading--...

What you cannot do without patching is defer an update for more than 90 days. [Edit: well, you sort of can, by bypassing the store and "sideloading" instead: https://forum.snapcraft.io/t/disabling-automatic-refresh-for...]
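
Concretely, that's something along the lines of:

    # only refresh during a weekly window
    sudo snap set system refresh.timer=sun,02:00-04:00
    # or push refreshes out (up to the ~90 day limit)
    sudo snap set system refresh.hold="$(date --date='+60 days' +%Y-%m-%dT%H:%M:%S%:z)"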


Why would any sysadmin that maintains a fleet of servers with varying maintenance windows (that may or may not be dependent on the packages installed) want to feed snapd a list of dates and times to basically ask it pretty please not to auto-update snaps?

It is much easier to give sysadmins back the power they've always had to perform updates when it is reasonable to do so on their own schedule without having to inform snapd of what that schedule is.

I really don't appreciate being told by Canonical that I have to jump through additional hoops and spend additional time satisfying snapd to maintain the systems under my control.


My initial reaction was that having scheduled upgrades like that would be great, but then 5 seconds later I realized it's much better suited to cron, systemd timers, Ansible (etc.).

I think the reason for auto-updates like this is because selling the control back to us is the business plan. It's the same thing Microsoft does.


It looks like there has been considerable progress on packaging in Debian bookworm: https://blog.calenhad.com/posts/2022/01/lxd-packaging-report...


Just curious--how do you use LXD in production? It always struck me as something very neat/useful for dev machines, but I had trouble imagining how it would improve production workloads.


Maybe two years ago I wanted to use LXD on a fresh Ubuntu server (after testing it locally).

First they had just moved it to Snap which was not a great install experience compared to good old apt-get, and then all my containers had no IPv4 because of systemd for a reason I can't remember.

After two or three tries I just gave up, installed CapRover (still in use today) and have not tried again since.


The install experience should be the same. Using apt install lxd should give you snap. I've installed LXD quite a few times with different storage and network setups without any issues, so I do find it strange that it didn't work for you.


I was bitten by LXD auto-updating as well. Server was down and I couldn't understand why since I hadn't changed anything.


> The snap is probably fine for single dev machines

It is not good even on single dev machines.


I normally use podman for my dev machine. Unlike LXD, it can run rootless.


I am a simple man with simple needs; I am perfectly happy with a distro as long as I have my editor, my terminal and my browser.

I could not bear the snaps on Ubuntu always coming back and being hard to disable on every update, so I gave up and just switched to Arch, and I'm happy to have control of my system again.

I had a lot of crashes on Ubuntu when running a huge Rust-based test suite doing a lot of IO (on btrfs); never had that issue on Arch. Not sure why, and not sure how I can even debug it (full freeze, nothing in the systemd logs), so I guess I just gave up.


Canonical always backs the wrong horse. Unity, Snap, Mir, Upstart, etc. etc.


Yeah, it's truly terrible. I've had downtime from this as well.


My home server runs NixOS, which is an amazing server operating system: every service is configured in code and fully versioned. I also use this server for development (via SSH), but while NixOS can be used for development, its relationship with VS Code, its plugins, and many native build tools (Golang, Rust) is very complicated, and I prefer not to do everything the Nix way, which is usually convoluted and poorly documented.

LXD is my perfect fit in this scenario: trivial to install on top of Nixos, and once running, allows for launching some minimal development instances of whatever distro flavor of the day in a few seconds. Persistent like a small VM, but booting up within seconds, much more efficient on resources (memory in particular), and - unlike docker - with the full power of systemd and all. Add tailscale and sshd to the mix, for easy, secure and direct remote access to the virtualized system.


I like the docker way of one thing, one process, per container. LXC seems a bit different.

However, an exciting thing to me is the Cambrian explosion of alternatives to docker: podman, nerdctl, even lima for creating a linux vm and using containerd on macos looks interesting.


Docker can have N processes per container though, just depends how you set up your image


Yes, and it makes sense in some cases. Supervisord is awesome for this.
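
A minimal sketch of that pattern (the conf file and program names are hypothetical):

    FROM python:3.10-slim
    RUN pip install supervisor
    # supervisord.conf defines e.g. [program:web] and [program:worker]
    COPY supervisord.conf /etc/supervisord.conf
    CMD ["supervisord", "-n", "-c", "/etc/supervisord.conf"]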


I recently started using containerd inside Nomad, a breath of fresh and simple air after failed k8s setups!


Oh, Nomad looks interesting. Why should someone reach for it vs K8S?


I have used nomad at my work for a few years now. I'd say where it shines is running stateless containers simply and easily. If you're trying to run redis, postgres, etc...do it somewhere else. If you're trying to spin up and down massive amounts of queue workers hour by hour, use it as a distributed cron, or hell just run containers for your public api/frontend and keep them running, nomad is great.

That said, you're going to be doing some plumbing for things like wiring your services together (Fabio/Consul Connect are good choices), detecting when to add more host machines, etc.

As far as how it compares to k8s, I don't know, I haven't used it materially yet.


Nomad can run a lot more things but it’s not so batteries included.

Nomad is trying to be an orchestrator.

Kubernetes is trying to be an operating system for cloud environments.

since they aim for being different things they make different trade offs.


Cloudflare recently posted a great blog article on how/why they use nomad: https://blog.cloudflare.com/how-we-use-hashicorp-nomad/


That seems weird for some stacks though, like nginx, php-fpm, php. At least I still haven't wrapped my head around what's the right answer for the number of containers involved there.


It is workload dependent. My general way of thinking about it is containing a unit of work. Sometimes that is a single process, others it is a bunch of things. Sometimes I may want to run a database separate from my application code, sometimes not.

How do you scale it, how do you manage it, how will it get deployed, all questions that go into the answer of what should go into it.


The perfect pair

Containerfile vs Dockerfile - Infra as code

podman vs docker - https://podman.io

podman desktop companion (author here) vs docker desktop ui - https://iongion.github.io/podman-desktop-companion

podman-compose vs docker-compose = there should be no "vs" here; docker-compose itself can use the Podman socket for its connection out of the box since the APIs are compatible, but it's an alternative worth exploring nevertheless.

Things are improving at a very fast pace, the aim is to go way beyond parity, give it a chance, you might enjoy it. There is continuous active work that is enabling real choice and choice is always good, pushing everyone up.


Is networking any better with Podman on Docker Compose files? Last time I tried, most docker-compose files didn't actually work because they created networks that Podman doesn't have privileges to setup unless run as root

Afaik, the kernel network APIs are pretty complicated so it's fairly difficult to expose to unprivileged users safely


I enjoy podman. It supported cgroups v2 before Docker and is daemonless.


I use LXC containers as my development environments.

When I changed my setup from expensive MacBooks to an expensive workstation with a cheap laptop as a front end to work remotely, this was the best configuration I found.

It took me a few hours to have everything running, but I love it now. A new project means creating a new container and adding a rule to iptables, and I have it ready in a few seconds.


FWIW I do the same thing but with docker.

By exposing the Docker daemon on the network and setting DOCKER_HOST, I'm able to use the remote machine as if it were local.

It’s hugely beneficial, I’ve considered making mini buildfarms that load balance this connection in a deterministic way.


Do you have any more information about how you're doing this? Whenever I've tried to use Docker as a remote development environment the process felt very foreign and convoluted.


I know what you mean. It depends a little bit on your topology.

If you have a secure network then it’s perfectly fine to expose the docker port on the network in plaintext without authentication.

Otherwise you can use Port forwarding over SSH.

To set up networked docker you can follow this: https://docs.docker.com/engine/security/protect-access/

I’m on the phone so can’t give a detailed guide.


Or you can point Docker at an SSH host using a context, or just DOCKER_HOST=ssh://myremote... See https://stackoverflow.com/questions/44056501/how-can-i-remot...
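
E.g. (host and user are placeholders):

    docker context create mybox --docker "host=ssh://me@mybox.example"
    docker context use mybox
    docker ps   # now talks to the remote daemon over SSH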


Hopefully you are doing this securely :)

BTW, no need to expose DOCKER_HOST, you can connect to docker over ssh, e.g. `DOCKER_HOST=ssh://1.2.3.4`.


How do you share code files (set volume mounts) with remote Docker in this setup?


Is there a benefit to this over SSH or VSCode remote?


Neither SSH nor VSCode offers any kind of isolation out of the box.


I mean running Docker on the remote machine and just sshing into it. I assume changing the Docker host on OSX just means a command is being sent over the network. Just wondering why prioritize "local" development if it's all remote anyway.


Setting up SSH inside a container and being able to SSH into the container wasn't so trivial to do, last time I read about it. If I recall correctly, there were also some host system security implications. What do you mean by 'prioritize "local" development'?


"Machine A" SSHes into "Machine B". "Machine B" is running Docker. You run docker commands on "Machine B". The output of the command is returned to "Machine A". I.e. a normal ssh session. At no point do you ssh into a container.


One major limitation of LXC is that there is no way to easily self-host images. Often the official images for many distributions are buggy. For example, the official Ubuntu images seem to come with a raft of known issues.

Based on my limited interactions with it, I'd recommend staying away from LXC unless absolutely necessary.


> there is no way to easily self host images

When you run lxd init there's an option to make the server available over the network (default: No), if enabled you can host images from there.

    lxc remote add myimageserver images.bamboozled.com
    lxc publish myimage --alias myimage
    lxc image copy myimage myimageserver:
    
    lxc launch myimageserver:myimage


That's half the story, I don't want to have to build my own infra to host something simple like an image in a redundant highly available way.

In the end I just settled for hosting on S3, downloading the images with curl (or similar) and running `lxc import`.

The support in configuration management tooling is also pretty limited and the stuff I've used has been fairly buggy, in my opinion, because it's not very popular.


In Proxmox "self hosting" meaning having a folder with LXC images is part of the distribution. You can download templates from online sources and use as images or create your own templates from already running LXCs. Or maybe you mean self hosting in another way?


I've only played around with LXC/LXD a little bit, what are some of the Ubuntu image issues? I did a quick google, but the first results seemed to be questions about hosting on Ubuntu rather than with the images themselves.


In my experience, most issues are related to kernel interfaces which LXC disables inside unprivileged containers, paired with software that does not check if those interfaces are there/work before attempting to use them.

These issues can be observed in the official Ubuntu image and seem to get worse over time. I would recommend to just use VMs instead.


If you feel that the existing images (of lxc-download?) have too many bugs for your liking, you could also try the classic templates, which use debootstrap and the like to create the rootfs.


I’ve been using LXC as a lightweight “virtualization” platform for over 5 years now, with great success. It allows me to take existing installations of entire operating systems and put them in containers. Awesome stuff. On my home server, I have a VNC terminal server LXC container that is separate from the host system.

Combined with ipvlan I can flexibly assign my dedicated server’s IP addresses to containers as required (MAC addresses were locked for a long time). Like, the real IP addresses. No 1:1 NAT. Super useful also for deploying Jitsi and the like.

I still use Docker for things that come packaged as Docker images.


> It allows me to take existing installations of entire operating systems and put them in containers

Friend, do you have documentation for this process? Please share your knowledge. ^_^


Nothing too spectacular, I’m afraid. I had to consolidate some physical machines, all running Gentoo Linux. For each, I simply created a Gentoo LXC container and then replaced the rootfs (in /var/lib/lxc/NAME/rootfs) with the one from the physical server.

The significant changes from the physical systems were:

* rc_provide="net" in rc.conf because base networking is controlled externally

* rc_sys="lxc" may or may not be necessary

* Disable various net setup services

On the host OS (Debian) I have interfaces like this:

    auto ipvl-main
    iface ipvl-main inet manual
       pre-up ip link add link eth0 name ipvl-main type ipvlan mode l2
       post-down ip link delete ipvl-main
In the container config, they are referenced this way:

    lxc.net.2.type = phys
    lxc.net.2.link = ipvl-main
    lxc.net.2.ipv4.address = 1.2.3.4/29
    lxc.net.2.ipv4.gateway = 1.2.3.1
    lxc.net.2.ipv6.address = abcd::2/128
    lxc.net.2.ipv6.gateway = fe80::1
    lxc.net.2.flags = up
Later on, I removed the dedicated IP address and set up a reverse proxy instead.

Oh yeah, all containers are of course privileged containers. With unprivileged containers, various things may not work as expected.


I never hear systemd-nspawn mentioned in these discussions. It ships and integrates with systemd and has a decent interface with machinectl. Does anyone use it?
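
For reference, the basic workflow is roughly (paths and names are just examples):

    sudo debootstrap stable /var/lib/machines/deb1     # create a rootfs
    sudo systemd-nspawn -D /var/lib/machines/deb1 -b   # boot it interactively
    sudo machinectl start deb1                         # or run it as a unit
    machinectl list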


> I never hear systemd-nspawn mentioned in these discussions. It ships and integrates with systemd and has a decent interface with machinectl.

I couldn't have said it better. And yes, I use it. Also in production systems.


The big missing feature is being able to pull Docker images and run them without resorting to hacks.



That’s what I use whenever I need a container. So simple and flexible.


Is it accurate to say LXC is to Docker as git is to GitHub, or vim/emacs vs. Visual Studio Code?

I haven't seen many examples demonstrating the tooling used to manage LXC containers, but I haven't looked for it either. Docker is everywhere.


I recently wrote something to clarify my mind around all this [1]. If we assume that by Docker we mean the Docker engine, then I think you might compare it as you said (maybe more in terms of vim/emacs vs. Visual Studio Code, as Git is a technology while GitHub is a platform).

But Docker is many things: a company, a command line tool, a container runtime, a container engine, an image format, a registry...

[1] https://sarusso.github.io/blog_container_engines_runtimes_or...


In the first months of Docker, yes. Nowadays, they are different beasts.


lxc launch, lxc list, lxc start, lxc stop, etc....

That's all I've ever needed. Docker is overkill if you just need to run a few containers. There is a point where it makes sense but running a few containers for a small/personal project is not it.


Funnily enough I view it the other way around. LXC is a bit overkill, I just wanted to run a container, not an entire VM.

In my mind you need to treat LXC containers as VMs in terms of management. They need to be patched, monitored and maintained the same way as a VM. Docker containers still need to be patched of course, and many seem to forget that bit, but generally they seem easier to deal with. Of course that depends on what has been stuffed into the container image.

LXC is underrated though, for small projects and business it can be a great alternative to VM platforms.


LXC and Docker comparisons vastly differ depending on the use case and problem segment. I use LXC as a tiny, C-only library to abstract namespaces and cgroups for embedded usage [1]

LXC is a fantastic userland library to easily consume kernel features for containerization without all the noise around it… but the push for the LXD scaffolding around it missed the mark. It should’ve just been a great library and that’s how we use it when running containers on embedded Linux equipment

[1] https://pantacor.com/blog/lxc-vs-docker-what-do-you-need-for...


A while ago, I spent some time making LXC run in a Docker container. The idea is to have a stateful system managed by LXC run in a Docker environment so that management (e.g. Volumes, Ingress and Load Balancers) from K8s can be used for the LXC containers. I still run a few desktops, accessible via x2go, with it on my Kubernetes instances.

https://github.com/micw/docker-lxc


I know very little about both, but I'm at its mercy every day with LXC on my Chromebook when running Crostini (it's like a VM in a VM in a VM in a...) :) - it works great though, at some perf cost and with less GPU support.

And I still have trouble running most of the Docker images out there (either this or that won't be supported). I guess it makes sense; after all, there is always the choice of going with a full real Linux reinstall, or some other hacky ways.

But one thing I was not aware was this: "Docker containers are made to run a single process per container."


Interesting read, not sure why you compared only these two though.

There are plenty of other solutions, and Docker is actually many things. You can use Docker to run containers using Kata for example, which is a runtime providing full HW virtualisation.

I wrote something similar, yet much less in detail on Docker and LXC and more as a bird-eye overview to clarify terminology, here: https://sarusso.github.io/blog_container_engines_runtimes_or...


In the end the two are different... why compare them in the first place?

“ LXC, is a serious contender to virtual machines. So, if you are developing a Linux application or working with servers, and need a real Linux environment, LXC should be your go-to.

Docker is a complete solution to distribute applications and is particularly loved by developers. Docker solved the local developer configuration tantrum and became a key component in the CI/CD pipeline because it provides isolation between the workload and reproducible environment.”


What are the fundamental differences?


Linux control groups vs a runtime


LXC is quite different from Docker. Docker is used most of the time as a containerized package format for servers and as such is comparable to snap or flatpak on the desktop. You don't have to know Linux administration to use Docker; that is why it is so successful.

LXC on the other hand is lightweight virtualization and one would have a hard time to use it without basic knowledge of administering Linux.


> Saying that LXC shares the kernel of its host does not convey the whole picture. In fact, LXC containers are using Linux kernel features to create isolated processes and file systems.

So what is Docker doing then??


I've been running my SaaS on LXC for years. I love that the container is a folder that can be copied. Combined with git to push changes to my app, all is golden.

I tried docker but stuck with lxc.


I had to switch to Docker after LXC was snapped.


Love to hear I am not the only one enjoying LXC rather than Docker


LXC/LXD being the clear winner.


This would have been an ok article in 2013-2015. Nothing really has changed wrt. these two technologies since.


I think Docker grew out of LXC initially (to make LXC easier to use). For now, LXC is lightweight but it is not portable; Docker can run on all OSes. I think that's the key difference: cross-platform apps. LXC remains a Linux-only thing.


Just as long as you ignore the linux VM all those docker containers are running in.



