This is a rundown of why I prefer LXC over Docker, what the differences between the two technologies are, and why I think LXC should be the one ruling the container world.
To start with, one of the killer features of LXC over Docker, for me, is that LXC does not aim to be a ‘single process container’. This means you can run OpenSSH, syslog, a Puppet agent, monitoring, and whatever other processes you need inside your container to keep it manageable and updated. You can configure the networking however you like, be it DHCP or static IP addressing; you can even run an OpenVPN client on it and have it connect to a central OpenVPN server if you wish, something you will struggle to do with a regular Docker container, since you are not supposed to run more than one process per container.
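As a sketch of what that looks like in practice (the container name, image, and package list here are just examples), you launch a stock container with the LXD command-line tools and then treat it like a small server:

```shell
# Launch a plain Ubuntu container (name "web1" is hypothetical)
lxc launch ubuntu:16.04 web1

# Install whatever daemons you need, exactly as you would
# on a VM or on bare metal
lxc exec web1 -- apt-get update
lxc exec web1 -- apt-get install -y openssh-server rsyslog puppet

# The services are managed by the container's own init system
lxc exec web1 -- systemctl status ssh
```

Nothing in the container knows or cares that it is “supposed” to run a single process.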
LXC HAS storage persistence. You don’t need to rebuild a whole image if you want to change a setting in your application, and you don’t need to create a separate persistent storage container to keep your data. This opens up many more use cases for containerizing your applications and servers, making them more flexible and, in the end, more useful. I still have to wrap my head around this with Docker; I don’t understand how one can live without storage persistence without losing hours shutting the container down, rebuilding it with the new options/files, launching it again, and testing.
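To make the persistence point concrete (the container name, file path, and port are illustrative): changing a setting in an LXC container is just editing a file in place, with no image rebuild involved:

```shell
# Change a setting in place; the edit lands on the container's
# root filesystem, which is persistent
lxc exec web1 -- sed -i 's/^Port 22/Port 2222/' /etc/ssh/sshd_config
lxc exec web1 -- systemctl restart ssh

# Restarting the container does not throw the change away
lxc restart web1
```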
Dockerfile sucks. Puppet rocks. You want to build your containers and infrastructure with Puppet, Ansible, or Salt, not with a Dockerfile. Heck, you can even build your container with a bash script, and it will be more flexible and useful than a Dockerfile.
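A minimal sketch of what a bash-script build looks like (script name, container name, and packages are all made up); the very same commands could just as well be driven by Puppet, Ansible, or Salt:

```shell
#!/bin/bash
# build-web.sh: provision an LXC container without a Dockerfile
set -e

NAME=web1                          # hypothetical container name
lxc launch ubuntu:16.04 "$NAME"
lxc exec "$NAME" -- apt-get update
lxc exec "$NAME" -- apt-get install -y nginx puppet

# Push a config file from the host into the container
lxc file push nginx.conf "$NAME/etc/nginx/nginx.conf"
lxc exec "$NAME" -- systemctl restart nginx
```

It is just shell: you can branch, loop, reuse functions, and wire it into whatever tooling you already have.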
You don’t need to re-engineer your whole deployment and infrastructure if you want to migrate from old-style virtualization to containers. You can see LXC as a lightweight VM, without the hypervisor layer in the middle.
LXC can use ZFS and LVM natively, which makes it easy to manage storage with a real filesystem :-)
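For instance, with the LXD tooling a ZFS-backed storage pool is a couple of commands (the pool and container names are examples):

```shell
# Create a ZFS-backed storage pool and launch a container on it
lxc storage create tank zfs
lxc launch ubuntu:16.04 web1 --storage tank

# Snapshots come almost for free thanks to ZFS
lxc snapshot web1 before-upgrade
```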
NETWORKING! LXC uses a bridged interface, native in Linux. You can expose that bridge to the network or keep it behind NAT; in the latter case you forward ports with iptables. You configure the network side of a container using the native OS tools inside that container, and two containers can talk to each other using regular network protocols. In Docker you need to fuss around with options passed to the containers: you need to do ‘container linking’, passing names to them, for two containers to be able to communicate, which means two linked containers can only run on the same Docker host and not across your infrastructure. If you want to do that, you need some magic with key-value store databases and whatnot: confusing, unnecessary, and far from being as flexible or as performant as the Linux networking stack.
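As an illustration of the NAT case (the host interface, container address, and ports are assumptions you would adjust to your own setup): forwarding a host port to a container behind the bridge is a single iptables rule:

```shell
# Forward host port 8080 to port 80 of a container at 10.0.3.10
# sitting behind the LXC bridge (addresses are examples)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 \
  -j DNAT --to-destination 10.0.3.10:80
```

That is the whole trick: plain Linux networking, no container-specific linking layer in between.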
Container images. With LXC you can use any LXD server to store and publish images, among several other techniques. The point is that with Docker, if you want to keep your set of images private, you either pay for private repositories or have to run and maintain your own registry on premises.
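A rough sketch of keeping images on your own LXD server (the remote name, hostname, and aliases are hypothetical):

```shell
# On the image server: allow remote HTTPS access to the LXD API
lxc config set core.https_address "[::]:8443"

# On a client: register the remote, then publish and reuse images
lxc remote add images-internal my-lxd-server.example.com
lxc publish web1 images-internal: --alias web-base   # container must be stopped
lxc launch images-internal:web-base web2
```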
The feeling that I have is that Docker is trying to re-invent the wheel with their own filesystem to manage the images, the whole networking part and paradigms that do not need to exist or bring anything good to the Open Source ecosystem.
Sure, I do believe containers bring a lot of flexibility and ease of deployment, and Docker has been paving the way in making that world known to everyone. Still, I hear more horror stories about people trying to run Docker in production than I hear success stories. Maybe because they are changing the paradigm? Maybe because they are trying to reinvent the wheel?
Docker has a pretty good ecosystem around it, with projects like Kubernetes and Mesos toning down some of its flaws, but I don’t feel that it is enough.
To sum up, I see LXC more like a very tiny and lightweight virtual machine, without the hypervisor layer.
It can be used in a more flexible way than Docker images, using native tools and technologies Linux already relies on. It can run Docker :-) or any other application inside a container, for that matter. Networking is all native Linux: no fuss, no mess. I see LXC as being more hands-on with the Linux kernel than Docker is.
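To back up the “it can run Docker” claim, LXD exposes a nesting switch for exactly this (the container name is an example):

```shell
# Allow nested containers, then install and use Docker inside LXC
lxc launch ubuntu:16.04 docker-host -c security.nesting=true
lxc exec docker-host -- apt-get update
lxc exec docker-host -- apt-get install -y docker.io
lxc exec docker-host -- docker run hello-world
```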
Now, the million-dollar question, which I will be trying to answer over the next months, is: will we be able to run LXC in production?
In the near future I will try to explain where we are and where we are trying to go, along with the choices made along the way, containers being one of the big ones, plus some other nifty details.