TL;DR
Docker and Linux containers pack more VMs onto each physical server, which increases the network load per physical server, and developers use them to run more VMs than ever before. Also, there is no vSwitch (that is the most important piece of information).
What is Docker?
Docker is an ecosystem built on top of Linux containers. To tell the tale, we need to start with hypervisors.
Hypervisors
The "regular" virtualization is a hardware virtualization. That means that a hypervisor such as ESX, or even your laptop running vmware/vbox, emulates several virtualized physical servers running side by side on a single physical machine.
Notice that each virtual machine is running its own OS. That is wasteful, especially because it is very rare to find more than one application running inside a single server, so for each application we run a full OS as well.
The plus side is that you can run any mix of OSes side by side on the same physical server. You can run Windows, Linux, Solaris, IOSv, ASAv, CSR1000v, vMX, Alteon VA, F5, Vyatta, etc. concurrently on one physical server.
Linux Containers
It looks very similar to the previous diagram, doesn't it? I just changed the text inside the blocks :)
Now everything is running on a single Linux kernel. The applications run on top of LXCs. And here comes the big difference: LXCs are part of the Linux kernel. LXCs provide lightweight VMs that all share the same OS. Only one OS is running for all the containers.
LXC is to Linux containers/namespaces/layered filesystems as VMware is to ESXi, vmtools, etc. LXC is an umbrella term for everything it takes to run Linux containers.
Notice the "all sharing the same OS". There is only one LXC per kernel. Each "VM" is called a container. Each container has its files, its users, and its networking.
It should ring a bell for us network engineers: it is just like VRFs. We do not need to run a full-blown IOS for each VRF. The same goes for LXC: we do not run a full-blown OS (Linux) for each VM; LXC just creates isolation, the same way a VRF does.
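To make the VRF analogy concrete, here is a minimal sketch. It uses the Docker Python SDK (docker-py), which is my own choice and not something the article relies on, to show a container reporting the very same kernel as its host: one OS, many isolated views of it.

```python
import platform

import docker  # docker-py, the Docker SDK for Python (an assumption, not part of the article)

client = docker.from_env()

host_kernel = platform.release()  # kernel release of the machine running this script

# Run `uname -r` inside a throw-away Alpine container
container_kernel = client.containers.run(
    "alpine:latest", "uname -r", remove=True
).decode().strip()

print(f"host kernel:      {host_kernel}")
print(f"container kernel: {container_kernel}")
# On a Linux host the two values match: the container is not a separate OS,
# only an isolated view (namespaces) of the one shared kernel.
# (On Docker Desktop for Mac/Windows the daemon runs inside a Linux VM,
# so compare against that VM's kernel instead.)
```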
Compared to hypervisors, LXC means not only that we can cram ten times more "VMs" onto the same hardware, not only that networking is much faster per CPU cycle, not only that containers save a lot of disk space, not only that containers save memory (one OS), but also that it takes less than 200 ms to create and start a new container.
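If you want to see that startup speed for yourself, here is a rough sketch, again assuming docker-py and a locally available alpine image; the exact number will vary with your hardware and daemon settings.

```python
import time

import docker  # docker-py (assumed)

client = docker.from_env()
client.images.pull("alpine", tag="latest")  # pre-pull so we time only container startup

start = time.perf_counter()
container = client.containers.run("alpine:latest", "true", detach=True)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"container created and started in {elapsed_ms:.0f} ms")

container.remove(force=True)  # clean up the throw-away container
```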
Docker
Docker is a set of tools, utilities and repositories for:
- Deploying and running Linux containers in a very easy way.
- Easing the life of developers, QA, and Ops teams, by allowing all of them to use the same execution environment (see the sketch after this list).
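As a quick sketch of the "same execution environment" idea (again assuming docker-py and access to Docker Hub; the python:3.12-alpine image is just an arbitrary example), a developer, a QA engineer, and an operator can each pull the same versioned image from the registry and get an identical runtime:

```python
import docker  # docker-py (assumed)

client = docker.from_env()

# Pull a specific, versioned image from the public registry (the "repositories" part)
image = client.images.pull("python", tag="3.12-alpine")
print("pulled:", image.tags)

# Anyone who runs this image gets exactly the same filesystem and runtime
output = client.containers.run("python:3.12-alpine", ["python", "--version"], remove=True)
print(output.decode().strip())
```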
What does Docker/LXC mean for networking engineers?
A lot more VMs
If hypervisors brought us virtualization sprawl, imagine what LXC/Docker will do! VMware made it much cheaper and easier to create new servers compared to physical servers. Docker/LXC make it even cheaper and easier.
That means more endpoints in the data centre.
More VMs per physical server
Being able to run more VMs per server means that we will see more bandwidth consumed per physical server.
Dynamic DC
If it is so easy and fast to spawn a new VM/container, we might start seeing more VMs created and destroyed on the fly.
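Here is a small sketch of that churn from the network's point of view (assuming docker-py and the default bridge network): each short-lived container appears as a new endpoint with its own IP and MAC, then disappears again.

```python
import docker  # docker-py (assumed)

client = docker.from_env()

for i in range(3):
    c = client.containers.run("alpine:latest", "sleep 5", detach=True)
    c.reload()  # refresh attrs so the assigned addresses show up
    settings = c.attrs["NetworkSettings"]
    print(f"new endpoint {c.short_id}: ip={settings['IPAddress']} mac={settings['MacAddress']}")
    c.remove(force=True)  # and a moment later the endpoint is gone again
    print(f"endpoint {c.short_id} removed")
```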
No vSwitch
The default networking model for Docker is anything but standard. For network engineers, nothing changed with VMware compared to the physical world: servers (VMs) are connected to switches (vSwitches), and the servers' switches (vSwitches) are connected to other switches (real switches) using dot1q uplinks.
With Docker there is no such concept as a vSwitch (at least not by default, and not even as a built-in, integrated option).
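You can see this for yourself with a short sketch (again assuming docker-py): listing the networks on a default install shows a handful of built-in drivers, and the default "bridge" network is backed by a plain Linux bridge (usually docker0), nothing resembling a vSwitch.

```python
import docker  # docker-py (assumed)

client = docker.from_env()

# A default install ships three built-in networks: bridge, host and none
for net in client.networks.list():
    print(f"{net.name:8s} driver={net.attrs['Driver']}")

bridge = client.networks.get("bridge")
print("bridge options:", bridge.attrs.get("Options", {}))
# The options of the default "bridge" network point at a plain Linux bridge
# (com.docker.network.bridge.name = docker0), not a distributed virtual switch.
```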
In part II of Docker networking, I'll explain the default Docker networking model.