Docker originally used Linux Containers (LXC), but later switched to runC (formerly known as libcontainer), which runs containers in the same operating system as the host. This allows containers to share a lot of the host operating system's resources. Docker also uses a layered filesystem (AuFS) and manages networking.
AuFS is a layered file system, so you can have a read-only part and a write part which are merged together. One could have the common parts of the operating system as read-only (and shared amongst all of your containers) and then give each container its own mount for writing.
So, let's say you have a 1 GB container image; if you wanted to use a full VM, you would need 1 GB × the number of VMs you want. With Docker and AuFS you can share the bulk of that 1 GB between all the containers, so if you have 1000 containers you still might use only a little over 1 GB of space for the containers' OS (assuming they are all running the same OS image).
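You can see this layer sharing on your own machine with standard Docker CLI commands (the ubuntu:22.04 tag below is just an example image):

docker history ubuntu:22.04   # lists the read-only layers that make up the image
docker system df              # shows how much disk space images and containers actually use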
A full virtualized system gets its own set of resources allocated to it, and does minimal sharing. You get more isolation, but it is much heavier (requires more resources). With Docker you get less isolation, but the containers are lightweight (require fewer resources). So you could easily run thousands of containers on a host, and it won't even blink. Try doing that with Xen, and unless you have a really big host, I don't think it is possible.
A full virtualized system usually takes minutes to start, whereas Docker/LXC/runC containers take seconds, and often even less than a second.
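You can measure this yourself (a rough sketch; the alpine image is just a small example, and the image is pulled first so the timing measures only startup):

docker pull alpine            # pull once so the run below doesn't include download time
time docker run --rm alpine true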
There are pros and cons for each type of virtualized system. If you want full isolation with guaranteed resources, a full VM is the way to go. If you just want to isolate processes from each other and want to run a ton of them on a reasonably sized host, then Docker/LXC/runC seems to be the way to go.
For more information, check out this set of blog posts which do a good job of explaining how LXC works.
Why is deploying software to a docker image (if that's the right term) easier than simply deploying to a consistent production environment?
Deploying a consistent production environment is easier said than done. Even if you use tools like Chef and Puppet, there are always OS updates and other things that change between hosts and environments.
Docker gives you the ability to snapshot the OS into a shared image and makes it easy to deploy on other Docker hosts. Locally, dev, qa, prod, etc.: all the same image. Sure, you can do this with other tools, but not nearly as easily or as fast.
This is great for testing; let's say you have thousands of tests that need to connect to a database, and each test needs a pristine copy of the database and will make changes to the data. The classic approach is to reset the database after every test, either with custom code or with tools like Flyway; this can be very time-consuming and means that tests must be run serially. However, with Docker you could create an image of your database and spin up one instance per test, and then run all the tests in parallel since you know they will all be running against the same snapshot of the database. Since the tests run in parallel and in Docker containers, they can all run on the same box at the same time and should finish much faster. Try doing that with a full VM.
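As a sketch of the idea (the image name, password, and ports here are placeholders, not anything from the original setup):

# start three throwaway MySQL instances from the same image, one per test worker
for i in 1 2 3; do
  docker run -d --rm --name testdb_$i -e MYSQL_ROOT_PASSWORD=secret -p $((3306 + i)):3306 mysql:8
done
# all three containers share the same read-only image layers on disk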
From comments...
Interesting! I suppose I'm still confused by the notion of "snapshot[ting] the OS". How does one do that without, well, making an image of the OS?
Well, let's see if I can explain. You start with a base image, then make your changes and commit those changes using Docker, and it creates an image. This image contains only the differences from the base. When you want to run your image, you also need the base, and it layers your image on top of the base using a layered file system: as mentioned above, Docker uses AuFS. AuFS merges the different layers together and you get what you want; you just need to run it. You can keep adding more and more images (layers) and it will continue to only save the diffs. Since Docker typically builds on top of ready-made images from a registry, you rarely have to "snapshot" the whole OS yourself.
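A minimal sketch of that commit flow (the container and image names here are made up for illustration):

docker run -it --name mybox ubuntu bash   # start a container from a base image
# ...make some changes inside the container, then exit...
docker commit mybox myimage:v1            # saves only the diff as a new layer on top of the base
docker history myimage:v1                 # shows the new layer sitting on top of the base layers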
The --format option of inspect comes to the rescue.
Modern Docker client syntax is:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Old Docker client syntax is:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' container_name_or_id
These commands will return the Docker container's IP address.
As mentioned in the comments: if you are on Windows, use double quotes " instead of single quotes ' around the curly braces.
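For example, to capture the address in a shell variable (the container name is a placeholder):

IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id)
echo "$IP"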
Best Solution
Edit:

If you are using Docker-for-mac or Docker-for-Windows 18.03+, just connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).

If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option, as in the sketch below.

Otherwise, read below.
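A quick way to check that flag works (a sketch; the alpine image is just a convenient test image):

docker run --rm --add-host host.docker.internal:host-gateway alpine \
    ping -c 1 host.docker.internal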
TLDR
Use --network="host" in your docker run command; then 127.0.0.1 in your docker container will point to your docker host.

Note: This mode only works on Docker for Linux, per the documentation.
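For a quick check (a sketch; the alpine image is just a convenient test image):

docker run --rm -it --network="host" alpine sh
# inside this shell, 127.0.0.1 refers to the docker host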
Note on docker container networking modes
Docker offers different networking modes when running containers. Depending on the mode you choose, the way you connect to your MySQL database running on the docker host differs.
docker run --network="bridge" (default)
Docker creates a bridge named docker0 by default. Both the docker host and the docker containers have an IP address on that bridge.

On the Docker host, type

sudo ip addr show docker0

and you will have an output looking like:
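(illustrative output; your interface numbers and addresses will differ)

3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0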
So here my docker host has the IP address 172.17.42.1 on the docker0 network interface.

Now start a new container and get a shell on it:
docker run --rm -it ubuntu:trusty bash
and within the container type ip addr show eth0 to discover how its main network interface is set up:
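(again illustrative; the address depends on your setup)

4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:01:c0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.1.192/16 scope global eth0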
Here my container has the IP address 172.17.1.192. Now look at the routing table (for example with ip route):
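(illustrative output, consistent with the addresses above)

default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.1.192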
So the IP address of the docker host, 172.17.42.1, is set as the default route and is accessible from your container.

docker run --network="host"
Alternatively you can run a docker container with network settings set to host. Such a container will share the network stack with the docker host, and from the container's point of view, localhost (or 127.0.0.1) will refer to the docker host.

Be aware that any port opened in your docker container would be opened on the docker host, and this without requiring the -p or -P docker run options.

IP config on my docker host:
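(illustrative; the interface name and address depend on the host)

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
    link/ether 08:00:27:8e:5f:43 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0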
and from a docker container in host mode:
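(for example, run docker run --rm --network="host" ubuntu:trusty ip addr show eth0; illustratively, you would see exactly the same interface and address as on the host)

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
    link/ether 08:00:27:8e:5f:43 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0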
As you can see both the docker host and docker container share the exact same network interface and as such have the same IP address.
Connecting to MySQL from containers
bridge mode
To access MySQL running on the docker host from containers in bridge mode, you need to make sure the MySQL service is listening for connections on the 172.17.42.1 IP address.

To do so, make sure you have either bind-address = 172.17.42.1 or bind-address = 0.0.0.0 in your MySQL config file (my.cnf).

If you need to set an environment variable with the IP address of the gateway, you can run the following code in a container:
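(a sketch; it assumes the ip command is available in the container, and DOCKER_HOST_IP is just an arbitrary variable name)

# the default route of a bridge-mode container is the docker host's address on docker0
export DOCKER_HOST_IP=$(ip route | awk '/default/ { print $3 }')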
Then in your application, use the DOCKER_HOST_IP environment variable to open the connection to MySQL.

Note: if you use bind-address = 0.0.0.0, your MySQL server will listen for connections on all network interfaces. That means your MySQL server could be reached from the Internet; make sure to set up firewall rules accordingly.

Note 2: if you use bind-address = 172.17.42.1, your MySQL server won't listen for connections made to 127.0.0.1. Processes running on the docker host that want to connect to MySQL would have to use the 172.17.42.1 IP address.

host mode
To access MySQL running on the docker host from containers in host mode, you can keep bind-address = 127.0.0.1 in your MySQL configuration, and all you need to do is connect to 127.0.0.1 from your containers:
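(a sketch; the mysql:8 image is used here only because it ships a MySQL client, and the credentials are placeholders)

docker run --rm -it --network="host" mysql:8 mysql -h 127.0.0.1 -u root -p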
Note: do use mysql -h 127.0.0.1 and not mysql -h localhost; otherwise the MySQL client would try to connect using a unix socket.