I didn’t realise how hard this topic is to Google for until I had to do it! Almost all the articles I landed on used docker-compose to set up two different containers talking to each other on a single node. At a very basic level, this is what I want to do:
- Run a Sinatra app on port `9000`
- Use nginx as a reverse proxy to sinatra
- Don’t use docker-compose
All the containers will run on a single node. There’s no requirement for them to communicate with containers running on other machines. Optionally, I’d also want to run multiple such containers; say, one nginx container in front of multiple Sinatra application containers. For what it’s worth, docker-compose provides a very easy way to do these kinds of things using a declarative syntax, and in the background it configures a lot of things automatically to make that work.
I recently tried to do this without docker-compose, and in this post I’m gonna share what I learnt.
- Preliminaries
- Container networking
- Reusing host’s network (easy, not recommended)
- Without reusing host’s network
Preliminaries
Directory structure:
```
.
|- app.rb
|- nginx.conf
|- Dockerfile.sinatra
|- Dockerfile.nginx
```
Sinatra application
For the example I’ll be using a highly sophisticated hello world application like so:
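Something along these lines (a minimal sketch of `app.rb`):

```ruby
# app.rb -- a hello-world Sinatra app that listens on port 9000
require 'sinatra'

set :port, 9000

get '/' do
  'hello world'
end
```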
Note the `:port` setting. This will be important in just a while. We’ll run the application using the puma gem, though not by invoking puma directly.
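Assuming the puma gem is installed, Sinatra picks it up automatically as the server when the app is started the usual way (the exact invocation here is my guess):

```
$ ruby app.rb
```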
Nginx reverse proxy configuration
Let’s start with a basic nginx configuration that’s typical of a reverse proxy setting:
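A sketch of `nginx.conf`, written as a server block that can go into nginx’s conf.d directory (the upstream name and the proxy header are illustrative):

```nginx
# nginx.conf -- reverse proxy to the Sinatra app
upstream sinatra {
  server localhost:9000;
}

server {
  listen 80;

  location / {
    proxy_pass http://sinatra;
    proxy_set_header Host $host;
  }
}
```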
If we were to run the applications directly without containers, this configuration would start nginx on port `80` and reverse proxy all incoming requests to our Sinatra application server running on port `9000` just fine. But this won’t work directly in a containerised environment, and we’ll see why soon. Before that though, here are the Dockerfiles I used for both the processes.
Dockerfiles
- One image called `sinatra:app` that pulls in Ruby, installs the Sinatra gem and others, and runs our app code.
- Another image called `sinatra:nginx` that pulls in the nginx image. This will be customised with a specific nginx configuration that has the reverse proxy setup.
Dockerfile.sinatra:
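A sketch of what it might look like (the base image tag and the gem list are assumptions; the boot log below suggests Ruby 2.5 and Puma):

```dockerfile
# Dockerfile.sinatra
FROM ruby:2.5

RUN gem install sinatra puma

WORKDIR /app
COPY app.rb .

EXPOSE 9000
CMD ["ruby", "app.rb"]
```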
The image is built using:
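```
$ docker build --rm -f Dockerfile.sinatra -t sinatra:app .
```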
And it can be run like so:
```
$ docker run --rm sinatra:app
== Sinatra (v2.0.3) has taken the stage on 9000 for development with backup from Puma
Puma starting in single mode...
* Version 3.12.0 (ruby 2.5.1-p57), codename: Llamas in Pajamas
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://localhost:9000
Use Ctrl-C to stop
```
I’ll share the nginx Dockerfile below to preserve the narrative, but it mostly contains one single line: a `COPY` of the config I showed above into the nginx config folder, as documented on the dockerhub page: dockerhub/nginx
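A sketch; since the config above is just a server block, it goes into the image’s conf.d directory (the exact destination path the author used is an assumption on my part):

```dockerfile
# Dockerfile.nginx
FROM nginx
COPY nginx.conf /etc/nginx/conf.d/default.conf
```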
Container networking
The basic premise of containers is to isolate the processes running inside them from the host environment. They shouldn’t be able to view or access resources from the host machine directly. This applies to networking as well. A container A running on port `9000`, bound to its own internal IP (or `0.0.0.0`, which is different), is local to its own view of the environment. A container B won’t be able to directly call `localhost:9000` because the `localhost` inside of B is its own `localhost`!
This isolation poses a challenge for any proxied application, where a container needs to either call a network resource in another container, or make itself available to other containers. But with some slight changes in the configuration and docker’s help, this can be achieved trivially. Docker abstracts the network configuration part into something it calls (drum roll) “network”s, accessible via the CLI flag `--network`. Using the tooling around this, one can:
- Reuse the host (parent) machine’s network interfaces, which is similar to running each individual process uncontainerised. If the nginx container has to call the app container, it can do so by calling `localhost:9000`. The `localhost` here would be the host’s loopback interface, rather than the container’s. The side effect of this is that both ports `80` and `9000` are accessible from the host directly, which from an isolation perspective is…leaky. But this would make our code in the example work without any changes.
- Create a separate layer for networking which is shared among the containers. This creates a new network interface using which the containers can be linked.
Caveats: reusing the host machine’s interfaces is not suggested for production use, and doesn’t work with Docker for Mac, as documented in the tutorial.
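Either way, the networks docker knows about can be listed and inspected from the CLI, which comes in handy when debugging any of the setups below:

```
$ docker network ls               # the built-in bridge, host and none networks show up here
$ docker network inspect bridge   # shows the subnet and the containers attached to a network
```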
Reusing host’s network (easy, not recommended)
The first mode listed above is called “host networking” in docker terminology. Since this reuses the host’s network, neither the application settings nor the Docker settings require any changes. However, if one were to launch multiple Sinatra processes behind a single nginx, each application should run on a different port. To get around hard-coding the port number in the app, it could be picked up from an environment variable (à la Heroku’s Procfile; there’s a small sketch of this after the commands below). In our simpler case though, this is how one would run the containers with host networking:
```
$ docker run -d --network=host --rm sinatra:nginx
$ docker run -d --network=host --rm sinatra:app

$ curl http://localhost
hello world

$ curl http://localhost:9000
hello world
```
Both ports `80` and `9000` are accessible from the host, just like running the applications uncontainerised.
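As an aside, here’s a minimal sketch of picking up the port from an environment variable as mentioned above (the variable name `PORT` is my assumption):

```ruby
require 'sinatra'

# fall back to 9000 when PORT isn't set
set :port, ENV.fetch('PORT', 9000).to_i

get '/' do
  'hello world'
end
```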
Without reusing host’s network
One way to achieve inter-container communication without reusing the host’s network interfaces is (among other things) “user-defined bridge networking”. Docker defines a default bridge network onto which any spawned container is put, but that doesn’t come with all the benefits of a user-defined bridge network. One of the major benefits that’s missing in the default bridge is the resolution of container names to IP addresses when one container calls another by name.
Each container gets its own IP, so two containers running on the same bridge network can communicate with each other by calling the other’s IP+port combination. But getting the IP of each container during every deployment, and then generating configs on the fly, is cumbersome. This is where assigning unique names to containers (enforced by docker by default) comes in handy. From inside the nginx container, instead of specifying `localhost:9000`, we can specify `sinatra-web:9000` as the upstream, where `sinatra-web` is the `--name` that’s passed to `docker run` when spawning the application container. Container names can be decided upfront, and the nginx config could be generated based on those before even building the images.
More differences between the default bridge network and user-defined bridges are documented well on the official blog.
Default bridge network (default)
A URL like `sinatra-web:9000` is generally a valid network address, provided the OS and/or the network knows how to resolve the name to an IP. Typically, hostname resolution requires help from a DNS service, either on the network or over the internet if the machine is connected to it. If the host machine is behind a firewall, that request would fail. One way to solve this easily is with an `/etc/hosts` entry that says “`sinatra-web` actually means this IP”. But editing `/etc/hosts` manually is one more step we should be able to avoid, because we’d have to get the IP of the container once it starts running and only then update the `/etc/hosts` entry.
To help with this, docker provides a flag `--link` that does pretty much the same thing internally. To make our setup work with this networking mode, we need to change the `nginx.conf` file a little bit. Instead of having the upstream point to `localhost:9000`, we’ll use the name we’re going to give the Sinatra application’s container when launching it:
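Only the upstream block changes from the earlier sketch; the rest of the config stays the same:

```nginx
upstream sinatra {
  server sinatra-web:9000;
}
```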
Build the image again:
```
$ docker build --rm -f Dockerfile.nginx -t sinatra:nginx .
```
So far we’ve bound the Sinatra app to the default setting of `127.0.0.1`. This needs a change when using bridge networking, because nginx will call the container’s IP directly instead of going through `localhost`. We’ll have to either use the IP of the container, or perhaps bind to `0.0.0.0`. On Linux though, getting the IP and using it in Sinatra is trivial:
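Something like the following; the backtick exec is the `hostname` call referred to below (`hostname -i` prints the IP assigned to the container on Linux):

```ruby
require 'sinatra'

set :port, 9000
# bind to the container's own IP instead of 127.0.0.1
set :bind, `hostname -i`.strip

get '/' do
  'hello world'
end
```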
This looks hacky enough to me that I wouldn’t suggest using the `hostname` exec call thingy in a production environment. Maybe there’s an easier way that I’m missing. Moving on, building the image:
```
$ docker build --rm -f Dockerfile.sinatra -t sinatra:app .
```
And run the processes:
```
$ docker run -d --name sinatra-web --rm sinatra:app
$ docker run -d -p 80:80 --link=sinatra-web --rm sinatra:nginx

$ curl http://localhost
hello world

$ curl http://localhost:9000
curl: (7) Failed to connect to localhost port 9000: Connection refused
```
Note that, as opposed to the host networking model, the Sinatra application’s port is not exposed to the host.
User-defined bridge (recommended)
The `--link` option is documented as legacy, and docker provides a way to avoid it. A new bridge network can be created that both containers will use:
```
$ docker network create --driver=bridge sinatra-net
```
And use this as the `--network` parameter when spawning the containers:
```
$ docker run --name sinatra-web --network=sinatra-net --rm sinatra:app
$ docker run -p 80:80 --network=sinatra-net --rm sinatra:nginx

$ curl http://localhost
hello world
```
Unlike host networking, the order of spawning the containers has to be maintained in this case. From what I gather, a container’s name becomes resolvable only once the container starts running, because that’s when the IP gets assigned to it. Nginx does a sort of pre-boot check that the defined upstreams are reachable (via the same name resolution, perhaps) and fails to start if that hostname lookup fails.
That’s it! We now have a simple way to run multiple containers with various kinds of networking modes without using docker-compose.
Some articles that helped me understand all of these concepts slightly better:
Networking with standalone containers (official tutorial series)
This tutorial goes through setting up inter-container communication using all the available methods. It also covers the `docker network inspect` command, which helps in understanding the connections better; for instance, why the ordering is important in a user-defined bridge network.
Practical Design Patterns in Docker Networking (video)
A somewhat advanced talk on docker networking. The initial part of the talk covers the basic aspects of networking.