Docker and iptables

The firewall must be configured properly to prevent unwanted access, which could lead to data loss and exploits. UFW allows you to quickly close and open ports. The following configuration closes all incoming ports and opens 22 (SSH), 80 (HTTP) and 443 (HTTPS):

ufw default deny incoming
ufw allow OpenSSH
ufw allow http
ufw allow https
ufw enable

But Docker updates iptables when you bind a container port to the host, opening the port for public access. To prevent this, you can bind the port to an internal address (a private one or 127.0.0.1). Another way is to tell Docker to never update iptables by setting the "iptables" option to false in /etc/docker/daemon.json. This file should contain a JSON object like { "iptables": false }.
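For example, publishing a container port only on the loopback interface keeps it reachable from the host but not from outside (nginx and port 8080 are just placeholders here):

# publish the container's port 80 only on 127.0.0.1, not on public interfaces
docker run -d -p 127.0.0.1:8080:80 nginx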

The firewall and daemon.json setup can be automated as:

apt install -y ufw
ufw default deny incoming
ufw allow OpenSSH
ufw allow http
ufw allow https
ufw --force enable # --force prevents interaction

apt install -y jq
touch /etc/docker/daemon.json
[[ -z $(cat /etc/docker/daemon.json) ]] && echo "{}" > /etc/docker/daemon.json
echo $(jq '.iptables=false' /etc/docker/daemon.json) > /etc/docker/daemon.json
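The new setting only takes effect once the Docker daemon is restarted (assuming a systemd-based host):

systemctl restart docker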

But there are cases where preventing Docker from manipulating iptables is too much and DNS stops resolving inside some containers. The issue I ran into was while building an image from the NGINX Alpine base image:

---> Running in 71130dd103f3
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/main: temporary error (try again later)
WARNING: Ignoring APKINDEX.b89edf6e.tar.gz: No such file or directory

For containers like this one you could use --network host, or manually add the iptables rules yourself, which is probably not the best idea because they could change in the future. These are the rules I've seen Docker add:

iptables -N DOCKER
iptables -N DOCKER-ISOLATION-STAGE-1
iptables -N DOCKER-ISOLATION-STAGE-2
iptables -A FORWARD -j DOCKER-USER
iptables -A FORWARD -j DOCKER-ISOLATION-STAGE-1
iptables -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -o docker0 -j DOCKER
iptables -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
iptables -A FORWARD -i docker0 -o docker0 -j ACCEPT
iptables -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
iptables -A DOCKER-ISOLATION-STAGE-1 -j RETURN
iptables -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
iptables -A DOCKER-ISOLATION-STAGE-2 -j RETURN
iptables -A DOCKER-USER -j RETURN
iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
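If you do go the --network host route mentioned above, the build simply reuses the host's network stack (the image tag here is a placeholder):

docker build --network host -t my-nginx .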

I did not specifically need to stop Docker from updating iptables, as I was just experimenting, so I let it do its job. I'm publishing (-p) the ports I need for public access and exposing (--expose) the private ones.
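As a rough example (image names and ports are placeholders):

docker run -d -p 80:80 my-public-service
docker run -d --expose 5000 my-internal-service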

Protect Docker secrets files

I have a Docker container managed with Docker Compose, which defines the unless-stopped restart policy. But the container never starts after I reboot the machine, although it does restart if I just restart the Docker service. I have a similar setup on another machine with no such issues.

I kept digging into the issue until it hit me all of a sudden: I'm using secrets read from files, and the files are in the /tmp directory, which gets cleared on reboot, so the container fails to start.

When I defined the secrets files, I just dumped them somewhere outside the source code repository, without thinking too much about where. I wasn't even sure I was going to use them for long.
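The lesson: keep the secret files in a location that survives reboots and point the Compose file there. A minimal sketch of a file-based secret (the path and secret name are hypothetical; file-based secrets need Compose file format 3.1 or newer):

secrets:
  db_password:
    file: /opt/myapp/secrets/db_password.txt  # persistent location, not /tmp

services:
  app:
    secrets:
      - db_password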

Database migrations

Database migrations can easily be integrated into your deploy system as a decoupled process, so the tooling can be replaced at any time if needed and you can work with it without interfering with the project itself.

The entire process can be isolated into a Docker container, or the tools can all be installed directly on your machine. The presented setup is for CentOS.

Let’s assume the following context:

– Machine to run the migrations from (with Docker installed)
– MySQL database with access from the machine mentioned above
– A secrets manager to keep the database access credentials safe
– Git repository holding the migration files (a directory with all the migration files in the proper format)
– Private SSH key to access the above-mentioned repository

Every time you deploy your app, you could run all the migrations you committed to your repository. Your deploy system should trigger the migration tool at the proper moment.

The key piece in this setup is migrate, a flexible tool I've had no problems with.
As presented in this Dockerfile, there are different tools used to perform each required step:
– Get migration files from the repository
– Get a secret string with database credentials from the secrets manager
– Extract the database credentials from the secret string
– Execute the migrations
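Once the credentials are extracted, the last step boils down to a single migrate call; a rough sketch with placeholder credentials, host and paths:

migrate -path ./migrations -database "mysql://user:password@tcp(db-host:3306)/app" up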

Take a look at the full setup on GitHub.

Docker multi-stage builds with Docker Compose

When defining a multi-service environment with Docker and Docker Compose, the usual way was to use a Dockerfile for each service, starting with the base image and adding everything the service needs:

/env/php/Dockerfile

FROM php:7.2-fpm-alpine3.7

RUN docker-php-ext-install opcache

/env/nginx/Dockerfile

FROM nginx:1.15-alpine

ADD virtual-host.conf /etc/nginx/conf.d/default.conf

Then you could compose all services.

/docker-compose.yml

version: '3'

services:
  php:
    build:
      context: ./env/php
    volumes:
      - ./:/app
    working_dir: /app
    restart: unless-stopped
  nginx:
    build:
      context: ./env/nginx
    volumes:
      - ./:/app
    ports:
      - "80:80"
    restart: unless-stopped

Then Docker 17.05 introduced multi-stage builds, allowing you to use a single Dockerfile.
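One possible shape of the idea, just as a sketch (the stage names and single-Dockerfile layout are illustrative, and build targets in Compose need file format 3.4 or newer):

/Dockerfile

FROM php:7.2-fpm-alpine3.7 AS php
RUN docker-php-ext-install opcache

FROM nginx:1.15-alpine AS nginx
ADD virtual-host.conf /etc/nginx/conf.d/default.conf

Each service in docker-compose.yml then builds from the same Dockerfile but targets its own stage:

services:
  php:
    build:
      context: .
      target: php
  nginx:
    build:
      context: .
      target: nginx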

Reloading Go apps automatically while developing

There are some ways to automatically reload Go apps while developing, so you don't have to manually stop your app, build it, and run it again. Recently I ran into this article about a nice tool called Fresh.

I wanted to start using live reload (or hot reload) for a project of mine, but it had a particularity that gave me trouble when I tried Fresh. My case was:

  • I had a Git repository with multiple apps
  • The apps were sharing some packages
  • If I edited app1 and the packages it used, I wanted only app1 to be restarted, not both app1 and app2 (and similarly for app2)


Docker container with internet connection but no working DNS server

I've come across a network setup where common DNS servers like 8.8.8.8 were not working, so domain names could not be resolved inside Docker containers.

Find out the DNS servers on your system (I was using Ubuntu 16.04):

# Get the name of the interface you're using to connect to your network
ifconfig

# Then get the DNS servers associated with it (I used the first one in the list)
nmcli device show <interfacename> | grep IP4.DNS | awk '{print $2}'

 

Now you have two options (“x.x.x.x” is the DNS server you chose above):

  • You can run containers with the --dns flag:
    docker run -tid --dns x.x.x.x ubuntu:16.04 bash
  • Or, as I did, you can configure the DNS to be used automatically by all future containers:
    • open /etc/network/interfaces
    • add “dns-nameservers x.x.x.x” after “iface lo inet loopback”
    • restart the interfaces: sudo ifdown -a && sudo ifup -a

    Or, scripted:

    sudo sed -i '/iface lo inet loopback/a dns-nameservers x.x.x.x' /etc/network/interfaces
    sudo ifdown -a && sudo ifup -a

Isolation

Big fan of isolation here, from code libraries to app environments.

Code should be organized into reusable, easily extensible units (packages, libraries, components), isolated from other units, with the interaction between them happening through APIs well described by a contract.

Apps should live in isolated environments. I'm talking about servers, virtual machines, and containers. If different services run on the same machine, you can isolate them using containers. That way, you can safely and independently deploy, make upgrades, balance traffic, and move services across machines.
If you must work directly in a container (maybe to perform some upgrade) and something goes wrong, you just restart the container from the original image.

I remember having some issues upgrading a Python package in an old production environment. The attempted upgrades just crashed. I knew the service could stay offline for a while, but I had to put it back up. As other services were running on that machine, I didn’t want to interfere with them. So I just set up a new environment for the Python app inside a Docker container, installed it from scratch, and it was up again.

Isolation also suits legacy apps well. You can just throw everything inside a container and never be afraid of moving between machines, interfering with other services, or breaking things.