Database migrations

Database migrations can be easily integrated into your deploy system. Running them as a decoupled process means the migration tooling can be replaced by another tool at any time if needed, and you can work with it without interfering with the project itself.

The entire process can be isolated in a Docker container, or all the tools can be installed directly on your machine. The setup presented here is for CentOS.

Let’s assume the following context:

– A machine to run the migrations from (with Docker installed)
– A MySQL database accessible from the machine mentioned above
– A secrets manager to keep the database access credentials safe
– A Git repository containing the migration files (a directory with all the migration files in the proper format)
– A private SSH key to access the above-mentioned repository
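
The “proper format” depends on the migration tool; with migrate (introduced below), the directory would typically hold versioned up/down SQL pairs, for example (file names are illustrative):

migrations/
    1_create_users_table.up.sql
    1_create_users_table.down.sql
    2_add_users_email_index.up.sql
    2_add_users_email_index.down.sql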

Every time you deploy your app, you could run all the migrations you committed to your repository. Your deploy system should trigger the migration tool at the proper moment.

The key piece in this setup is migrate, a flexible tool I have had no problems with.
As presented in this Dockerfile, different tools are used to perform each required step:
– Get migration files from the repository
– Get a secret string with database credentials from the secrets manager
– Extract the database credentials from the secret string
– Execute the migrations
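
The last step boils down to a single migrate call. A sketch, assuming migrate here is the golang-migrate CLI, with placeholder credentials and paths that would actually come from the secrets manager:

migrate -path /migrations \
        -database "mysql://user:password@tcp(db-host:3306)/dbname" \
        up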

Take a look at the full setup on GitHub.

Docker multi-stage builds with Docker Compose

When defining a multi-service environment with Docker and Docker Compose, the usual way was to use a Dockerfile for each service, starting with the base image and adding any customizations on top:

/env/php/Dockerfile

FROM php:7.2-fpm-alpine3.7

RUN docker-php-ext-install opcache

/env/nginx/Dockerfile

FROM nginx:1.15-alpine

ADD virtual-host.conf /etc/nginx/conf.d/default.conf

Then you could compose all the services:

/docker-compose.yml

version: '3'

services:
  php:
    build:
      context: ./env/php
    volumes:
      - ./:/app
    working_dir: /app
    restart: unless-stopped
  nginx:
    build:
      context: ./env/nginx
    volumes:
      - ./:/app
    ports:
      - "80:80"
    restart: unless-stopped

Then Docker 17.05 introduced multi-stage builds, which make it possible to use a single Dockerfile. Continue reading Docker multi-stage builds with Docker Compose
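
The rough idea, as a sketch rather than the exact setup from the full post: declare each service as a named stage in one Dockerfile, then point each Compose service at its stage through the target build option (Compose file format 3.4 or newer).

/Dockerfile

FROM php:7.2-fpm-alpine3.7 AS php

RUN docker-php-ext-install opcache

FROM nginx:1.15-alpine AS nginx

# the nginx config path is now relative to the single build context
ADD env/nginx/virtual-host.conf /etc/nginx/conf.d/default.conf

/docker-compose.yml

version: '3.4'

services:
  php:
    build:
      context: .
      target: php
    # volumes, working_dir, restart stay as before
  nginx:
    build:
      context: .
      target: nginx
    # volumes, ports, restart stay as before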

Reloading Go apps automatically while developing

There are several ways to automatically reload Go apps while developing, so you don’t have to manually stop your app, rebuild it, and run it again. Recently I ran into this article about a nice tool called Fresh.

I wanted to start using live reload (or hot reload) for a project of mine, but it had a particularity that gave me trouble when I tried Fresh. My case was:

  • I had a Git repository with multiple apps
  • The apps shared some packages
  • If I edited app1 or the packages it used, I wanted only app1 to be restarted, not both app1 and app2 (and similarly for app2)
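
For context, the repository layout looked roughly like this (directory names are illustrative, not the actual project):

app1/
    main.go
app2/
    main.go
shared/
    config/
    storage/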

Continue reading Reloading Go apps automatically while developing

Docker container with internet connection but no working DNS server

I’ve come across a network setup where common DNS servers like 8.8.8.8 were not reachable, so domain names could not be resolved inside Docker containers.

Find out the DNS servers on your system (I was using Ubuntu 16.04):

# Get the name of the interface you're using to connect to your network
ifconfig

# Then get the DNS servers associated with it (I used the first one in the list)
nmcli device show <interfacename> | grep IP4.DNS | awk '{print $2}'

Now you have two options (“x.x.x.x” is the DNS server you chose above):

  • You can run containers with the --dns flag:
    docker run -tid --dns x.x.x.x ubuntu:16.04 bash
  • Or, as I did, you can configure the DNS to be used automatically by all future containers:
    • Open /etc/network/interfaces
    • Add “dns-nameservers x.x.x.x” after “iface lo inet loopback”
    • Restart the interfaces: sudo ifdown -a && sudo ifup -a

    The same thing as a pair of commands:

    sudo sed -i '/iface lo inet loopback/a dns-nameservers x.x.x.x' /etc/network/interfaces
    sudo ifdown -a && sudo ifup -a
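
After either change (add --dns x.x.x.x to the command if you went with the first option), you can verify name resolution from inside a throwaway container; busybox is just a convenient small image:

docker run --rm busybox nslookup google.com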
      

Isolation

Big fan of isolation here, from code libraries to app environments.

Code should be organized in reusable units (packages, libraries, components), isolated from other units, with the interaction between them happening through APIs that are well described by a contract. They should also be easy to extend.

Apps should live in isolated environments. I’m talking about servers, virtual machines, and containers. If different services run on the same machine, you can isolate them using containers. This way, you can safely and independently deploy, upgrade, balance traffic, and move services across machines.
If you must work directly inside a container (maybe to perform some upgrade) and something goes wrong, you just recreate the container from the original image.
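
A minimal sketch of that workflow (image and container names are just examples):

# run a service isolated in its own container
docker run -d --name some-service some-service:1.0

# if an in-container change goes wrong, throw the container away
# and recreate it from the original image
docker rm -f some-service
docker run -d --name some-service some-service:1.0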

I remember having some issues upgrading a Python package in an old production environment. The attempted upgrades just crashed. I knew the service could stay offline for a while, but I had to put it back up. As other services were running on that machine, I didn’t want to interfere with them. So I just set up a new environment for the Python app inside a Docker container, installed it from scratch, and it was up again.

Isolation also suits legacy apps well. You can throw everything inside a container and stop being afraid of moving between machines, interfering with other services, or breaking things.