QA tools for Go apps in CI/CD

Go Meta Linter is a great tool for running code quality checks: vet, static analysis, security checks, linting, and others. I’ve used it a few times, enjoyed it, and I built a basic setup to include it in CI/CD, along with unit test execution.

All you need is Docker and a Docker image repository like Docker Hub. You build an image to run the tools in, push it to your repository, then pull it on your CI/CD machine and run a container from it, as simply as:

docker run -ti --rm \
    -e PKG=github.com/andreiavrammsd/dotenv-editor \
    -e CONFIG=dev/.gometalinter.json \
    -v $PWD:/app \
    yourdockerusername/go-qa-tools \
    make

(later edit: I archived the example package)
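
The CONFIG variable points gometalinter to a JSON configuration file. As an illustration only (the enabled linters and values below are placeholders, not the exact file from the repository), such a file looks like:

{
  "Vendor": true,
  "Deadline": "5m",
  "Enable": ["vet", "golint", "errcheck"]
}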

Of course, it can be integrated into a service like Travis:

sudo: required

language: minimal

install:
- docker pull andreiavrammsd/go-qa-tools

script:
- docker run -ti -e PKG=github.com/andreiavrammsd/dotenv-editor -e CONFIG=dev/.gometalinter.json -v $PWD:/app andreiavrammsd/go-qa-tools make

See the full Go QA tools setup on GitHub.

Database migrations

Database migrations can be easily integrated into your deploy system as a decoupled process, so the tooling can be replaced at any time if needed, and you can work with it without interfering with the project itself.

The entire process can be isolated in a Docker container, or the tools can all be installed directly on your machine. The setup presented here is for CentOS.

Let’s assume the following context:

– A machine to run the migrations from (with Docker installed)
– A MySQL database accessible from the machine mentioned above
– A secrets manager to keep the database access credentials safe
– A Git repository holding the migration files (a directory with all the migration files in the proper format)
– A private SSH key to access the above-mentioned repository

Every time you deploy your app, you could run all the migrations you committed to your repository. Your deploy system should trigger the migration tool at the proper moment.

The key piece in this setup is migrate, a flexible tool I’ve had no problems with.
As presented in this Dockerfile, different tools are used to perform each required step:
– Get migration files from the repository
– Get a secret string with database credentials from the secrets manager
– Extract the database credentials from the secret string
– Execute the migrations
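
For the execution step, migrate is also available as a Go library, so the same migrations can be run programmatically. A minimal sketch (the source path and DSN are placeholders; real values come from the secrets manager and the cloned repository):

package main

import (
   "log"

   "github.com/golang-migrate/migrate/v4"
   _ "github.com/golang-migrate/migrate/v4/database/mysql"
   _ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
   // Placeholder migrations directory and database credentials.
   m, err := migrate.New(
      "file://migrations",
      "mysql://user:password@tcp(localhost:3306)/dbname",
   )
   if err != nil {
      log.Fatal(err)
   }

   // Apply all pending up migrations; ErrNoChange means there was nothing to run.
   if err := m.Up(); err != nil && err != migrate.ErrNoChange {
      log.Fatal(err)
   }
}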

Take a look at the full setup on GitHub.

PHP unit testing with real coverage

If you really need to cover all your code with tests, watch out for your short (ternary) if statements.

Given the following class:

<?php

class Person
{
    /**
     * @var string
     */
    private $gender;

    /**
     * @param string $gender
     */
    public function setGender(string $gender)
    {
        $this->gender = $gender;
    }

    /**
     * @return string
     */
    public function getTitle() : string
    {
        return $this->gender === 'f' ? 'Mrs.' : 'Mr.';
    }
}

And a PHPUnit test:

<?php

use PHPUnit\Framework\TestCase;

class PersonTest extends TestCase
{
    /**
     * @dataProvider gendersAndTitle
     * @param $gender
     * @param $expectedTitle
     */
    public function testTitle($gender, $expectedTitle)
    {
        $person = new Person();
        $person->setGender($gender);

        $title = $person->getTitle();
        $this->assertEquals($expectedTitle, $title);
    }

    public function gendersAndTitle() : array
    {
        return [
            ['f', 'Mrs.'],
        ];
    }
}

If you run the test with coverage, you get 100% coverage. But the data provider only holds data for the “f/Mrs.” case, so the else branch of the ternary is never actually tested; the line was merely executed while running the test, and line coverage cannot tell which branch was taken.

Update the getTitle method of the Person class to use a normal if statement:

public function getTitle() : string
{
    if ($this->gender === 'f') {
        return 'Mrs.';
    }

    return 'Mr.';
}

Execute the test again and you get 80% coverage: the return 'Mr.'; line is now visibly uncovered. Add an ['m', 'Mr.'] entry to the data provider and you are back to 100%, this time for real.

Here’s a Dockerfile to quickly test it yourself:

FROM php:7.2-cli-alpine3.8

RUN apk add --update --no-cache make alpine-sdk autoconf && \
    pecl install xdebug && \
    docker-php-ext-enable xdebug && \
    apk del alpine-sdk autoconf && \
    wget -O phpunit https://phar.phpunit.de/phpunit-6.phar && chmod +x phpunit

WORKDIR /src

Save the Person class to a Person.php file and the test to PersonTest.php.

docker build -t phpunit-coverage .
docker run --rm -ti -v $PWD:/src phpunit-coverage sh

./phpunit --bootstrap Person.php --coverage-html coverage --whitelist Person.php .

See the coverage directory (index.html) created after running the test.

Clean up when you’re done:

docker rmi phpunit-coverage

Match sorted and unsorted integers

I was wondering if there’s a performance difference between matching the integers from two slices when the numbers are sorted and when they’re not. I didn’t stress the situation to its limits; I went up to 10k numbers.

For small sets, of course, the difference is not worth mentioning. For large slices, if you really, really focus on performance, you can do better with sorted values, but only if they are already sorted; if you have to sort them on every run, the sorting cost eats the gain.

var a = []int{ ... }
var b = []int{ ... }

func IterateNotSorted() int {
   count := 0
   for _, i := range a {
      for _, j := range b {
         if i == j {
            count++
            break
         }
      }
   }

   return count
}

var c = []int{ ... }
var d = []int{ ... }

func IterateSorted() int {
   count := 0
   for _, i := range c {
      for _, j := range d {
         if i == j {
            count++
            break
         }
      }
   }

   return count
}

Fill in the slices with some numbers and test it yourself.

func BenchmarkIterateNotSorted(b *testing.B) {
   for n := 0; n < b.N; n++ {
      IterateNotSorted()
   }
}

func BenchmarkIterateSorted(b *testing.B) {
   for n := 0; n < b.N; n++ {
      IterateSorted()
   }
}
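
If both slices are guaranteed to be sorted, you can go further than the nested loop: a two-pointer walk matches them in linear time. A sketch over the same c and d slices (assuming the values within each slice are distinct, so the counting semantics stay the same):

func IterateSortedTwoPointers() int {
   count := 0
   i, j := 0, 0
   for i < len(c) && j < len(d) {
      switch {
      case c[i] == d[j]:
         // Values match: count and advance both slices.
         count++
         i++
         j++
      case c[i] < d[j]:
         // c is behind: advance it.
         i++
      default:
         // d is behind: advance it.
         j++
      }
   }

   return count
}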


Docker multi-stage builds with Docker Compose

When defining a multi-service environment with Docker and Docker Compose, the usual way has been to use a Dockerfile for each service, starting with the base image and adding all the custom needs:

/env/php/Dockerfile

FROM php:7.2-fpm-alpine3.7

RUN docker-php-ext-install opcache

/env/nginx/Dockerfile

FROM nginx:1.15-alpine

ADD virtual-host.conf /etc/nginx/conf.d/default.conf

Then you could compose all the services:

/docker-compose.yml

version: '3'

services:
  php:
    build:
      context: ./env/php
    volumes:
      - ./:/app
    working_dir: /app
    restart: unless-stopped
  nginx:
    build:
      context: ./env/nginx
    volumes:
      - ./:/app
    ports:
      - "80:80"
    restart: unless-stopped

Then Docker 17.05 introduced multi-stage builds, which allow using a single Dockerfile.
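
As a minimal sketch of the idea (illustrative stage names and compose fragment; the linked post develops the full setup), the two Dockerfiles above become named stages of one file, and each service selects its stage with target (compose file format 3.4+):

/Dockerfile

FROM php:7.2-fpm-alpine3.7 AS php

RUN docker-php-ext-install opcache

FROM nginx:1.15-alpine AS nginx

ADD virtual-host.conf /etc/nginx/conf.d/default.conf

/docker-compose.yml

version: '3.4'

services:
  php:
    build:
      context: .
      target: php
  nginx:
    build:
      context: .
      target: nginx

Continue reading Docker multi-stage builds with Docker Compose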

PHP performance increase from 5.6 to 7.2

I remember when PHP 7 was released. The first thing I did was a simple performance test. I don’t remember the script exactly, but it was similar to this:

<?php

$time = microtime(true);

$array = [];
for ($i = 0; $i < 10000; $i++) {
    if (!array_key_exists($i, $array)) {
        $array[$i] = [];
    }
    
    for ($j = 0; $j < 1000; $j++) {
        if (!array_key_exists($j, $array[$i])) {
            $array[$i][$j] = true;
        }
    }
}

echo sprintf(
    "Execution time: %f seconds\nMemory usage: %f MB\n\n",
    microtime(true) - $time,
    memory_get_usage(true) / 1024 / 1024
);

The results made me really happy. Continue reading PHP performance increase from 5.6 to 7.2

Apixu Go: A Golang package for Apixu weather service

Not long ago I mentioned Apixu in a post about handling errors. I found out about this service on DevForum, a development discussions platform I visit daily. What I like most about Apixu is that they have libraries in various languages for consuming their API. Not great libraries, and not all of them are complete, but they try to offer as many options as they can for their service.

I noticed they were missing a Go library, and I was missing an idea to learn new things on. So I just started writing code until it grew into a full package that covers all API methods, with error handling, both JSON and XML formats, unit tests, and versioning.

It has a simple interface that clearly defines the API methods with their input parameters and responses. And it can be extended for custom needs.

Some important things I learned from the process are simplicity, segregation and isolation, specific errors, memory management, and creating custom marshalers.

Check it out on GitHub. See the documentation for the package and for the API.


In the end, they adopted my package among their official ones.

Unit testing and interfaces

  • Good code needs tests
  • Tests require good design
  • Good design implies decoupling
  • Interfaces help decouple
  • Decoupling lets you write tests
  • Tests help maintain good code

Good code and unit testing go hand in hand, and sometimes interfaces are the bridge between them. When you have an interface, you can easily “hide” any implementation behind it, even a mock for a unit test.

An important subject of unit testing is managing external dependencies. The tests should directly cover the unit while using fake replacements (mocks) for the dependencies.

I was given the following code and asked to write tests for it:

package mail

import (
   "fmt"
   "net"
   "net/smtp"
   "strings"
)

func ValidateHost(email string) (err error) {
   mx, err := net.LookupMX(host(email))
   if err != nil {
      return err
   }

   client, err := smtp.Dial(fmt.Sprintf("%s:%d", mx[0].Host, 25))
   if err != nil {
      return err
   }

   defer func() {
      if er := client.Close(); er != nil {
         err = er
      }
   }()

   if err = client.Hello("checkmail.me"); err != nil {
      return err
   }
   if err = client.Mail("testing-email-host@gmail.com"); err != nil {
      return err
   }
   return client.Rcpt(email)
}

func host(email string) (host string) {
   i := strings.LastIndexByte(email, '@')
   return email[i+1:]
}
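
One possible direction (a sketch with hypothetical names, not necessarily the exact design from the full post) is to put the two external dependencies, the MX lookup and the SMTP client, behind small interfaces so tests can substitute fakes:

// smtpClient lists only the methods ValidateHost needs from *smtp.Client,
// so a test can substitute a fake implementation.
type smtpClient interface {
   Hello(localName string) error
   Mail(from string) error
   Rcpt(to string) error
   Close() error
}

// dependencies groups the external calls: in production they delegate to
// net.LookupMX and smtp.Dial, in tests they return canned values.
type dependencies struct {
   lookupMX func(name string) ([]*net.MX, error)
   dial     func(addr string) (smtpClient, error)
}

ValidateHost would then call these instead of the net and smtp packages directly, and the unit tests inject fakes for each test case.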

The first steps were to identify test cases and dependencies: Continue reading Unit testing and interfaces

PostgreSQL batch operations in Go

Consider the following case: when creating a user (database insert) with their profile (another insert), other users must be updated (database update) with a new score value. Score is just a float computed with a dummy formula. And then an action record is needed (insert), marking the fact that a user was created.

The tech context is PostgreSQL in Go, with pgx as the database driver and the Echo framework for the HTTP server. The database setup is straightforward using Docker; it also includes a database management interface, available at http://localhost:54321. If you clone the sample repository and start the setup with Docker Compose (docker compose up -d), a database is created with the schema used in this post when the PostgreSQL Docker container is built.

CREATE TABLE "users" (
  "id" serial NOT NULL,
  "username" CHARACTER VARYING (100) NOT NULL,
  "score" DECIMAL NOT NULL DEFAULT 0,
  "created" TIMESTAMP(0) WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "updated" TIMESTAMP(0) WITH TIME ZONE
);

CREATE TABLE "user_profile" (
  "user_id" INTEGER NOT NULL,
  "firstname" CHARACTER VARYING (100) NOT NULL,
  "lastname" CHARACTER VARYING (100) NOT NULL
);

CREATE TABLE "actions" (
  "id" serial NOT NULL,
  "description" text NOT NULL,
  "created" TIMESTAMP(0) WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP
);

Data integrity is of interest, so all the queries will be sent in a database transaction. And because there are multiple user update queries, they will all be sent at once in a batch of operations.
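
As a taste of what the batch looks like (a minimal sketch using the pgx v4 batch API on a transaction; the function name and values are placeholders, and the full wiring into the Echo handler is in the linked post):

package repository

import (
   "context"

   "github.com/jackc/pgx/v4"
)

// createUserBatch queues several statements and sends them over the
// transaction in a single round trip.
func createUserBatch(ctx context.Context, tx pgx.Tx) error {
   batch := &pgx.Batch{}
   batch.Queue("INSERT INTO users (username) VALUES ($1)", "john")
   batch.Queue("INSERT INTO user_profile (user_id, firstname, lastname) VALUES ($1, $2, $3)", 1, "John", "Doe")
   batch.Queue("UPDATE users SET score = score + $1, updated = now() WHERE id <> $2", 0.5, 1)
   batch.Queue("INSERT INTO actions (description) VALUES ($1)", "user created")

   // Close executes the queued queries and returns the first error, if any;
   // committing or rolling back the transaction is up to the caller.
   return tx.SendBatch(ctx, batch).Close()
}

Continue reading PostgreSQL batch operations in Go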