Unit testing and interfaces

  • Good code needs tests
  • Tests require good design
  • Good design implies decoupling
  • Interfaces help decouple
  • Decoupling lets you write tests
  • Tests help keep the code good

Good code and unit testing go hand in hand, and sometimes the bridge between them is interfaces. When you have an interface, you can easily “hide” any implementation behind it, even a mock for a unit test.

An important aspect of unit testing is managing external dependencies: the tests should cover the unit directly while using fake replacements (mocks) for its dependencies.

I was given the following code and asked to write tests for it:

package mail

import (
   "fmt"
   "net"
   "net/smtp"
   "strings"
)

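// ValidateHost looks up the MX records for the email's domain, dials the first
// one and checks, through the SMTP dialogue, whether it accepts the address.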
func ValidateHost(email string) (err error) {
   mx, err := net.LookupMX(host(email))
   if err != nil {
      return err
   }

   client, err := smtp.Dial(fmt.Sprintf("%s:%d", mx[0].Host, 25))
   if err != nil {
      return err
   }

   defer func() {
      if er := client.Close(); er != nil {
         err = er
      }
   }()

   if err = client.Hello("checkmail.me"); err != nil {
      return err
   }
   if err = client.Mail("testing-email-host@gmail.com"); err != nil {
      return err
   }
   return client.Rcpt(email)
}

func host(email string) (host string) {
   i := strings.LastIndexByte(email, '@')
   return email[i+1:]
}

The first steps were to identify test cases and dependencies: Continue reading Unit testing and interfaces
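As a rough sketch of where that can lead (the interface names are mine, not necessarily the post's final design), the two external dependencies, the MX lookup and the SMTP client, can be hidden behind small interfaces so that a unit test can substitute fakes for them:

type mxResolver interface {
   LookupMX(host string) ([]*net.MX, error)
}

type smtpClient interface {
   Hello(localName string) error
   Mail(from string) error
   Rcpt(to string) error
   Close() error
}

// In production these are backed by net.LookupMX and an *smtp.Client obtained
// from smtp.Dial; in tests they can be simple in-memory fakes.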

PostgreSQL batch operations in Go

Consider the following case: when creating a user (a database insert) together with their profile (another insert), all other users must be updated (a database update) with a new score value. The score is just a float computed with a dummy formula. Finally, an action record is needed (another insert) to mark the fact that a user was created.

The tech context is PostgreSQL in Go, with pgx as the database driver and the Echo framework for the HTTP server. The database setup is straightforward using Docker; it also includes a database management interface, available at http://localhost:54321. If you clone the sample repository and start the setup with Docker Compose (docker compose up -d), a database is created with the schema used in this post when the PostgreSQL container is built.

CREATE TABLE "users" (
  "id" serial NOT NULL,
  "username" CHARACTER VARYING (100) NOT NULL,
  "score" DECIMAL NOT NULL DEFAULT 0,
  "created" TIMESTAMP(0) WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "updated" TIMESTAMP(0) WITH TIME ZONE
);

CREATE TABLE "user_profile" (
  "user_id" INTEGER NOT NULL,
  "firstname" CHARACTER VARYING (100) NOT NULL,
  "lastname" CHARACTER VARYING (100) NOT NULL
);

CREATE TABLE "actions" (
  "id" serial NOT NULL,
  "description" text NOT NULL,
  "created" TIMESTAMP(0) WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP
);

Data integrity is of interest, so all the queries will be sent within a database transaction. And because there are multiple user update queries, they will all be sent at once, in a batch of operations. Continue reading PostgreSQL batch operations in Go
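A minimal sketch of that idea (assuming pgx v4; the function name and the score formula are made up for illustration), with the inserts, the batched updates and the action record all running on one transaction:

package db

import (
   "context"

   "github.com/jackc/pgx/v4"
)

func createUser(ctx context.Context, conn *pgx.Conn, username, first, last string) error {
   tx, err := conn.Begin(ctx)
   if err != nil {
      return err
   }
   defer tx.Rollback(ctx) // no-op once the transaction is committed

   var userID int
   err = tx.QueryRow(ctx,
      `INSERT INTO users (username) VALUES ($1) RETURNING id`, username,
   ).Scan(&userID)
   if err != nil {
      return err
   }

   _, err = tx.Exec(ctx,
      `INSERT INTO user_profile (user_id, firstname, lastname) VALUES ($1, $2, $3)`,
      userID, first, last,
   )
   if err != nil {
      return err
   }

   // Queue the score updates and the action record together;
   // SendBatch ships them to PostgreSQL in a single round trip.
   batch := &pgx.Batch{}
   batch.Queue(`UPDATE users SET score = score + 0.5, updated = now() WHERE id <> $1`, userID)
   batch.Queue(`INSERT INTO actions (description) VALUES ($1)`, "user created")

   if err := tx.SendBatch(ctx, batch).Close(); err != nil {
      return err
   }

   return tx.Commit(ctx)
}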

Unmarshal JSON and XML into the same Go structure slice with proper data type

I’m consuming a REST service which returns various lists in both JSON and XML formats, similar to these:

[  
   {  
      "id":803267,
      "name":"Paris, Ile-de-France, France",
      "region":"Ile-de-France",
      "country":"France",
      "is_day":1,
      "localtime":"2018-05-12 12:53"
   },
   {  
      "id":760995,
      "name":"Batignolles, Ile-de-France, France",
      "region":"Ile-de-France",
      "country":"France",
      "is_day":0,
      "localtime":"2018-05-12"
   }
]
<?xml version="1.0" encoding="UTF-8"?>
<root>
   <geo>
      <id>803267</id>
      <name>Paris, Ile-de-France, France</name>
      <region>Ile-de-France</region>
      <country>France</country>
      <is_day>1</is_day>
     <localtime>2018-05-12 12:53</localtime>
   </geo>
   <geo>
      <id>760995</id>
      <name>Batignolles, Ile-de-France, France</name>
      <region>Ile-de-France</region>
      <country>France</country>
      <is_day>0</is_day>
      <localtime>2018-05-12</localtime>
   </geo>
</root>

And I wanted to get them into a slice of this structure:

type Locations []Location

type Location struct {
   ID        int      `json:"id" xml:"id"`
   Name      string   `json:"name" xml:"name"`
   Region    string   `json:"region" xml:"region"`
   Country   string   `json:"country" xml:"country"`
   IsDay     int      `json:"is_day" xml:"is_day"`
   LocalTime string   `json:"localtime" xml:"localtime"`
}

It’s straightforward for JSON: Continue reading Unmarshal JSON and XML into the same Go structure slice with proper data type
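A minimal sketch of the JSON side, plus the wrapper struct the XML side needs for its <root> element (it reuses the Location/Locations types above, with the payloads shortened and stored in jsonData and xmlData for brevity):

package main

import (
   "encoding/json"
   "encoding/xml"
   "fmt"
   "log"
)

var jsonData = []byte(`[{"id":803267,"name":"Paris, Ile-de-France, France","is_day":1,"localtime":"2018-05-12 12:53"}]`)

var xmlData = []byte(`<?xml version="1.0" encoding="UTF-8"?>
<root>
   <geo>
      <id>803267</id>
      <name>Paris, Ile-de-France, France</name>
      <is_day>1</is_day>
      <localtime>2018-05-12 12:53</localtime>
   </geo>
</root>`)

func main() {
   // JSON maps directly onto the slice type.
   var fromJSON Locations
   if err := json.Unmarshal(jsonData, &fromJSON); err != nil {
      log.Fatal(err)
   }

   // XML needs a small wrapper for the <root> element; its <geo>
   // children land in the same Locations slice.
   var fromXML struct {
      Geo Locations `xml:"geo"`
   }
   if err := xml.Unmarshal(xmlData, &fromXML); err != nil {
      log.Fatal(err)
   }

   fmt.Println(fromJSON[0].Name, fromXML.Geo[0].Name)
}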

API authorization through middlewares

When dealing with API authorization based on access tokens, permissions (the user can see a list, can create an item, can delete one, etc.) and/or account types (administrator, moderator, normal user, etc.), I’ve seen the approach of checking all the requirements inside the HTTP handler functions.

This post and the attached source code do not handle security, API design, data storage patterns or any other best practices that do not aim directly at the main subject: authorization through middlewares. All other code is just there to illustrate the idea as a whole.

A classical way of dealing with the various authorization checks is to verify everything inside the handler function.

type User struct {
   Username    string
   Type        string
   Permissions uint8
}

var CanDoAction uint8 = 1

func tokenIsValid(token string) bool {
   // ...
   return true
}

func getUserHandler(c echo.Context) error {
   // Check authorization token
   token := c.Get("token").(string)
   if !tokenIsValid(token) {
      return c.NoContent(http.StatusUnauthorized)
   }

   user := c.Get("user").(User)

   // Check account type
   if user.Type != "admin" {
      return c.NoContent(http.StatusForbidden)
   }

   // Check permission for handler
   if user.Permissions&CanDoAction == 0 {
      return c.NoContent(http.StatusForbidden)
   }

   // Get data and send it as response
   data := struct {
      Username string `json:"username"`
   }{
      Username: user.Username,
   }

   return c.JSON(http.StatusOK, data)
}

The handler is doing more than its actual job. Continue reading API authorization through middlewares
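A rough sketch of the middleware direction (not necessarily the post's final code; it reuses the User type, tokenIsValid and CanDoAction from above), where each concern becomes a reusable Echo middleware and the handler only returns the data:

func checkToken(next echo.HandlerFunc) echo.HandlerFunc {
   return func(c echo.Context) error {
      token, _ := c.Get("token").(string)
      if !tokenIsValid(token) {
         return c.NoContent(http.StatusUnauthorized)
      }
      return next(c)
   }
}

func checkAdmin(next echo.HandlerFunc) echo.HandlerFunc {
   return func(c echo.Context) error {
      user, _ := c.Get("user").(User)
      if user.Type != "admin" {
         return c.NoContent(http.StatusForbidden)
      }
      return next(c)
   }
}

func checkPermission(p uint8) echo.MiddlewareFunc {
   return func(next echo.HandlerFunc) echo.HandlerFunc {
      return func(c echo.Context) error {
         user, _ := c.Get("user").(User)
         if user.Permissions&p == 0 {
            return c.NoContent(http.StatusForbidden)
         }
         return next(c)
      }
   }
}

// The route wires everything together, and the handler keeps only its own job:
// e.GET("/user", getUserHandler, checkToken, checkAdmin, checkPermission(CanDoAction))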

Race condition on Echo context with GraphQL in Go

Given an API setup with GraphQL and Echo, a colleague ran into a race condition: a concurrent read/write issue on Echo’s context. GraphQL runs its resolvers in parallel if configured to do so, and when the context is shared between resolvers, things can go wrong.

I took a look at Echo’s context implementation and saw that a plain map is used for Get/Set.

For every API call, a handler function is given an Echo context and executes the GraphQL schema with that context.

func handle(c echo.Context) error {
   schema, err := gqlgo.ParseSchema(
      ...
      gqlgo.MaxParallelism(10),
   )
   if err != nil {
      return err
   }

   schema.Exec(
      c,
      ...
   )

   return nil
}

My solution was to use a custom context which embeds the original one and uses a concurrent map instead of Echo’s. Continue reading Race condition on Echo context with GraphQL in Go
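A minimal sketch of that idea (the type name is mine), embedding echo.Context and overriding Get/Set with a map guarded by a sync.RWMutex:

type safeContext struct {
   echo.Context

   mu    sync.RWMutex
   store map[string]interface{}
}

func (c *safeContext) Get(key string) interface{} {
   c.mu.RLock()
   defer c.mu.RUnlock()
   return c.store[key]
}

func (c *safeContext) Set(key string, val interface{}) {
   c.mu.Lock()
   defer c.mu.Unlock()
   if c.store == nil {
      c.store = make(map[string]interface{})
   }
   c.store[key] = val
}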

Read Go documentation

A few weeks ago I finished reading the Go documentation. The Go 1.10 documentation Android app is very helpful! It’s very easy to read and it has automatic bookmarks; whenever you get back into the app, you can return to the chapter you were reading, at the same line you were at before closing it.

It was very nice to find out a lot of details. Of course, I don’t remember all of them after just one reading, but when I bump into certain situations (performance, how slices really work, libraries, the memory model, etc.) it’s easier to start researching further, even if I don’t remember the exact point.

I don’t want to write Go without understanding, as well as I can, its way of getting things done.

PHP backward compatibility

Could PHP 7 inherit Go’s philosophy of not introducing breaking changes in future versions? I asked myself this today.

PHP has been on a really nice path since version 7, with its courageous changes, and I believe more is yet to come. Because things must evolve, there were some incompatible changes from 5 to 7, and that’s great. Now that the language is more mature, I would like to see more stability, along with more of those hardcore changes that keep it in its good position.

Regarding this, something I really wish for PHP 7 is to approach Go’s idea of not introducing incompatibilities in future versions. Programs should run flawlessly when upgrading PHP. It would be a great encouragement for everyone to upgrade safely as soon as a new version is released.

Configurable implementation hidden behind a contract

Some concrete implementations are better hidden behind contracts, in a dedicated package. A layer of abstraction can save you later, especially on unstable projects where requirements often change, or when you just want to test an idea, or you don’t have the time to finish a task the way you’d like to.

A good contract will save you. You can “throw” your implementation behind it and come back later to refine it. Sometimes later means in a few years. But if you’re behind a well-designed contract, you’ll most probably alter only the concrete implementation, without touching large areas of the project.

I had to filter some user input. Some strings had to be HTML escaped, some sanitized to prevent various attacks. I wrapped everything into a package, behind a contract. For escaping I used Go’s html.EscapeString function, while for sanitizing I found the bluemonday package, which is inspired by an OWASP sanitizer for Java. Continue reading Configurable implementation hidden behind a contract
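A minimal sketch of such a contract and its concrete implementation (the package and interface names are mine, not the post's), built on html.EscapeString and bluemonday:

package filter

import (
   "html"

   "github.com/microcosm-cc/bluemonday"
)

// Filter is the contract the rest of the project depends on.
type Filter interface {
   Escape(string) string
   Sanitize(string) string
}

// New returns the concrete implementation, which can be refined
// later without touching the callers.
func New() Filter {
   return &filter{policy: bluemonday.UGCPolicy()}
}

type filter struct {
   policy *bluemonday.Policy
}

func (f *filter) Escape(s string) string {
   return html.EscapeString(s)
}

func (f *filter) Sanitize(s string) string {
   return f.policy.Sanitize(s)
}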

Using channels to control flow

When I first met channels in Go, I thought they were just data containers: when you want to send data between goroutines, they are the go-to. But later I saw they can also be used to control an application’s flow, like waiting until something happens and only then going on with the normal flow.

Here’s an approach for waiting until some tasks have finished. Consider input data that can come from anywhere: HTTP requests, files, sockets, etc. That data can come in fast, but processing it can take some time. If the application needs to exit (you restart or stop it) and you want to wait until all the data held in memory is processed, you can use channels.

First, I have a goroutine which processes input data read from a buffered channel; I simulate the slow processing with a sleep. Continue reading Using channels to control flow
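A minimal sketch of the pattern (the job type and the timings are made up), with a done channel that signals when everything left in the buffered channel has been processed:

package main

import (
   "fmt"
   "time"
)

func worker(jobs <-chan string, done chan<- struct{}) {
   for j := range jobs {
      time.Sleep(200 * time.Millisecond) // simulate slow processing
      fmt.Println("processed:", j)
   }
   // The jobs channel was closed and drained: signal we are finished.
   close(done)
}

func main() {
   jobs := make(chan string, 10)
   done := make(chan struct{})

   go worker(jobs, done)

   for i := 0; i < 5; i++ {
      jobs <- fmt.Sprintf("job %d", i)
   }

   // No more input: close the channel and block until the worker drains it.
   close(jobs)
   <-done

   fmt.Println("all in-memory data processed, safe to exit")
}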

Converting seconds from float to int

Another small one on time precision that I’ve noticed.

package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	now := time.Now().Add(time.Hour)
	seconds := time.Now().Sub(now).Seconds()
	fmt.Println(int(seconds))
	fmt.Println(int(math.Floor(seconds)))
}

Line 12 will print -3599, while line 13 will print -3600 (tested on Ubuntu): a plain int conversion truncates toward zero, while math.Floor always rounds down. So watch out when converting a number of seconds to an integer; you might not always get what you need.