Error handling in Echo framework

If misused, error handling and logging in Go can become verbose and tangled. If the two are decoupled, you can simply pass the error forward (adding context to it if necessary, or using specific error structs) and have it logged or sent to various systems (like New Relic) somewhere else in the code, not at the place where you receive it from a function call.

One of the features I appreciate the most in the Echo framework is the HTTPErrorHandler, which can be customized to fit any needs. Combined with the recover middleware, error management becomes an easy task.

package main

import (
   "errors"
   "fmt"
   "net/http"

   "github.com/labstack/echo/v4"
   "github.com/labstack/echo/v4/middleware"
   "github.com/labstack/gommon/log"
)

func main() {
   server := echo.New()
   server.Use(
      middleware.Recover(),   // Recover from all panics to always have your server up
      middleware.Logger(),    // Log everything to stdout
      middleware.RequestID(), // Generate a request id on the HTTP response headers for identification
   )
   server.Debug = false
   server.HideBanner = true
   server.HTTPErrorHandler = func(err error, c echo.Context) {
      // Take required information from error and context and send it to a service like New Relic
      fmt.Println(c.Path(), c.QueryParams(), err.Error())

      // Call the default handler to return the HTTP response
      server.DefaultHTTPErrorHandler(err, c)
   }

   server.GET("/users", func(c echo.Context) error {
      users, err := dbGetUsers()
      if err != nil {
         return err
      }

      return c.JSON(http.StatusOK, users)
   })

   server.GET("/posts", func(c echo.Context) error {
      posts, err := dbGetPosts()
      if err != nil {
         return err
      }

      return c.JSON(http.StatusOK, posts)
   })

   log.Fatal(server.Start(":8088"))
}

func dbGetUsers() ([]string, error) {
   return nil, errors.New("database error")
}

func dbGetPosts() ([]string, error) {
   panic("an unhandled error occurred")
   return nil, nil
}

Avoid logging inside the handler functions:

server.GET("/users", func(c echo.Context) error {
   users, err := dbGetUsers()
   if err != nil {
      log.Errorf("cannot get users: %s", err)
      return err
   }

   return c.JSON(http.StatusOK, users)
})

Keep the handler functions clean, check your errors and pass them forward.
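
As a sketch of this decoupling, assuming the server and imports from the example above (the ErrUserNotFound error and the dbGetUser function are hypothetical, just for illustration), a handler only adds context to the error and returns it, while the custom HTTPErrorHandler decides centrally how to report it and which status code to respond with:

var ErrUserNotFound = errors.New("user not found") // hypothetical domain error

server.HTTPErrorHandler = func(err error, c echo.Context) {
   // Report the error centrally (stdout here; could be New Relic, Sentry, etc.)
   fmt.Println(c.Path(), err.Error())

   // Map known domain errors to HTTP status codes
   if errors.Is(err, ErrUserNotFound) {
      err = echo.NewHTTPError(http.StatusNotFound, ErrUserNotFound.Error())
   }

   // Let the default handler build the HTTP response
   server.DefaultHTTPErrorHandler(err, c)
}

server.GET("/users/:id", func(c echo.Context) error {
   user, err := dbGetUser(c.Param("id")) // hypothetical lookup that can return ErrUserNotFound
   if err != nil {
      // No logging here: just add context and pass the error forward
      return fmt.Errorf("get user %s: %w", c.Param("id"), err)
   }

   return c.JSON(http.StatusOK, user)
})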

Go error handling can be clean if you treat it as a decoupled component instead of mixing it with the main logic.

Automated Jenkins CI setup for Go projects

I was helping a friend with some tasks on a project, among which was code quality. I set up some tools (gometalinter first, then golangci-lint) and integrated them with Travis CI.

Then GitHub started offering free private repositories and my friend made the project private. Travis and other CI systems require paid plans for private repositories, so I decided to set up a Jenkins environment. I had used Jenkins before, but never configured it myself. I thought it would be a quick click-click install process, but I found myself in front of various plugins, configurations, credentials and all sorts of requirements.

One of the first things that comes to my mind when I have to do something is whether I'll have to do it again later and whether I should automate the process. And that's how a pet project was born. My goal was to write a configuration file, install the environment, then configure jobs for the repositories I need.

I've created two job templates: one for triggering a build on branch push, one for pull requests (the required webhooks are created automatically). Based on the templates, I just wanted to create a new job, insert the GitHub repository URL and start using the job.

The job templates are aimed at Go projects and run tests and code quality tools (all required tools are installed automatically). Other configured actions are automatic backups and cleanups. You'll also find some setup scripts for basic server security and requirements.

There are things that could be improved, but now I have a click-click CI setup called Go Jenkins CI.


Protect Docker secrets files

I have a Docker container managed with Docker Compose, which defines the unless-stopped restart policy. But the container never starts after I reboot the machine, although it does restart if I just restart the Docker service. I have a similar setup on another machine with no issues at all.

I kept searching for the issue until it hit me all of a sudden: I'm using secrets read from files, and the files are in the /tmp directory, which gets cleared on reboot, so the container fails to start.

When I defined the secrets files, I just moved them out of the source code repository without thinking too much about where they'd end up. I wasn't even sure I was going to keep using them for long.

Dev testing

A user story is not done after the code has been written, and neither is the developer's part. I strongly believe in dev testing: after implementing, the developer should once again read each line of the requirements (they had already done this at least once before implementing, right?!?!) and manually test the feature (even if they've written automated tests). At least a sanity test, to make sure the QA colleagues will not get back to them after a few minutes of testing, or every few minutes with another issue.

The interaction with the QA colleagues should happen mostly during breaks. I want to see those people get bored like hell because they just need to check the implementation against the requirements and get back to chatting because everything is OK. If they come back with special situations, corner cases, weird exceptions, everyone's doing a great job.

We’re human, we forget things while developing, it’s OK. But if the number of things is too big to fit in memory and it increases with every feature, maybe we should read the requirements a few more times and test each one of them. And we should ask questions. The code is done after we understand, implement, test.

And we should always ask ourselves what happens to the other parts of the app when we make even a small change to old code. Then test. Yes, we, the developers, must test!

The mid-language crisis

A while after I wrote the Apixu libraries for Go and PHP, they asked me if I could help with an issue a client had when trying to install the Python library. I quickly tested it and sent them steps on how to install all the requirements. And I noticed the library could use some additions.

My experience with Python amounted to a few lines, and I felt it would be a good opportunity to write a few more, to understand the language a little better, to learn about its requirements and ways to set up a library, and to write some tests to validate JSON schemas. It's a clean language with a simple package manager and clear error handling. Basic integrations with Flask and Django were pretty smooth.

One thing I like about Apixu is that they have multiple libraries, and after freshening up the Python one I was inspired to do more. And there was plenty to choose from.

JavaScript is an old friend, but we had only met in the browser some years ago. It was time to face its server side of the moon and add some touches to the NodeJS library. I'm not a huge fan of callbacks, but Promises are indeed a nice way to handle responses. Express was easy to start with; I like its micro framework feel. The package manager, npm, doesn't seem too far from PHP's Composer, and we got along well.

Encrypt column in PostgreSQL

PostgreSQL has a nice encryption (and hashing) module called pgcrypto, which is easy to use. Using a provided key, you can quickly encrypt a column that contains sensitive information.
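
As a minimal sketch from Go (the users table, the bytea email column, the key handling and the connection details below are illustrative; the pgcrypto extension must be enabled first with CREATE EXTENSION pgcrypto):

package main

import (
   "database/sql"
   "fmt"
   "log"

   _ "github.com/lib/pq" // PostgreSQL driver
)

func main() {
   db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
   if err != nil {
      log.Fatal(err)
   }
   defer db.Close()

   // In practice the key comes from a secure place, not from a literal
   key := "my-secret-key"

   // Encrypt on write: the email column is of type bytea
   _, err = db.Exec(
      `INSERT INTO users (name, email) VALUES ($1, pgp_sym_encrypt($2, $3))`,
      "jane", "jane@example.com", key,
   )
   if err != nil {
      log.Fatal(err)
   }

   // Decrypt on read
   var email string
   err = db.QueryRow(
      `SELECT pgp_sym_decrypt(email, $1) FROM users WHERE name = $2`,
      key, "jane",
   ).Scan(&email)
   if err != nil {
      log.Fatal(err)
   }

   fmt.Println(email)
}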

While the setup is fast and the usage is simple, there could be some disadvantages in some contexts:

  • be careful how you send the encryption key to the database server (if the connection goes over a public network, use SSL for transport; otherwise keep it at least within a private network)
  • data is encrypted/decrypted in the database, so it travels in plain text (watch out for memory dump attacks)
  • some queries are very slow, as the decrypt operation is performed on the entire table if you want to sort or filter by encrypted columns


PostgreSQL NOTNULL vs IS NOT NULL vs NOT ISNULL

Today I found out about a quirk of PostgreSQL when checking whether a composite type column is not null. There are different ways of checking for not null columns:

  • NOTNULL
  • IS NOT NULL
  • NOT ISNULL

For primitive values any of the above works, while for composite types there is a particularity.

Given a table of users with a deleted column of a composite type action, and some users marked as deleted, I need all users that are deleted.
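
As a rough sketch of how the three forms can be compared on such a table (the schema and connection details are illustrative; the particularity is that, for a composite column, the three conditions do not necessarily match the same rows):

package main

import (
   "database/sql"
   "fmt"
   "log"

   _ "github.com/lib/pq" // PostgreSQL driver
)

func main() {
   db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
   if err != nil {
      log.Fatal(err)
   }
   defer db.Close()

   // The three ways of expressing "deleted is not null"
   conditions := []string{
      "deleted NOTNULL",
      "deleted IS NOT NULL",
      "NOT deleted ISNULL",
   }

   for _, cond := range conditions {
      var count int
      if err := db.QueryRow("SELECT COUNT(*) FROM users WHERE " + cond).Scan(&count); err != nil {
         log.Fatal(err)
      }
      fmt.Printf("%-20s -> %d rows\n", cond, count)
   }
}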

Bitbucket to GitHub import tool

GitHub now offers free private repositories. For those wanting to transfer their private repositories from Bitbucket to GitHub, I've written a very basic tool to help with this.

I've tested it only with a few Git repositories; most probably there are cases I didn't cover, so adjustments could be needed.

Check it on Bit… GitHub.

You can also import repositories manually if you wish.

Quickly upgrade Go to latest stable version

#!/usr/bin/env bash

installpath="/usr/local"

if [[ `whoami` != "root" ]]; then
    echo run as root
    exit 1
fi

if [[ `which jq` == "" ]]; then
    apt update && apt install -y jq
fi

if [[ `which curl` == "" ]]; then
    apt update && apt install -y curl
fi

# Fetch the list of Go releases; the first entry is the newest one
check=`curl -s "https://golang.org/dl/?mode=json"`

# Only continue if the newest release is marked as stable
stable=`echo $check | jq -r '.[0].stable'`
if [[ "$stable" != "true" ]]; then
    exit 0
fi

# Nothing to do if the installed version is already the latest one
newversion=`echo $check | jq -r '.[0].version'`
currentversion=`$installpath/go/bin/go version 2> /dev/null`

if [[ "$currentversion" == *"$newversion"* ]]; then
    exit 0
fi

# Download the new release, replace the old installation and clean up
cd /tmp
file=$newversion.linux-amd64.tar.gz
curl -s https://dl.google.com/go/$file > $file
rm -rf $installpath/go/
tar -C $installpath -xzf $file
rm $file

Trim string starting suffix

Today I wanted to remove from a string the part starting with a specified substring.

For a line of code with a comment starting with #, I wanted to remove everything from the first # onward, so the line “line with #comm#ent” should become “line with ”.

The test for this is:

func TestRemoveFromStringAfter(t *testing.T) {
   tests := []struct {
      input,
      after,
      expected string
   }{
      {
         input:    "line with #comm#ent",
         after:    "#",
         expected: "line with ",
      },
      {
         input:    "line to clean",
         after:    "abc",
         expected: "line to clean",
      },
      {
         input:    "line to clean",
         after:    "l",
         expected: "",
      },
      {
         input:    "",
         after:    "",
         expected: "",
      },
      {
         input:    " ",
         after:    "",
         expected: " ",
      },
   }

   for i, test := range tests {
      result := RemoveFromStringAfter(test.input, test.after)
      if result != test.expected {
         t.Fatalf("Failed at test: %d", i)
      }
   }
}

I tried to use TrimSuffix and TrimFunc from the strings package, but they weren't getting me where I wanted. Then, all of a sudden, it struck me: a string can be treated as a slice, and a subslice is what I need, one that ends right before the position of the given suffix.

So I take the position of the suffix and extract a substring of the input string:

func RemoveFromStringAfter(input, after string) string {
   if after == "" {
      return input
   }

   if index := strings.Index(input, after); index > -1 {
      input = input[:index]
   }

   return input
}