GoogleTest parameterized tests with JSON input

This is a full setup of parameterized tests (table-driven tests) with the GoogleTest framework, including reading JSON test input from a file. If you’d like to skip the “story” and get straight to the code, you can download the CMake example project.

I assume you know what unit testing is and that tests are as important as the production code (or more important sometimes).

There are several ways to organize tests and their data (provided input and expected output). I’m testing the very simple sum function of a calculator because my focus here is on the tests, not on the tested code. Everything can be applied in more advanced contexts.

// calculator.hpp

#ifndef CALCULATOR
#define CALCULATOR

namespace calculator {

inline int sum(int a, int b) { return a + b; }

}  // namespace calculator

#endif  // CALCULATOR

Simple tests

The simplest test would call the function with some arguments and verify the result. I can create multiple similar tests for specific cases (e.g. behaviour near overflow; a sketch of such an edge case follows the basic test below). It’s the form I try to use as often as possible. I like stupidly simple tests that are extremely easy to understand.

#include <gtest/gtest.h>

#include "calculator.hpp"

TEST(CalculatorSimpleTest, Sum)
{
    const auto actual = calculator::sum(1, 2);
    const auto expected = 3;
    EXPECT_EQ(actual, expected);
}
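
For example, a focused edge-case test near the limits of int might look like the sketch below. The test name and the chosen cases are illustrative (not part of the original project), and the inputs deliberately stay within the representable range, because actually overflowing a signed int is undefined behaviour.

#include <gtest/gtest.h>

#include <limits>

#include "calculator.hpp"

TEST(CalculatorSimpleTest, SumNearIntLimits)
{
    // Adding zero keeps the result representable while exercising the extremes.
    EXPECT_EQ(calculator::sum(std::numeric_limits<int>::max(), 0),
              std::numeric_limits<int>::max());
    EXPECT_EQ(calculator::sum(std::numeric_limits<int>::min(), 0),
              std::numeric_limits<int>::min());
}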

Table-driven tests

This is the method I typically use when multiple simple tests (like the one above) would repeat the same structure. I can easily add test cases in a configurable way; rather than thinking about the test code, I focus on the test scenarios.

I always encourage scenario-based tests: what is tested matters more than how. I construct the tests starting from the scenarios I want to cover (basic ones, edge cases), not from the lines of code to be covered. Coverage, although very important, should be a result, not a goal. The most important thing is for the source code to work as promised to the user through the public interface.

If there are special scenarios that I want the tests to state clearly, I add simple tests focused on those particular cases, alongside the table-driven ones that cover the base cases.

The table is a container where each element is a test case.

#include <gtest/gtest.h>

#include <vector>

#include "calculator.hpp"

TEST(CalculatorTableDrivenTest, Sum)
{
    struct Test {
        int a;
        int b;
        int sum;
    };

    const std::vector<Test> tests{{
        {1, 2, 3},
        {4, 5, 9},
    }};

    for (const auto& test : tests) {
        const auto actual = calculator::sum(test.a, test.b);
        const auto expected = test.sum;
        EXPECT_EQ(actual, expected);
    }
}

In case of failure, I can provide details to identify faster which case failed. This is most useful when there are many test cases: I can print the case number or add a description to each case.
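
One simple way to attach such details to the loop above is GoogleTest’s SCOPED_TRACE, which adds its message to every failure reported inside its scope. The variant below (my addition, not part of the original table) prints the case number:

#include <gtest/gtest.h>

#include <cstddef>
#include <string>
#include <vector>

#include "calculator.hpp"

TEST(CalculatorTableDrivenTest, SumWithTrace)
{
    struct Test {
        int a;
        int b;
        int sum;
    };

    const std::vector<Test> tests{{
        {1, 2, 3},
        {4, 5, 9},
    }};

    for (std::size_t i = 0; i < tests.size(); ++i) {
        // Any failure below is reported together with the case number.
        SCOPED_TRACE("test case #" + std::to_string(i));
        EXPECT_EQ(calculator::sum(tests[i].a, tests[i].b), tests[i].sum);
    }
}

One possible way to combine such a table with JSON input from a file is GoogleTest’s value-parameterized API (TEST_P / INSTANTIATE_TEST_SUITE_P). The sketch below assumes the nlohmann/json library and a tests.json file readable from the working directory; these are my assumptions, and the downloadable CMake project may organise this differently.

#include <gtest/gtest.h>
#include <nlohmann/json.hpp>

#include <fstream>
#include <string>
#include <vector>

#include "calculator.hpp"

struct SumCase {
    int a;
    int b;
    int sum;
};

// Reads the test table from a JSON array such as [{"a": 1, "b": 2, "sum": 3}].
std::vector<SumCase> LoadSumCases(const std::string& path)
{
    std::ifstream file{path};
    const auto json = nlohmann::json::parse(file);

    std::vector<SumCase> cases;
    for (const auto& item : json) {
        cases.push_back({item.at("a").get<int>(), item.at("b").get<int>(),
                         item.at("sum").get<int>()});
    }
    return cases;
}

class CalculatorJsonTest : public testing::TestWithParam<SumCase> {};

TEST_P(CalculatorJsonTest, Sum)
{
    const auto& test = GetParam();
    EXPECT_EQ(calculator::sum(test.a, test.b), test.sum);
}

// The case generator runs before main(), so tests.json must exist when the
// test binary starts.
INSTANTIATE_TEST_SUITE_P(FromJson, CalculatorJsonTest,
                         testing::ValuesIn(LoadSumCases("tests.json")));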

Dev testing

A user story is not done after the code has been written. And the developer’s part is not done after the code has been written either. I strongly believe in dev testing: after implementing, the developer will once again read each line of the requirements (they had already done this at least once before implementing, right?!) and manually test the feature (even if they’ve written automated tests). At least a sanity test, to make sure the QA colleagues will not come back after a few minutes of testing, or every few minutes, with another issue.

The interaction with the QA colleagues should happen mostly during breaks. I want to see those people get bored like hell because they just need to check the implementation against the requirements and then get back to chatting, because everything is OK. If they come back with special situations, corner cases, and weird exceptions, everyone’s doing a great job.

We’re human, we forget things while developing, and that’s OK. But if the number of things is too big to fit in memory and it grows with every feature, maybe we should read the requirements a few more times and test each one of them. And we should ask questions. The code is done only after we understand, implement, and test.

And we should always ask ourselves what happens to the other parts of the app when we make even a small change to the old code. Then test. Yes, we, the developers, must test!

Test multiple PHP exceptions

There are cases when you have a base exception class which is extended by other exceptions. For a package called User, you could have a UserException which is extended by UserNotFoundException, UserPasswordException and so on.

The base exception of your package allows you to catch any thrown exception from inside the package.

So there’s a base exception, a specific one and the service using them:

class BaseException extends Exception
{
}

class NoItemsException extends BaseException
{
}

class Service
{
    /**
     * @param array $items
     * @throws NoItemsException
     */
    public function save(array $items)
    {
        if (count($items) === 0) {
            throw new NoItemsException('Nothing to save');
        }
    }
}

And you want to catch all package exceptions:

$service = new Service();
try {
    $service->save([]);
} catch (BaseException $e) {
    die($e->getMessage());
}

And a test for the case:

class ServiceTest extends TestCase
{
    /**
     * @expectedException NoItemsException
     */
    public function testSaveWithNoItems()
    {
        $service = new Service();
        $service->save([]);
    }
}

How do you make sure NoItemsException extends BaseException?
With the above test, if you change NoItemsException to extend Exception instead of BaseException, the test will still pass, but the behaviour won’t be the expected one, because you won’t be catching the exception anymore:

Fatal error: Uncaught NoItemsException: Nothing to save... in /src/Service.php on line 20

NoItemsException: Nothing to save... in /src/Service.php on line 20

Your test must explicitly test both exceptions:

class ServiceTest extends TestCase
{
    public function testSaveWithNoItems()
    {
        $service = new Service();

        try {
            $service->save([]);
            $this->fail('No exception thrown');
        } catch (\Exception $e) {
            $this->assertInstanceOf(BaseException::class, $e);
            $this->assertInstanceOf(NoItemsException::class, $e);
        }
    }
}

PHP unit testing with real coverage

If you really need to cover all your code with tests, watch out for your short if statements (ternary expressions).

Given the following class:

<?php

class Person
{
    /**
     * @var string
     */
    private $gender;

    /**
     * @param string $gender
     */
    public function setGender(string $gender)
    {
        $this->gender = $gender;
    }

    /**
     * @return string
     */
    public function getTitle() : string
    {
        return $this->gender === 'f' ? 'Mrs.' : 'Mr.';
    }
}

And a PHPUnit test:

<?php

use PHPUnit\Framework\TestCase;

class PersonTest extends TestCase
{
    /**
     * @dataProvider gendersAndTitle
     * @param $gender
     * @param $expectedTitle
     */
    public function testTitle($gender, $expectedTitle)
    {
        $person = new Person();
        $person->setGender($gender);

        $title = $person->getTitle();
        $this->assertEquals($expectedTitle, $title);
    }

    public function gendersAndTitle() : array
    {
        return [
            ['f', 'Mrs.'],
        ];
    }
}

If you run the test with coverage, you get 100% coverage. But the data provider has data only for the “f”/“Mrs.” case, so the else branch of the short if is never actually tested, even though the line was executed while running the test.

Update the getTitle method of the Person class to use a normal if statement:

public function getTitle() : string
{
    if ($this->gender === 'f') {
        return 'Mrs.';
    }

    return 'Mr.';
}

Execute the test again and you get 80% coverage.

Here’s a Dockerfile to quickly test it yourself:

FROM php:7.2-cli-alpine3.8

RUN apk add --update --no-cache make alpine-sdk autoconf && \
    pecl install xdebug && \
    docker-php-ext-enable xdebug && \
    apk del alpine-sdk autoconf && \
    wget -O /usr/local/bin/phpunit https://phar.phpunit.de/phpunit-6.phar && \
    chmod +x /usr/local/bin/phpunit

WORKDIR /src

Save the Person class to Person.php file and the test to PersonTest.php.

docker build -t phpunit-coverage .
docker run --rm -ti -v $PWD:/src phpunit-coverage sh

phpunit --bootstrap Person.php --coverage-html coverage --whitelist Person.php .

See the coverage directory (index.html) created after running the test.

Clean up when you’re done:

docker rmi phpunit-coverage

Unit testing and interfaces

  • Good code needs tests
  • Tests require good design
  • Good design implies decoupling
  • Interfaces help decouple
  • Decoupling lets you write tests
  • Tests help having good code

Good code and unit testing go hand in hand, and sometimes interfaces are the bridge between them. When you have an interface, you can easily “hide” any implementation behind it, even a mock for a unit test.

An important subject of unit testing is managing external dependencies. The tests should directly cover the unit while using fake replacements (mocks) for the dependencies.
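
As a language-agnostic illustration of the idea (sketched here in C++ with hypothetical names, since the post’s own example below is in Go), a dependency hidden behind an interface can be replaced by a fake in a test:

#include <gtest/gtest.h>

#include <memory>
#include <string>
#include <utility>

// Hypothetical external dependency hidden behind an interface.
class MailServer {
public:
    virtual ~MailServer() = default;
    virtual bool Rcpt(const std::string& email) = 0;
};

// The unit under test depends on the interface, not on a concrete SMTP client.
class Validator {
public:
    explicit Validator(std::shared_ptr<MailServer> server) : server_(std::move(server)) {}
    bool ValidateHost(const std::string& email) { return server_->Rcpt(email); }

private:
    std::shared_ptr<MailServer> server_;
};

// Fake replacement used only by the test; no network involved.
class FakeMailServer : public MailServer {
public:
    bool Rcpt(const std::string&) override { return true; }
};

TEST(ValidatorTest, ValidateHostAcceptsKnownRecipient)
{
    Validator validator{std::make_shared<FakeMailServer>()};
    EXPECT_TRUE(validator.ValidateHost("someone@example.com"));
}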

I was given the following code and asked to write tests for it:

package mail

import (
   "fmt"
   "net"
   "net/smtp"
   "strings"
)

func ValidateHost(email string) (err error) {
   mx, err := net.LookupMX(host(email))
   if err != nil {
      return err
   }

   client, err := smtp.Dial(fmt.Sprintf("%s:%d", mx[0].Host, 25))
   if err != nil {
      return err
   }

   defer func() {
      if er := client.Close(); er != nil {
         err = er
      }
   }()

   if err = client.Hello("checkmail.me"); err != nil {
      return err
   }
   if err = client.Mail("testing-email-host@gmail.com"); err != nil {
      return err
   }
   return client.Rcpt(email)
}

func host(email string) (host string) {
   i := strings.LastIndexByte(email, '@')
   return email[i+1:]
}

The first steps were to identify the test cases and the dependencies.