GoogleTest parameterized tests with JSON input

This is a full setup of parameterized tests (table-driven tests) with the GoogleTest framework. It includes using JSON test input from a file. If you’d like to skip the “story” and get to the code, you can download the CMake example project.

I assume you know what unit testing is and that tests are as important as the production code (sometimes even more important).

There are several ways to organize tests and their data (provided input and expected output). I’m testing the very simple sum function of a calculator because my focus here is on the tests, not on the tested code. Everything can be applied in more advanced contexts.

// calculator.hpp

#ifndef CALCULATOR_HPP
#define CALCULATOR_HPP

namespace calculator {

inline int sum(int a, int b) { return a + b; }

}  // namespace calculator

#endif  // CALCULATOR_HPP

Simple tests

The simplest test calls the function with some arguments and verifies the result. I can create multiple similar tests for specific cases (e.g. overflow). It's the form I try to use as often as possible. I like stupidly simple tests that are extremely easy to understand.

#include <gtest/gtest.h>

#include "calculator.hpp"

TEST(CalculatorSimpleTest, Sum)
{
    const auto actual = calculator::sum(1, 2);
    const auto expected = 3;
    EXPECT_EQ(actual, expected);
}

Table-driven tests

It’s a method I typically use when multiple simple tests (as above) would repeat. I can easily add multiple test cases in a configurable way; rather than thinking about the test code, I’m focusing on the test scenarios.

I always encourage scenario-based tests. What is more important than how. I construct the tests starting with the scenarios I want to cover (basic ones, edge cases), not from the lines of code to be covered. Although very important, coverage should be a result, not a goal. The most important thing is for the source code to work as I promised to the user through the public interface.

If there are special scenarios that I want to be clearly stated by the tests, I add simple tests focused on particular cases besides the table-driven ones that cover the base cases.
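A sketch of such a focused case, pinned to the int boundaries (the scenario, the function name, and the plain-assert form are my choices so the snippet stands alone; in the real suite it would be another TEST):

```cpp
#include <cassert>
#include <climits>

// Local copy of the function under test, so the sketch compiles on its own.
namespace calculator {
inline int sum(int a, int b) { return a + b; }
}  // namespace calculator

// Adding zero at the boundaries must leave the value intact. (Going past
// INT_MAX would be signed-overflow UB, so the test stays inside the range.)
inline void TestSumAtIntBoundaries()
{
    assert(calculator::sum(INT_MAX, 0) == INT_MAX);
    assert(calculator::sum(INT_MIN, 0) == INT_MIN);
    assert(calculator::sum(-2, 2) == 0);
}
```

Such a test documents the special scenario by name, next to the table-driven ones that cover the base cases.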

The table is a container where each element is a test case.

#include <gtest/gtest.h>

#include <vector>

#include "calculator.hpp"

TEST(CalculatorTableDrivenTest, Sum)
{
    struct Test {
        int a;
        int b;
        int sum;
    };

    const std::vector<Test> tests{{
        {1, 2, 3},
        {4, 5, 9},
    }};

    for (const auto& test : tests) {
        const auto actual = calculator::sum(test.a, test.b);
        const auto expected = test.sum;
        EXPECT_EQ(actual, expected);
    }
}

In case of failure, I can provide details to identify the failing case faster. This is more useful when there are many test cases. I can print the case number or I can add a description.

for (const auto& test : tests) {
    const auto actual = calculator::sum(test.a, test.b);
    const auto expected = test.sum;

    static std::size_t test_case = 0;
    EXPECT_EQ(actual, expected) << "test case: " << ++test_case;
}

// or

for (std::size_t i = 0; i < tests.size(); ++i) {
    const auto& test = tests[i];
    const auto actual = calculator::sum(test.a, test.b);
    const auto expected = test.sum;
    EXPECT_EQ(actual, expected) << "test case: " << i;
}

// or

struct Test {
    std::string description;
    int a;
    int b;
    int sum;
};

const std::vector<Test> tests{{
    {"basic", 1, 2, 3},
    {"larger numbers", 4, 5, 9},
}};

for (const auto& test : tests) {
    const auto actual = calculator::sum(test.a, test.b);
    const auto expected = test.sum;
    EXPECT_EQ(actual, expected) << test.description;
}

If there are too many elements in the table and they clutter the test, I usually extract the table into a header file to isolate it.
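A sketch of such an extracted table (the file name and the accessor are my own choices, not from the project): the struct and the data live together, and the test just includes the header and loops.

```cpp
// calculator_test_data.hpp (hypothetical file name)
#include <string>
#include <vector>

struct Test {
    std::string description;
    int a;
    int b;
    int sum;
};

// Returning a reference to a function-local static keeps the data in one
// place and avoids any static initialization order issues across files.
inline const std::vector<Test>& GetTests()
{
    static const std::vector<Test> tests{{
        {"basic", 1, 2, 3},
        {"negatives", -1, -2, -3},
    }};
    return tests;
}
```

The test file then shrinks to the loop over `GetTests()`.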

Parameterized tests

Parameterized tests are actually a declarative form of table-driven tests. I still have a “table” of cases to test my code with. And these cases can come from multiple sources.

In GoogleTest, the base for these tests is the TestWithParam class template. The template parameter is my Test struct. In the actual test, I can use the GetParam() method to get an instance of the Test struct.

#include <gtest/gtest.h> 

#include "calculator.hpp" 

namespace {

struct Test {
    int a;
    int b;
    int sum;
};

class CalculatorParameterizedTest : public testing::TestWithParam<Test> {
};

TEST_P(CalculatorParameterizedTest, Sum)
{
    const auto actual = calculator::sum(GetParam().a, GetParam().b);
    const auto expected = GetParam().sum;
    EXPECT_EQ(actual, expected);
}

}  // namespace

Now I have to tell my CalculatorParameterizedTest class where to get the test parameter from. I need to instantiate my test suite and pass a list of test parameters. And I have multiple ways to do this.

List of parameters

The most basic way is to pass a list of Test objects when I instantiate the test suite. The suite needs a prefix, the test class name, and the list of parameters.

INSTANTIATE_TEST_SUITE_P(InlineValues, CalculatorParameterizedTest, testing::Values(
    Test{1, 2, 3},
    Test{4, 5, 9}
));

Container of parameters

If the list is too big, I can extract it into a container (that can come from a header file).

namespace {

std::vector<Test> GetTests()
{
    return {
        {1, 2, 3},
        {4, 5, 9},
    };
}

}  // namespace

INSTANTIATE_TEST_SUITE_P(Container, CalculatorParameterizedTest, testing::ValuesIn(GetTests()));

Parameters from JSON file

Another option is to extract the test data into a text file. As such, it is totally decoupled from the C++ code. It can be any format I wish.

If I choose this method, anyone can generate the data without C++ knowledge (especially useful when a different team generates test cases to cover all the scenarios derived from the requirements).

When the data changes, I don’t need to compile the tests again because they will read the file on each execution.

As an example, I chose the JSON format and the nlohmann/json JSON parser library.

My file looks like this (an array of objects that match the Test struct; the second case is deliberately wrong, so I can show a failing test later):

[
  {
    "a": 1,
    "b": 2,
    "sum": 3
  },
  {
    "a": 4,
    "b": 25,
    "sum": 9
  }
]

The full source code will be more helpful; for now I just point out that I need a function to parse the JSON content into objects of Test type and a function to read the file.

#include <fstream>
#include <string>
#include <vector>

#include <nlohmann/json.hpp>

namespace {

inline void from_json(const nlohmann::json& j, Test& test)
{
    test.a = j.at("a");
    test.b = j.at("b");
    test.sum = j.at("sum");
}

std::vector<Test> GetTests(const std::string& path)
{
    std::ifstream input(path);
    nlohmann::json j;
    input >> j;
    return j;
}

}  // namespace

INSTANTIATE_TEST_SUITE_P(Json, CalculatorParameterizedTest, testing::ValuesIn(GetTests("input.json")));
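One guard I would add here (my addition, not part of the GetTests above): if the path is wrong, the stream is empty and the parser fails with a cryptic error, so it is worth failing loudly with the path in the message.

```cpp
#include <fstream>
#include <stdexcept>
#include <string>

// Hypothetical helper: open the input file or throw an exception that
// names the missing path, instead of handing an empty stream to the parser.
std::ifstream OpenOrThrow(const std::string& path)
{
    std::ifstream input(path);
    if (!input.is_open()) {
        throw std::runtime_error("cannot open test input file: " + path);
    }
    return input;
}
```

GetTests would then start with `auto input = OpenOrThrow(path);` instead of constructing the stream directly.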

Input file location

The tests run in the build directory. But the input file is not there. It’s usually somewhere in the tests directory. And when the test using the file is executed, it will not find the input file.

When I add a test in CMake, I can specify the working directory. I can set the current source directory, where the test and the input files live: add_test(NAME … WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}).

add_executable(calculator calculator_test.cpp)
target_link_libraries(calculator gtest gtest_main)
add_test(NAME calculator COMMAND calculator WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR})
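An alternative I sometimes use (the helper and the TEST_DATA_DIR variable are my own assumptions, not part of the setup above) is to resolve the path at run time, so CI or a developer can point the tests at the data without touching CMake:

```cpp
#include <cstdlib>
#include <string>

// Hypothetical helper: let an environment variable override where the input
// file is looked up, falling back to the working directory set by add_test.
std::string DataPath(const std::string& file)
{
    const char* dir = std::getenv("TEST_DATA_DIR");
    if (dir == nullptr) {
        return file;
    }
    return std::string(dir) + "/" + file;
}
```

Calling GetTests(DataPath("input.json")) keeps the CMake default while allowing an override.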

User-friendly error messages

By default, when a test fails, the framework prints the current parameter. But for a parameter like mine, a struct, it does not know how to print it. So you get the raw byte content, which is not very useful.

[  FAILED  ] Json/CalculatorParameterizedTest.Sum/1, where GetParam() = 12-byte object <04-00 00-00 19-00 00-00 09-00 00-00> (0 ms)

To customize the message, I teach GoogleTest how to print my type: I define a function with the signature void PrintTo(const Test&, std::ostream*) in the same namespace as the Test struct. GoogleTest calls PrintTo unqualified, so argument-dependent lookup finds and uses it.

void PrintTo(const Test& test, std::ostream* os)
{
    *os << "> " << test.a << " + " << test.b << " = " << test.sum;
}

Now the message is:

[  FAILED  ] Json/CalculatorParameterizedTest.Sum/1, where GetParam() = > 4 + 25 = 9 (0 ms)

Comparison: table-driven vs. parameterized tests

I can’t say which of the table-driven or parameterized approaches I prefer. I would even say they are kind of the same. For both, the input can be inline, in a header file, or in a text file. A difference is that the parameterized version is declarative, while in the table-driven one I “orchestrate the scenario runner” myself.

The choice that I see is between declaring the test

INSTANTIATE_TEST_SUITE_P(Json, CalculatorParameterizedTest, ...);

and writing a loop

for (const auto& test : tests)

Any method will do as long as the tests are correct and easy to maintain.
