Compile-time validation

Whenever possible, I try to validate my program early by moving some verifications from runtime to compile-time. That way, I find issues when I compile the program instead of while running it.

With good tests, I will still find issues earlier than the user. But humans err, and unpleasant cases the tests do not cover can reach the user. And those cases might not always be easy to spot.

Context

Take the following situation:

    • A third-party library provides an std::array with some data (integers).
    • I convert this data into another std::array that my application owns (instances of a struct).

It’s as simple as it sounds.

#include <algorithm>
#include <array>
#include <cassert>
#include <numeric>

namespace lib {
using A = std::array<int, 9>;

inline A fetch()
{
    A a;
    std::iota(a.begin(), a.end(), 1);
    return a;
}
}  // namespace lib

namespace app {
struct S {
    int i = 0;
    S() = default;
    explicit S(int v) noexcept : i{v} {}
};
inline bool operator==(int i, S s) noexcept { return i == s.i; }

using B = std::array<S, 9>;

inline B convert(const lib::A& a)
{
    B b;
    std::transform(a.cbegin(), a.cend(), b.begin(), [](int i) noexcept { return S{i}; });

    return b;
}
}  // namespace app

int main()
{
    const auto a = lib::fetch();
    const auto b = app::convert(a);

    assert(std::equal(a.cbegin(), a.cend(), b.cbegin()));
}


The issue that might not be too easy to spot, especially if the code is more complicated, is that the third-party library could change the array’s size.

My code would still compile. And of course, given that I have a test (represented here by the assert), it would fail. But I would not know why it failed until I debugged the code. And if I don’t have a test, I’m not covered.

Specifically for my code, if the third-party library’s array grew larger than my app’s array, I would get undefined behavior because I would copy data beyond my array’s bounds.

Moreover, a change like that in the third-party library might be something really important for me and it should scream in my face.
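
To make that failure mode concrete, here is a sketch of the same code after a hypothetical library change (the size 12 is made up, and I simplified the element types to int):

#include <algorithm>
#include <array>

// Hypothetical: the library grew its array from 9 to 12 elements.
using A = std::array<int, 12>;
using B = std::array<int, 9>;  // my app's array is still 9

inline B convert(const A& a)
{
    B b;
    // Still compiles: the algorithm checks iterator types, not sizes.
    // Running this writes 12 elements into a 9-element array:
    // undefined behavior beyond b's bounds.
    std::copy(a.cbegin(), a.cend(), b.begin());
    return b;
}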

The runtime approach

So if that change is so important, I can verify this case and throw an exception if needed.

inline B convert(const lib::A& a)
{
    B b;
    
    if (a.size() != b.size()) {
        // needs #include <stdexcept> for std::length_error
        throw std::length_error{"size mismatch"};
    }

    std::transform(a.cbegin(), a.cend(), b.begin(), [](int i) noexcept { return S{i}; });

    return b;
}

The issues with this approach are:

    • The error does not actually exist yet, it only might exist, and that makes it difficult to test.
    • Exceptions might not be allowed or desired.
    • A new runtime responsibility is introduced.

Validate at compile-time

I can perform this simple validation during compilation, thus removing the issues I mentioned for the runtime approach.

The naive solution

At first, I thought about using a static_assert instead of the exception.

inline B convert(const lib::A& a)
{
    B b;
    
    static_assert(a.size() == b.size(), "size mismatch");
    
    std::transform(a.cbegin(), a.cend(), b.begin(), [](int i) noexcept { return S{i}; });

    return b;
}

I tried this with several compilers: it works with GCC 9, but not with newer versions or with Clang/MSVC. I’m not sure why it works with GCC 9; it might be some kind of bug. The convert function has a reference parameter, and the lifetime of the object it refers to is not known at compile-time, while a constexpr context requires it to be.

If I can pass the input by copy – inline B convert(const lib::A a) – then I’m fine: the compiler can figure everything out at compile-time. This is not always an option because the input might be too big to copy. Otherwise, it can be just what I need.
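
A sketch of that by-value variant; to my understanding, a.size() and b.size() are usable in a constant expression here because std::array::size() just returns the size template parameter and never reads the object itself:

inline B convert(const lib::A a)
{
    B b;

    // OK on conforming compilers: both operands are constant expressions.
    static_assert(a.size() == b.size(), "size mismatch");

    std::transform(a.cbegin(), a.cend(), b.begin(), [](int i) noexcept { return S{i}; });

    return b;
}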

Template specialization

Another approach is to inspect, at compile-time, the types of the arrays, because the types include the declared sizes, and the sizes are exactly the properties I’m interested in. I can implement a struct that deduces the size of an array from its type.

First, I declare a struct template:

template <typename>
struct array_size;

Then I specialize it for the std::array type. When the template is instantiated with an std::array, it deduces the size (S), which I store in a struct member (size).

template <typename T, std::size_t S>
struct array_size<std::array<T, S>> {
    static constexpr std::size_t size = S;
};

size must be:

    • static, because a constexpr data member must be static
    • constexpr, to be usable at compile-time
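
A quick sanity check of the trait (the double and 4 are arbitrary, just for illustration):

static_assert(array_size<std::array<double, 4>>::size == 4, "array_size deduces the declared size");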

And then I can get the size from the type:

inline B convert(const lib::A& a)
{
    B b;

    static_assert(array_size<lib::A>::size == array_size<B>::size, "size mismatch");

    std::transform(a.cbegin(), a.cend(), b.begin(), [](int i) noexcept { return S{i}; });

    return b;
}
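
As an aside, for std::array the standard library already provides an equivalent trait, std::tuple_size (its std::array specializations come with <array>), so the same check could be written without the custom struct:

static_assert(std::tuple_size<lib::A>::value == std::tuple_size<B>::value, "size mismatch");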

I usually prefer compile-time

Unless I have specific and validated reasons not to, I try to resolve as much as I can during compilation. This gives me a lighter runtime, not only in performance but also in maintenance. Not all applications need the best performance from the start, but they all need code that is easy to live with.

The compile-time solution for getting the size of the array needs more code, and more code needs more maintenance. But I’m happy because I removed code from runtime.
