If you’re wondering why I didn’t write up a trip report for the November 2019 Belfast meeting – it’s because I gave that report live on CppCast instead. So check it out if you want to hear what happened at that meeting!

More recently, the C++ committee met last month in Prague, Czech Republic, for the final meeting of the C++20 release cycle. Which means: C++20 is done!

As usual, if you want to hear a detailed report of what actually happened, I can refer you to the excellent trip reports on reddit and on Herb Sutter’s blog.

I personally believe that C++20 is the most important update of the standard in this language’s history. It will change the way we write code, the way we think about classes and functions, and the way we organise and compile our programs in profound ways.

But there are numerous other talks and articles on C++20 and its great features.

So in this report – even more than I usually do – I’d like to focus instead on the things that I was working on myself during the Prague meeting (even if they are a bit niche compared to the big features). Just in case you’re curious what I’m actually doing on the committee these days!

Fixing CTAD for C++20

Let’s start with a topic not actually related to audio.

Prague was my 10th committee meeting in a row. But it was the first (and perhaps not the last?) meeting where I spent most of my time in CWG (the Core Working Group, the room responsible for core language feature wording).

The reason is that I was involved in a C++20 proposal to improve CTAD (class template argument deduction), a feature introduced in C++17. Specifically, my contribution was to figure out how to make CTAD work for aggregates. Consider:

template <typename T, typename U>
struct pair {
    T first;
    U second;
};

pair p = {1.0, true};

Here, you want CTAD to deduce pair<double, bool>, but it turns out that in C++17 this only works if you add an explicit deduction guide:

template <typename T, typename U> pair(T, U) -> pair<T, U>;

which really is just boilerplate that doesn’t contain any useful info. In C++20, the compiler will finally generate this for you. Which means that this more interesting case will also now work without an explicit deduction guide (and this one we actually use in our production code, it’s super useful!):

template <typename... Fs>
struct overload_set : Fs... {
    using Fs::operator()...;
};

auto print = overload_set {
    [](int i) { std::cout << i << '\n'; },
    [](bool b) { std::cout << std::boolalpha << b << '\n'; }
};

void test() {
    print(true);  // prints 'true'!
}

With the help of CWG, I went through several iterations of wording for this feature back in Belfast, and finally produced this version, which got voted into C++20 there.

Unfortunately, since then, we discovered that this specification is completely broken. It breaks std::array in many unfortunate ways (because it has an explicit deduction guide but is also an aggregate). It breaks copy construction for aggregates. It breaks lots of perfectly valid C++17 code. And it’s also completely unspecified how all this should behave if one of the aggregate elements is a pack expansion. This, notably, is exactly the case for the overload_set example above, one of the main motivations for this paper.
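std::array is the canonical example of that problematic combination, since it is an aggregate that also ships with an explicit deduction guide:

```cpp
#include <array>
#include <type_traits>

// std::array is an aggregate but also has an explicit deduction guide,
// which is exactly the combination the original wording mishandled.
std::array a{1, 2, 3};  // deduces std::array<int, 3> via the explicit guide
static_assert(std::is_same_v<decltype(a), std::array<int, 3>>);
```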

So my mission in Prague was clear: I had to sit down and fix my own mess. I spent two days in Prague doing nothing but that. Here is the result, which is now in C++20.

It’s probably the hardest and most mind-bending piece of wording I ever had to produce, but I think I got it right… mostly. There are still two or three exotic use cases which don’t quite behave as expected, but I don’t think they will cause issues in the wild. We will fix them as Core issues (the committee term for known language wording bugs).

Array-like object representations

Moving on to things targeting C++23 and beyond, and also increasingly more audio-related.

In Prague, I presented my paper P1912R1 on array-like object representations in EWG-I (the “EWG Incubator”). We had a positive discussion and I received good feedback. In the end, they approved the idea and forwarded the paper to EWG (the Evolution Working Group, the room that decides on the design of new C++ language features). So I expect we will have a proper discussion about this feature in EWG at one of the next committee meetings. In the meantime, I will rework the motivation, improve the structure of the paper, and add more examples.

The basic idea takes off from a very useful property of std::complex.

Because the implementation knows that the value representation of a std::complex<float> is the same as that of a float[2] (that is, for an object of either type containing the same two float values, the bytes look the same in memory), you can reinterpret_cast from the former to the latter, even though normally reinterpret_casting between two unrelated types like that would be undefined behaviour.
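Here is a small sketch of what the standard guarantees for std::complex:

```cpp
#include <complex>

// The standard guarantees that for a std::complex<T> z,
// reinterpret_cast<T(&)[2]>(z)[0] is z.real() and [1] is z.imag().
float real_part(std::complex<float>& z) {
    return reinterpret_cast<float(&)[2]>(z)[0];
}

// The guarantee extends to arrays: a std::complex<float>[n] can be viewed
// as a float[2*n] of interleaved real/imaginary values, which is exactly
// the layout many C libraries expect.
float* as_interleaved(std::complex<float>* data) {
    return reinterpret_cast<float*>(data);
}
```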

This is very useful when dealing with low-level DSP code. Imagine passing an array of complex numbers to a GSL function which expects gsl_complex. I was relying on this functionality a lot in uni when I was computing FFTs in C++ for my astrophysical simulations.

Weirdly, std::complex is the only type for which C++ allows us to do this. It works, because the C++ standard says it does – for this one standard library class specifically.

But what if you have your own complex number type which represents the value in polar form instead? Or quaternions? Or any other class which is essentially just an array of numbers when you look at the representation? Like a SIMD pack? In audio processing, this comes up a lot in the form of __m128 and friends. We want to be able to pun such types to plain arrays as well, but without breaking the C++ type system.

It turns out that’s what everyone is already doing in practice. Just search your DSP library for usages of reinterpret_cast! Another great example is Qt’s QColor, a class which is probably in many C++ programs out there. It’s doing the same kind of type punning between structs and arrays, except it’s doing that with unions and not reinterpret_cast.
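For illustration, here is a simplified sketch of that union pattern (hypothetical code, not Qt's actual QColor internals) – note that reading from a union member other than the one last written is, strictly speaking, undefined behaviour in standard C++, even though compilers widely support it:

```cpp
struct color_channels { float r, g, b, a; };

// A struct and an array sharing storage via a union: a common way to get
// both named access (c.channels.r) and indexed access (c.array[0]).
struct color {
    union {
        color_channels channels;
        float array[4];
    };
};

float first_channel(const color& c) {
    // Type punning through the inactive union member: works on all major
    // compilers, but is technically UB per the C++ standard.
    return c.array[0];
}
```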

Which doesn’t change the fact that all of the above is undefined behaviour according to the C++ standard. Audio developers have asked me to finally fix that in the language, and that’s what I’m trying to accomplish now. Stay tuned – I’ll write more about this proposal once it hits EWG and things get serious.

Portable assumptions

This is another one for DSP engineers and anyone else who needs to squeeze the last ounce of performance out of their code. Let’s assume you are implementing an audio limiter:

void limiter(float* data, size_t size, float max) {
    __assume(size % 8 == 0);
    for (size_t i = 0; i < size; ++i)
        data[i] = std::clamp(data[i], -max, max);
}
If you know that your buffer size is always a multiple of eight, because it happens to be an invariant of your code (you programmatically guaranteed it to always be true), injecting that fact into the compiler lets it generate faster code with fewer instructions (as it auto-vectorises the loop, it can skip the prologue and epilogue). The above syntax works with MSVC and ICC; Clang spells this as __builtin_assume instead.
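Since the spelling differs per compiler, a portable wrapper macro is common in practice. Here is a sketch (the macro name ASSUME is mine; GCC, which lacks a direct assume intrinsic, can emulate one via __builtin_unreachable):

```cpp
#include <algorithm>
#include <cstddef>

// Portable wrapper over the compiler-specific assumption intrinsics.
// Note: the condition must be free of side effects.
#if defined(_MSC_VER) && !defined(__clang__)
  #define ASSUME(cond) __assume(cond)
#elif defined(__clang__)
  #define ASSUME(cond) __builtin_assume(cond)
#elif defined(__GNUC__)
  // GCC has no direct assume intrinsic; emulate it: if the condition were
  // false, control would reach __builtin_unreachable(), so the optimiser
  // may treat the condition as always true.
  #define ASSUME(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)
#else
  #define ASSUME(cond) ((void)0)
#endif

void limiter(float* data, std::size_t size, float max) {
    ASSUME(size % 8 == 0);
    for (std::size_t i = 0; i < size; ++i)
        data[i] = std::clamp(data[i], -max, max);
}
```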

This is a very useful intrinsic for low-latency applications. But it is also very dangerous: if the assumption you wrote is false, you are instantly in undefined behaviour land. So this is a really sharp knife that should only ever be wielded by experts. It should only be applied locally, to enable very specific optimisations that you know to be safe but that the compiler wouldn’t be able to do otherwise. But if you know what you are doing, this might give you those extra microseconds!

I have been working on standardising this feature in the form of an attribute [[assume(expr)]]. It will take some time and several more committee meetings before this feature can actually end up in C++23. But in Prague I made some significant progress: EWG voted that they want this feature, and told me to come back at the next meeting with wording.

Well, EWG, your wish is my command! After that session, during one evening at the hotel bar in Prague, I managed to produce wording for assumptions which I believe specifies it correctly (thanks to all the wording experts I was pestering that evening…). I will now get the specification checked by as many compiler implementers as possible, conduct some benchmarks, and come to the next committee meeting with an updated paper containing all of that. I am pretty confident the feature will make further progress there.


std::audio

Finally, what about P1386, also known as std::audio, our proposal to add an API for audio I/O to the C++ standard library?

The paper got stuck, and we didn’t make much progress with it over the last half year. The reasons are twofold: first, my co-authors and I just can’t find the time to push the implementation forward, because it is so much work and we all have day jobs. And second, the committee can’t yet make up its mind about whether or not it wants such a proposal at all.

In P1386, we propose a low-level API, modelled after existing practice in established cross-platform libraries such as portaudio and JUCE. It lets you enumerate the audio devices currently available (such as speakers, headphones, or external soundcards), connect to one of them, hook up an audio callback and start moving raw bytes around, representing audio in linear PCM. If the device configuration changes, you get another callback.
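A minimal sketch of that callback-driven shape might look like this – all names here are illustrative stand-ins, not P1386's actual interface:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Hypothetical stand-ins for the style of low-level API described above;
// these are not the proposal's actual types or names.
struct audio_buffer {
    std::vector<float> samples;  // interleaved linear PCM
};

class audio_device {
public:
    // Hook up the audio callback that the device will drive.
    void connect(std::function<void(audio_buffer&)> cb) {
        callback_ = std::move(cb);
    }

    // A real device would invoke the callback from its audio thread at the
    // hardware rate; here we just simulate a single processing cycle.
    void process_once(audio_buffer& buffer) {
        if (callback_) callback_(buffer);
    }

private:
    std::function<void(audio_buffer&)> callback_;
};
```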

Some experts have questioned this approach. For example, on mobile platforms, you don’t get access to the devices. Instead, their APIs are usually built around “audio sessions”. Your app can tell the OS which type of content it wants to play (music, voice, etc), and the OS then routes the audio automatically. If the user plugs in headphones, the OS will automatically switch your app to that. If the user gets a phone call, the OS will fade out / interrupt your audio in the way that it thinks sounds nicest for the type of content being played. And then there are proprietary encoding formats, and many other things that P1386 doesn’t cover at all. Do we need to have a completely different type of audio API, much more high level, to accommodate what the regular developer would want from a standard audio API?

To answer these questions, I’ve been involved in another paper, which was first discussed at this Prague meeting: P2054 “Audio I/O Software Use Cases”. The discussion there was very productive and led to a post-Prague iteration of the paper which can be found here: P2054R1.

The idea is that we try to collect all use cases that different types of users would like to be covered by a standard C++ audio library. Some of these use cases are more reasonable than others, and together they serve to map out the edges of such a possible facility.

I believe that this paper covers some use cases quite well, notably for mobile audio apps and professional music tech apps (because we have some expertise there). We tried to cover embedded audio as well, by talking to developers from this field.

Other areas like game audio, telecommunication apps and audio analysis are much less well covered. So if you are working in one of those fields, please check out the paper and let us know what other use cases not listed there you would expect to be covered by a std::audio facility!

Once this list of use cases looks complete enough, the plan (as approved by SG13 in Prague) is to conduct a survey where we let people vote on which cases they think std::audio should serve. My hope is that, after this survey, we will know what general direction the committee wants std::audio to pursue, and how well P1386 is covering it. Which will finally unblock the technical work. So, even though that paper hasn’t progressed much lately, you can look forward to more progress in the near future!

Final words

So those were the things I was working on in Prague. I have to say that the whole organisation of the meeting was extraordinary. Even though we had more people attending a committee meeting than ever before, the organisers handled this challenge exceptionally well and put on an amazing meeting, including an unforgettable gala dinner inside a stunning gothic church! Congratulations to Avast, and Hana Dusíková in particular, for pulling this off.

In these uncertain times, I don’t know when we will meet again. The next ISO C++ committee meeting in Varna, Bulgaria, got cancelled, as ISO halted all in-person committee meetings until the end of June. The next scheduled one is in November 2020 in New York – let’s hope the current Covid-19 crisis will be over by then. C++20 turned out really well, and I am sure C++23 will turn out great, too! Here’s to another three years.

Stay safe and healthy, everyone!