
C++ Reference Guide


The Debate on noexcept, Part II

Last updated Jan 1, 2003.

In the previous part I outlined the limitations of dynamic exception specifications and discussed the adoption of the new C++0x noexcept keyword into the FCD. This part will dig deeper into the technical aspects of noexcept and the controversy surrounding it.

Why noexcept is Needed

The proponents of noexcept claim that marking certain functions noexcept can improve performance -- especially in Standard Library algorithms and containers -- compared with using throw(). The deprecated throw() exception specification forces the compiler to generate auxiliary code to intercept runtime violations of a function's exception specification. That auxiliary code is needed for unwinding the stack, among other things. This property of dynamic exception specifications is what made them rather useless: if the implementation must always assume that an exception might be thrown -- even from a function declared throw() -- why bother with exception specifications in the first place? Unlike throw(), noexcept allows the compiler to forgo generating that auxiliary code. As a result, the compiler can produce more efficient code -- both for the function flagged noexcept and for code that calls such a function.

Potential Complexities

Troubles start when the noexcept guarantee is violated. Roughly speaking, we can divide noexcept violations into two major categories:

Violations that the compiler can detect. According to the proposal, the compiler should issue a diagnostic (either a warning or an error) when it can tell that a function declared noexcept will violate this guarantee, as in:

int func() noexcept
{
    throw SomeException("failure");
}

func() is ill-formed. The compiler can see that func() always throws an exception, thus violating its noexcept guarantee. A more realistic example of noexcept violations requires more subtle code analysis:

int f() noexcept
{
    std::string s("test");
    return 0;
}

Should f() pass compilation? A rigorous exception safety model would require the compiler to issue at least a warning, because the constructor of std::string might throw (std::bad_alloc, for example). So what should the compiler do in this case? The proposal doesn't prescribe a normative policy. However, the tendency is to accept this code, following the "trust the programmer" maxim.

What should the implementation do if f() throws an exception at runtime? According to the proposal, std::terminate() shall be called, without the stack necessarily being unwound. Some would prefer to leave the handling of noexcept violations implementation-defined or even undefined; the FCD, however, requires that std::terminate() be called.

Violations that the compiler cannot detect. Suppose you have a function f1() that calls another function f2() defined in a different translation unit. f1() is declared noexcept, whereas f2() isn't. When the compiler processes the definition of f1(), it cannot be certain that f1() will not throw, because it has no access to the definition of f2(). Again, the current proposal doesn't prescribe a universal policy for such cases. However, it appears that the compiler should trust the noexcept guarantee of f1() -- so long as there's no positive evidence contradicting it. Consider a source file that looks like this:

void f2();

void f1() noexcept
{
    f2();
}

The compiler will accept the code without a hitch. However, in the following example the compiler is expected to complain:

void f2()
{
    throw "error!";
}

void f1() noexcept // should cause a compilation error
{
    f2(); // f2() always throws
}

This partial compile-time checking policy raises two more issues.

Compilation Complexity and Possible Redundancy

The first issue is compilation complexity. If the compiler is expected to perform source-code analysis that "sees through" function calls, build times will increase considerably.

You may have noticed something strange here -- noexcept seems redundant. If the compiler verifies that the noexcept guarantee isn't violated, why not leave this task entirely to the compiler? The compiler knows (at least in some cases) which functions might throw and which will never throw. Why force the programmer to declare functions noexcept? This argument may sound familiar: it's exactly my argument against using inline. Both keywords suggest an optimization strategy that the compiler is free to reject. So, does the compiler need noexcept at all?

Theoretically, we could let the compiler express the notion of a noexcept function with some name-mangling magic. That would ensure (with a little help from the linker) that calling a function flagged noexcept never incurs the overhead of exception handling. However, no one has proposed such a model because it would break the current C++ ABI. Although the C++ ABI is platform-dependent, it's relatively stable. Programmers get upset when it changes because they have to recompile and relink every piece of code, including legacy shared libraries. Besides, such a model might make C++ compilation and linkage unduly complex and time-consuming.

In the last part of this series I will show why noexcept isn't truly redundant, and conclude whether it's a useful feature or one that should be scrapped.