Friday, June 20, 2014

Deal with OOM conditions

Imagine the following sci-fi scenario: your code is in the middle of aborting a nuclear missile launch fired by mistake; it needs to allocate some memory, and unfortunately the BOFH is using all the physical and virtual memory because he just can.

What shall we do?

The lives of thousands of people depend on that function you need to call, passing it some freshly allocated memory. The operator new (unless the placement one is called) deals with an OOM condition by throwing a bad_alloc, or by returning a null pointer in case the nothrow version is used.

But as a programmer, what can you do when a bad_alloc is thrown or a null pointer is returned?

There are several options, but the most "nifty" one is the following.

When the operator new is not able to allocate the required memory it calls a function; at this point that function can try to free some memory, throw an exception, or exit the program. Exiting the program is not a good option, I have to say: the caller of operator new (or operator new[] for that matter) expects a bad_alloc (or a derivation of it) or a nullptr (in case the nothrow version was used).

A programmer is able to specify the function to be called in case of OOM with the following function:

The operator new will keep calling the specified function every time it tries to allocate memory and doesn't succeed. A programmer can exploit this mechanism in the following way:
  1. Allocate a bunch of memory at program startup, reserving it for future use.
  2. Install a new handler that frees the reserved memory; in case the reserved memory was already released, it throws bad_alloc. 
The following code does exactly what is described:

Issuing a ulimit -v 100000 before running it (in order to decrease the memory that can be used), the output is the following:

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)

As you can see, at least once we were able to free some memory, and the first allocation after the OOM condition succeeded precisely because some memory was freed by us; unfortunately there was no more space on the second call. You have no excuse anymore for a crash due to an OOM condition: at the very least you can free the reserved memory, issue a warning, write to the logs, send a message to a pager, or take whatever action warns that soon the memory will be over for real!

Wednesday, May 21, 2014

Prevent exceptions from leaving destructors. Now!

Any seasoned C++ programmer should know that permitting an exception to leave a destructor is bad practice; googling for "throw exception destructor" leads to enough results to convince yourself of that (see for example Meyers's "More Effective C++", Item 11). Most of the arguments are: "if an object is destroyed during a stack unwinding, then throwing an exception triggers the terminate function", or "if an STL container is being destroyed, it starts to destroy all its contained elements, and given the fact that STL containers do not expect an exception to be thrown, it will not complete the destruction of the remaining objects".

If you are still not convinced by those arguments, then I hope you will buy at least the following. Let's look at a possible implementation of a unique pointer (apart from the -> and * operators):

and a possible use:
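The original snippets are missing; the following is a guessed reconstruction of both the class and the problematic use (only the names AutoPtr, reset and thePointer come from the text, the rest is my own):

```cpp
#include <stdexcept>

template <typename T>
class AutoPtr {
public:
    explicit AutoPtr(T* p = nullptr) : thePointer(p) {}
    ~AutoPtr() { delete thePointer; }

    AutoPtr(const AutoPtr&) = delete;
    AutoPtr& operator=(const AutoPtr&) = delete;

    T* get() const { return thePointer; }

    // the problematic reset: if ~T throws, thePointer is never updated
    void reset(T* p) {
        delete thePointer;
        thePointer = p;   // never reached when the destructor throws
    }

private:
    T* thePointer;
};

// a type whose destructor throws: exactly the bad practice under discussion
struct Angry {
    ~Angry() noexcept(false) { throw std::runtime_error("boom"); }
};

void use() {
    AutoPtr<Angry> a(new Angry);
    a.reset(nullptr);  // deletes the stored Angry, whose destructor throws:
                       // thePointer still points at the dead object, and when
                       // "a" goes out of scope during the unwinding it is
                       // deleted a second time
}
```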

As you can see, AutoPtr::reset() deletes the stored pointer and then is not able to nullify it because of the throw; as soon as the "a" instance goes out of scope during the stack unwinding, ~AutoPtr deletes thePointer again. A possible implementation of reset can be the following:
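That snippet is also lost; a plausible version of it, assuming the idea was to update thePointer before deleting, so the old object can never be deleted twice:

```cpp
template <typename T>
class AutoPtr {
public:
    explicit AutoPtr(T* p = nullptr) : thePointer(p) {}
    ~AutoPtr() { delete thePointer; }

    AutoPtr(const AutoPtr&) = delete;
    AutoPtr& operator=(const AutoPtr&) = delete;

    T* get() const { return thePointer; }

    void reset(T* p) {
        T* old = thePointer;
        thePointer = p;   // the stored pointer is updated first...
        delete old;       // ...so a throw from ~T cannot cause a double delete
    }

private:
    T* thePointer;
};
```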

but unfortunately it doesn't save you! Indeed, in the C++11 specification you can "find" the following:
12.4.3: A declaration of a destructor that does not have an exception-specification is implicitly considered to have the same exception-specification as an implicit declaration (15.4).
and again:
Whenever an exception is thrown and the search for a handler (15.3) encounters the outermost block of a function with an exception-specification that does not allow the exception, then, — if the exception-specification is a dynamic-exception-specification, the function std::unexpected() is called (15.5.2), — otherwise, the function std::terminate() is called (15.5.1).
which means that throwing an exception from a DTOR terminates your program, and it doesn't matter whether a stack unwinding is going on or not.

This simple example

does generate a crash if compiled in C++11 mode with gcc (4.8 and 4.9) and clang (3.5), while with Intel icc 14.01 neither std::unexpected nor std::terminate is called (time to file an icc bug?).

Sunday, March 9, 2014


C++11 introduced the ability to "ref-qualify" methods. The best-known method qualifier is the const one:
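The snippet is missing; a tiny guessed example of a const-qualified method (the class is my own invention):

```cpp
struct Counter {
    // const-qualified: callable on const instances, cannot modify members
    int value() const { return v; }
    // unqualified: mutating, rejected on const instances
    void increment() { ++v; }
private:
    int v = 0;
};
```

A const reference to a Counter can call value() but not increment().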

however, now it is also possible to ref-qualify *this:
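Again the snippet is lost; a sketch of what ref-qualified overloads look like (names are mine):

```cpp
#include <string>
#include <utility>

struct Greeter {
    std::string name = "world";
    // & : picked when *this is an lvalue
    std::string hello() const & { return "hello " + name; }
    // && : picked when *this is a temporary, free to steal from the object
    std::string hello() && { return "hello " + std::move(name); }
};
```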

Let's see how this can be of any use. Imagine having a factory building heavy objects and returning them by copy this way:
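The factory snippet is missing; a guessed version (only the names Jumbo and getJumboByCopy come from the text, the rest is invented):

```cpp
#include <vector>

struct Jumbo {
    std::vector<int> payload = std::vector<int>(1024);  // a "heavy" member
};

class Factory {
public:
    Jumbo getJumboByCopy() const { return jumbo; }  // always copies the member
private:
    Jumbo jumbo;
};
```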

In the following scenario a useless copy is made that we would like to avoid:

We can avoid the copy, if Jumbo is movable, by overloading the method getJumboByCopy for the case where the object on which I'm calling it is a temporary:

To be honest, the example shows a scenario with other problems than the one mentioned (for instance, if the object Jumbo is so big, why permit the copy at all?), but I hope you got the idea.

Sunday, February 16, 2014

The under-evaluated delete specifier

As you should know by now, in C++11 we are able to disable certain signatures in our classes. Most of the time this is used to disable the copy constructor and the assignment operator:
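The snippet is missing; the usual pattern looks like this (class name is mine):

```cpp
#include <type_traits>

class NonCopyable {
public:
    NonCopyable() = default;
    NonCopyable(const NonCopyable&) = delete;             // copy construction disabled
    NonCopyable& operator=(const NonCopyable&) = delete;  // copy assignment disabled
};

static_assert(!std::is_copy_constructible<NonCopyable>::value, "no copies");
static_assert(!std::is_copy_assignable<NonCopyable>::value, "no copy assignment");
```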

This way is much better than the old one, where the programmer had to declare both members private and not implement them, getting an error either at compile time or at link time.

The delete specifier can also disable some automatic overloads; consider indeed the following code:

It works perfectly, but if we do not want that automatic conversion (note that explicit cannot be used here), then the delete specifier can come in handy:

A typical mistake in C++ is to store a reference (or a pointer for that matter) to a temporary object, leading to a disaster. This mistake is one of the arguments Java guys put on the table when they are arguing against C++.

The following class is a perfectly working class, but used wrongly it can store a reference to a temporary object:

The delete specifier can help us again: indeed, we can disable the constructor taking an rvalue reference and avoid such use:

Now the code above will lead to a compilation error in case we are trying to build the class with a temporary string. You should note that a "const &&" is needed: without the const specifier, passing the result of a "const std::string foo()" will not lead to a compilation error.

Time to add to my coding rules a new rule!

Sunday, January 5, 2014

A bad workman always blames his tools (Miguel de Icaza inside)

Recently Miguel de Icaza revealed in his blog that they regret the decision to develop Moonlight in C++; you can read about it here:

My first thought was: wow, he found a way to communicate with a parallel universe where a "Miguel de Icaza" took the decision to go for C, and he is now comparing the two projects.

Anyway, he is Miguel de Icaza after all, he is behind the Gnome and Mono projects. Chapeau!
I was curious to look at their code base, and after having opened some sources here and there I was horrified, and I mean it.

Issues I found (just opening a few files, not doing a full code inspection, and without any static analysis check):

  • CTORs are not using initialization lists
  • Arguments of CTORs and functions are not taken as const references, basically copying all the arguments passed
  • Methods that should be marked as const are not (as an example, KeyTime::HasPercent); this means that const correctness is not used around the code.
  • Classes not meant to be modified after construction do not have their members marked const
  • Not all classes have all their members initialized
  • Classes have their destructor marked as virtual even when not needed; also, if the DTOR is marked as virtual, why is the copy CTOR not implemented or "disabled"?
  • A list reimplemented from scratch: instead of making the List a template, the List is a standard bidirectional implementation with a Node hosting only the next/prev pointers and a virtual destructor (nice vptr overhead when not needed), and then a class derived from Node, a template GenericNode. The user of this class has to create his own class inheriting from GenericNode (see EventObjectNode).
  • brush.cpp: you first accept a possible division by zero and then you fix the result:

double sx = sw / width;
double sy = sh / height;
if (width == 0)
    sx = 1.0;
if (height == 0)
    sy = 1.0;

  • collection.cpp: the following statement looks suspicious, given that n % 1 is always 0 and can never equal 1:

if (n == 0 || n % 1 == 1) {...}

General remarks on the code:
  • I haven't seen a single class made non-copyable
  • In around 250K lines of code, just a mere 128 asserts
  • Variables assigned twice without using the first assigned value (see as an example the CornerRadius::FromStr implementation)
  • Variable scopes could be reduced
  • C-style pointer casts
  • Unused variables

So, dear Miguel de Icaza, please fix your code base and then we can talk about performance and memory efficiency. Unless your regret has to be read as the following: "We regret having chosen C++ without having a deep knowledge of it and without any best practices to follow".

Wednesday, August 28, 2013

STL is not thread safe, get over it

STL is not thread safe, get over it. Why should it be, after all? The STL is plain C++ code; the compiler doesn't even know of the existence of the STL, it's just C++ code released with your compiler. What the STL has in common with the C++ language is the fact that it's standardized. Given that C++ and the STL are standardized, you can expect to have the STL implemented on every platform with the same guarantees enforced by the standard. You are not even obliged to use the STL deployed with your compiler; indeed there are various versions out there, see the RogueWave one for example.
Let's take as an example the std::list, and let's suppose for a moment that the STL implementation were thread safe; some problems arise:

  1. If thread safety is not needed you will get extra, unneeded overhead
  2. What shall a thread calling std::list::pop_back on an empty list do?
    • Wait for a std::list::push?
    • Return with an "error"?
    • Throw an exception?
    • Wait a certain amount of seconds for an entry to become available?
  3. Should it be usable in an environment with a single producer / single consumer? In that case it would be possible to implement it without locks.
  4. Shall multiple readers be permitted?
Yes, sure, you can solve all the points above with a policy template, but just imagine your users' complaints.

Well, again, get over it: the standardized STL is not thread safe; you have to create your own thread-safe list embedding a real std::list. After all, making a wrapper around an STL container is such an easy exercise that if you find it difficult to do yourself, then you have to ask: "Am I ready to start with multithreaded programming, even if someone provides me an off-the-shelf thread-safe std::list?"
Consider that two different instances of STL containers can be safely manipulated by different threads.
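As a sketch of the wrapper exercise suggested above (the class name and the policy choices are my own; I picked the "return an error" answer to the pop_back question from bullet 2):

```cpp
#include <list>
#include <mutex>

// a minimal thread-safe wrapper embedding a real std::list
template <typename T>
class SafeList {
public:
    void push_front(const T& v) {
        std::lock_guard<std::mutex> guard(m);
        data.push_front(v);
    }

    // policy: report failure on an empty list instead of blocking or throwing
    bool pop_back(T& out) {
        std::lock_guard<std::mutex> guard(m);
        if (data.empty())
            return false;
        out = data.back();
        data.pop_back();
        return true;
    }

private:
    std::list<T> data;
    std::mutex m;
};
```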

PS: I have written about std::list instead of the more widely used std::vector because std::vector, due to its own characteristics, has to be used in a more "static" way with respect to a std::list, and using a std::vector as a container shared by multiple threads (producer/consumer) is a plain wrong choice.

Thursday, June 13, 2013

Code inspecting Stroustrup (TC++PL4) (2nd issue)

Since I learned C++ reading one of the first editions of TC++PL, and even if I have been programming in C++ since 2000 or so, I'm reading the new TC++PL4 carefully as if the language were totally new to me. It seems my last post about an error found in this book will not be the only one, and here we are again.
He introduces conditions and how two threads can interact with each other communicating using events; the classical producer / consumer interaction exchanging Messages through a queue is presented, and this is the poor implementation proposed:

  class Message {   // object to be communicated
    // ...
  };

  queue<Message> mqueue;     // the queue of messages
  condition_variable mcond;  // the variable communicating events
  mutex mmutex;              // the locking mechanism

  void consumer()
  {
     while(true) {
        unique_lock<mutex> lck{mmutex};
        while (mcond.wait(lck)) /* do nothing */;
        auto m = mqueue.front();
        mqueue.pop();
        lck.unlock();
        // ... process m ...
     }
  }

  void producer()
  {
     while(true) {
        Message m;
        // ... fill the message ...
        unique_lock<mutex> lck{mmutex};
        mqueue.push(m);
        mcond.notify_one();
     }
  }

This implementation is affected by at least three issues:

  • Unless very lucky, the queue will grow indefinitely: that's because basically the consumer will wait at each cycle even if the queue contains something; at the same time it has a chance (I repeat, a "chance") to exit from condition_variable::wait() only each time the producer puts something in the queue.
  • The consumer can miss the condition_variable::notify_one event: indeed, if the producer does the notify_one() but the other thread hasn't yet executed the wait(), the consumer will block for no reason
  • The producer holds the unique_lock for more time than needed; the mutex has to protect only the queue, not the condition as well
Let's see how this producer / consumer should have been implemented:

  void consumer()
  {
     while(true) {
        unique_lock<mutex> lck{mmutex};
        while (mqueue.empty()) {  // the empty condition has to be rechecked: the thread
           mcond.wait(lck);       // can get a spurious wakeup without any thread doing a notify
        }
        auto m = mqueue.front();
        mqueue.pop();
        lck.unlock();
        // ... process m ...
     }
  }

  void producer()
  {
     while(true) {
        Message m;
        // ... fill the message ...
        {
           unique_lock<mutex> lck{mmutex};
           mqueue.push(m);
        }  // this extra scope is here to release the mmutex asap
        mcond.notify_one();
     }
  }

In my opinion, this should have been the version of the producer/consumer in TC++PL4: as you can see, with a simple extra scope and the right while(...), the issues reported in the bullets are solved.

There is another problem, though I have to admit that most of the time it is a minor one:
  • The producer can issue notify_one() even when not needed, and this can be a performance issue 
To address it, the producer has to "forecast" whether the consumer can be in a blocked state, which can be the case only if the queue is empty after having acquired the mmutex; this is the final version of the producer:

  void producer()
  {
     while(true) {
        Message m;
        bool notifyIsNeeded = false;
        // ... fill the message ...
        {
           unique_lock<mutex> lck{mmutex};
           if (mqueue.empty())
              notifyIsNeeded = true;  // the consumer may be blocked waiting
           mqueue.push(m);
        }  // this extra scope is here to release the mmutex asap
        if (notifyIsNeeded)
           mcond.notify_one();
     }
  }

Writing correct code is not easy, and writing correct multi-threaded code is damn hard.