A look into boost::thread
published at 11.09.2013 20:36 by Jens Weller
In the third week of September I'll be giving a two-day training on boost::thread, so I thought it would be nice to post a short overview of the boost thread library...
Now before I get started with boost::thread, I'd like to point out that task-based concurrency is the better approach if you need a lot of recurring concurrency in your program. While the boost libraries do not yet include real task-based concurrency, there are libraries such as Microsoft's PPL, Intel's Threading Building Blocks, HPX, Apple's libdispatch (but that's C) and the Qt Concurrency add-on that can provide you with task-based concurrency in your program. There are several reasons for preferring tasks, but the two most important ones are: first, the more synchronisation a multithreaded program needs, the more its performance decreases, so adding new cores and threads does not give you the speedup you would expect; second, low-level threading and concurrency is hard to get right and can be very error prone. Deadlocks and improperly protected resources are just two of the many possible errors.
boost::thread overview
First, let's have an overview of the dependencies of boost::thread
[Image: dependencies of boost::thread - ../../files/blog/bda/boost_thread.png]
boost::thread and the C++11 transition
First I want to emphasize that during the last releases of boost there has been a lot of work on boost::thread. Most of this work gives boost::thread an interface similar to std::thread. I've used std::thread earlier this year to count words. So 1.54 already contains support for .then on futures, which is only proposed for C++14 or even later. boost is adopting new features quite fast here, but this transition of course also brings a few subtle bugs with it, so be careful with the newer features. For my training on boost::thread I used the documentation of 1.49 (the version my client uses) and compiled the code against 1.49 for now. boost::thread is one of the older libraries in boost and has seen various changes, but the version from 1.49 is pretty stable and mostly comparable to the one in 1.54, except for the changes supporting std::thread's interface. std::thread in C++11 largely follows the design of boost::thread, but standardization often brings a few minor tweaks. For details you can refer to the changelogs of boost::thread between 1.50 and 1.54.
boost::thread
The library consists of a few classes and helper functions. As far as I understand there is no boost::thread namespace, so most classes live directly in the boost namespace. The most important is the thread class itself, which holds the internal thread handle and offers the methods needed to communicate with the running thread. An instance of boost::thread is movable, but cannot be copied. A short example:
void hello_thread() { std::cout << "hello thread" << std::endl; }
void start_thread() { boost::thread t(hello_thread); } // t goes out of scope here
In this case hello_thread is a simple function printing "hello thread" to the console. This piece of code looks innocent, but in newer versions of boost::thread, and also with std::thread, it will crash your application by calling std::terminate. This happens if the stack object t is destroyed before the thread has finished running. For boost::thread that is only true for versions > 1.50, so older code might still rely on the destructor of a running boost::thread instance calling detach instead of std::terminate. So, in order to do things properly, the example should call either join or interrupt. You can test via joinable whether a thread can be joined; join will wait as long as the thread needs to finish. Calling the interrupt method will cause the thread to throw a boost::thread_interrupted exception if it reaches or is currently at an internal interruption point, which could for example be a call to sleep.
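As a minimal sketch of the corrected version - hello_thread being the simple printing function from above - the thread is joined before the local object goes out of scope:

#include <iostream>
#include <boost/thread.hpp>

void hello_thread()
{
    std::cout << "hello thread" << std::endl;
}

int main()
{
    boost::thread t(hello_thread);
    if(t.joinable())
        t.join();   // wait here until hello_thread has finished
    return 0;
}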
sleep brings us to the namespace this_thread, which refers to the thread the code is currently running in. this_thread::sleep(boost::posix_time) will let the thread sleep for the chosen time. This also acts as an interruption point. this_thread::get_id will give you the current thread id. this_thread::yield will cause the thread to give up its current timeslice and let the OS schedule the next thread earlier.
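A short sketch of these helpers; the sleep duration and the interrupt call are just for illustration:

#include <iostream>
#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

void worker()
{
    try
    {
        std::cout << "running in thread " << boost::this_thread::get_id() << std::endl;
        boost::this_thread::sleep(boost::posix_time::milliseconds(500)); // interruption point
        boost::this_thread::yield(); // give up the rest of the current timeslice
    }
    catch(const boost::thread_interrupted&)
    {
        // the sleep above was interrupted via thread::interrupt()
    }
}

int main()
{
    boost::thread t(worker);
    t.interrupt(); // worker will throw boost::thread_interrupted at the sleep
    t.join();
}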
There is also boost::thread_group, which lets you create a number of threads and offers convenient functions to manage them. boost::thread_group can be the very basic building block of a threadpool.
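A small sketch of how such a group could be used; worker is just a made-up thread function:

#include <iostream>
#include <boost/thread.hpp>

void worker()
{
    std::cout << "thread " << boost::this_thread::get_id() << " working" << std::endl;
}

int main()
{
    boost::thread_group group;
    for(int i = 0; i < 4; ++i)
        group.create_thread(worker); // the group owns the newly started thread
    group.join_all();                // wait for every thread in the group
}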
Synchronisation
You cannot use boost::thread without entering the multithreaded domain. As the main function already runs in a thread, by starting another thread you already have two of them. You will need to synchronize the access to resources shared among threads. The most basic way to do this is using a boost::mutex. Calling mutex.lock() will protect the following code from being executed on another thread in parallel. This section ends with a call to unlock. Calling unlock lets the next thread, which might be waiting at the lock's position, execute the critical code. Calling lock and especially unlock directly on the mutex can be a bad idea: the code in between could throw an exception, and unlock would never be called. For this purpose the lock_guard class exists, which simply locks the mutex in its constructor and unlocks it in the destructor. So lock_guard protects a scope against other threads as soon as it is instantiated with a mutex. There are also more advanced lock classes, such as unique_lock or shared_lock. The unique_lock class is used for write access, as then the lock needs to be exclusive to the thread, while shared_lock allows several threads to share a resource for reading.
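A minimal sketch of protecting a shared counter with a mutex and a lock_guard; the variable names are made up for illustration:

#include <boost/thread.hpp>

int shared_counter = 0;     // resource shared between threads
boost::mutex counter_mutex; // protects shared_counter

void increment()
{
    for(int i = 0; i < 1000; ++i)
    {
        boost::lock_guard<boost::mutex> guard(counter_mutex); // locks here, unlocks at scope exit
        ++shared_counter;
    }
}

int main()
{
    boost::thread t1(increment), t2(increment);
    t1.join();
    t2.join();
    // every increment was protected, so shared_counter is now 2000
}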
It is important to note that often you will also need to protect your - actually thread-safe - reads from other threads via a shared_lock. This protects the underlying data from being modified while reading. Without a lock, a thread that writes to the resource could obtain a lock on it while you are still reading. This is especially true for containers.
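A possible reader/writer sketch using boost::shared_mutex; the container and the function names are chosen only for illustration:

#include <vector>
#include <numeric>
#include <boost/thread.hpp>
#include <boost/thread/shared_mutex.hpp>

std::vector<int> data;
boost::shared_mutex data_mutex;

void writer(int value)
{
    boost::unique_lock<boost::shared_mutex> lock(data_mutex); // exclusive access for writing
    data.push_back(value);
}

int read_sum()
{
    boost::shared_lock<boost::shared_mutex> lock(data_mutex); // several readers may hold this at once
    return std::accumulate(data.begin(), data.end(), 0);
}

int main()
{
    boost::thread t1(writer, 1), t2(writer, 2);
    t1.join();
    t2.join();
    return read_sum() == 3 ? 0 : 1;
}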
boost::thread also offers, with condition variables, a more advanced mechanism for signaling and waiting between threads. A reading thread can call wait on its shared condition variable, and the processing thread can call notify_one or notify_all once new data is available to process. notify_all will only notify threads that are currently waiting; a thread that starts waiting afterwards is not woken by an earlier notification.
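A small sketch of the typical wait/notify pattern; the queue and the names are illustrative:

#include <queue>
#include <iostream>
#include <boost/thread.hpp>

std::queue<int> work_queue;
boost::mutex queue_mutex;
boost::condition_variable queue_condition;

void producer()
{
    {
        boost::lock_guard<boost::mutex> lock(queue_mutex);
        work_queue.push(42);
    }
    queue_condition.notify_one(); // wake one waiting consumer
}

void consumer()
{
    boost::unique_lock<boost::mutex> lock(queue_mutex);
    while(work_queue.empty())       // also guards against spurious wakeups
        queue_condition.wait(lock); // releases the mutex while waiting
    std::cout << "got " << work_queue.front() << std::endl;
    work_queue.pop();
}

int main()
{
    boost::thread c(consumer), p(producer);
    p.join();
    c.join();
}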
Barriers are also supported by boost::thread; boost::barrier is the corresponding class for this. When constructing the barrier you have to tell it how many threads shall wait on it. Then all threads will wait at the point where they call barrier::wait until the last thread does so; at that moment all of the waiting threads are released. This is useful if you want to synchronize the start of a thread group.
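A minimal sketch in which four threads only start their output after all of them have reached the barrier:

#include <iostream>
#include <boost/thread.hpp>
#include <boost/thread/barrier.hpp>

boost::barrier start_barrier(4); // four threads have to arrive before any may continue

void worker()
{
    start_barrier.wait(); // blocks until the fourth thread calls wait()
    std::cout << "thread " << boost::this_thread::get_id() << " released" << std::endl;
}

int main()
{
    boost::thread_group group;
    for(int i = 0; i < 4; ++i)
        group.create_thread(worker);
    group.join_all();
}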
Futures
There is also support for futures and the corresponding classes promise and packaged_task. A future is a handle to a value that is calculated asynchronously, either in a thread or locally. You can query its value with its get method, which will block until the calculation has finished. boost supports futures via the classes unique_future and shared_future, which share a common interface:
- get() - will block until the value is ready
- is_ready() - true if value is calculated
- has_exception() - exception was thrown instead of value being calculated
- has_value() - future has an available value.
- wait() - waits for the result, and also calls a possible callback set on its task
- timed_wait() - waits for the result for a given time span (a templated timed_wait method)
- timed_wait_until() - takes a boost::system_time and waits until that point in time
In order to work properly with the future classes in boost, one also needs the packaged_task class, which can be seen as the producer of the value that the owner of the future consumes. A simple example:
#include <iostream>
#include <vector>
#include <algorithm>
#include <numeric>
#include <cstdlib>
#include <ctime>
#include <boost/thread.hpp>

int fill_random()
{
    return std::rand() % 1000;
}

int random_sum()
{
    std::vector<int> vec(100,0);
    std::generate(vec.begin(),vec.end(),fill_random);
    return std::accumulate(vec.begin(),vec.end(),0);
}

int main(int argc, char** argv)
{
    std::srand(std::time(0));

    boost::packaged_task<int> task(random_sum);
    boost::unique_future<int> task_future = task.get_future();
    boost::thread task_thread(boost::move(task));
    std::cout << task_future.get() << std::endl;
    task_thread.join(); // join before the thread object is destroyed

    boost::promise<int> mypromise;
    boost::unique_future<int> promise_future;
    mypromise.set_value(42);
    promise_future = mypromise.get_future();
    std::cout << promise_future.get() << std::endl;
    return 0;
}
packaged_task is used to execute the task in a different thread, and also lets the user access the corresponding future. boost::promise is a little different: it lets you set the value directly, and so kind of emulates the future calculation. Since boost 1.53, boost's implementation of future also offers the ability to set a callback via .then, which is called once the calculation has finished. There is also boost::async, which mimics std::async from C++11.
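A hedged sketch of how boost::async and .then could be combined; the configuration macros and exact behaviour depend on the boost version (this assumes boost 1.53 or newer with the experimental continuation support enabled), and calculate is a made-up function:

// sketch only: .then is still experimental and needs these defines before the include
#define BOOST_THREAD_PROVIDES_FUTURE
#define BOOST_THREAD_PROVIDES_FUTURE_CONTINUATION
#include <iostream>
#include <boost/thread/future.hpp>

int calculate() { return 6 * 7; }

int main()
{
    boost::future<int> f = boost::async(calculate);
    boost::future<void> done = f.then([](boost::future<int> result)
    {
        std::cout << "result: " << result.get() << std::endl;
    });
    done.get(); // wait until the continuation has run
}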
Thread local storage
Sometimes a thread needs the ability to access variables that are visible only to the code running inside that thread. boost::thread supports this through the class thread_specific_ptr<T>, which will allocate the variable locally in the thread when needed. The class has the semantics of a pointer, and you can also access the pointer to the value via get(). This can also be used to initialize certain values per thread.
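A small sketch with a hypothetical per-thread counter:

#include <iostream>
#include <boost/thread.hpp>
#include <boost/thread/tss.hpp>

boost::thread_specific_ptr<int> counter; // every thread gets its own instance

void work()
{
    if(counter.get() == 0)
        counter.reset(new int(0)); // lazily initialize the value for this thread
    ++(*counter);                  // only changes the current thread's copy
    std::cout << boost::this_thread::get_id() << ": " << *counter << std::endl;
}

int main()
{
    boost::thread t1(work), t2(work);
    t1.join();
    t2.join();
}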
Threads vs. Tasks
As stated at the beginning, task-based parallelism is the much better approach, especially when you have a lot of tasks to execute asynchronously. The overhead of starting a new thread every time is easily solved with a threadpool, but the efficient implementation of such a threadpool isn't trivial. Some libraries such as TBB or PPL offer good support for task-based parallelism. Threads can still be used for parallelism, but the more synchronisation you add, the more the serial, synchronised parts of the program limit the speedup you can gain from additional threads - this is known as Amdahl's law. As long as you spawn only a few threads doing additional work, I think you are fine working with threads, but as soon as a pattern of tasks occurs, you should think about switching to task-based concurrency.