Professional C++ - Marc Gregoire [401]
        // and/or we need to shut down this thread.
        lock.unlock();
        while (true) {
            lock.lock();
            if (mQueue.empty()) {
                break;  // Leave the inner loop with the lock held, ready for the next wait.
            } else {
                ofs << mQueue.front() << endl;
                mQueue.pop();
            }
            lock.unlock();
        }
        if (mExit)
            break;
    }
}
Code snippet from logger\FinalVersion\Logger.cpp
THREAD POOLS
Instead of creating and destroying threads dynamically throughout the lifetime of your program, you can create a pool of threads that are used as needed. This technique is often used in programs that want to handle some kind of event in a thread. In most environments, the ideal number of threads equals the number of processing cores. If there are more threads than cores, threads have to be suspended to allow other threads to run, which ultimately adds overhead. However, events may at times arrive faster than they can be processed. The solution in this case is to use the producer/consumer model, in which a number of pre-created threads wait for work to do.
Since not all processing is identical, it is not uncommon to have a thread that receives, as part of its input, a function object that represents the computation to be done.
Because all the threads are pre-existing, it is vastly more efficient for the operating system to schedule one to run than it is to create a new one in response to an input. Furthermore, the use of a thread pool allows you to manage the number of threads that are created, so depending on the platform, you may have as few as one thread or as many as 64.
Note that while the ideal number of threads is equal to the number of cores, this applies only in the case where the thread is compute bound and cannot block for any other reason, including I/O. When a thread can block, it is often appropriate to run more threads than there are cores. Determining the optimal number of threads in such cases may involve doing throughput measurements with the system under normal load conditions.
You can implement a thread pool in a way similar to an object pool. Chapter 24 gives an example implementation of an object pool. The implementation of a thread pool is left as an exercise for the reader.
THREADING DESIGN AND BEST PRACTICES
This section briefly mentions a couple of best practices related to multithreaded programming.
Before terminating the application, always use join() to wait for background threads to finish: Call join() on all background threads before terminating your application. This gives those threads the time to do proper cleanup. Background threads that are never joined are terminated abruptly when the main thread finishes.
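The following small sketch illustrates this guideline. The helper function computeInBackground() is a hypothetical name chosen for this example; the point is that join() blocks until the background thread has finished all its work:

```cpp
#include <thread>

// Launch a background computation and join before returning, so the work
// is guaranteed to be complete. Without the join(), destroying a still
// joinable std::thread calls std::terminate() and aborts the program.
int computeInBackground(int value)
{
    int result = 0;
    std::thread backgroundThread([&result, value] { result = value * 2; });
    // join() blocks until the thread has finished; only after that is it
    // safe to read result and to let the std::thread object be destroyed.
    backgroundThread.join();
    return result;
}
```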
The best synchronization is no synchronization: Multithreaded programming becomes much easier if you manage to design your different threads in such a way that all threads working on shared data read only from that shared data and never write to it, or only write to parts never read by other threads. In that case there is no need for any synchronization and you cannot have problems like race conditions or deadlocks.
Try to use the single-thread ownership pattern: This means that a block of data is owned by no more than one thread at a time. Owning the data means that no other thread is allowed to read/write to the data. When the thread is finished with the data, the data can be passed off to another thread, which now has sole and complete responsibility/ownership of the data. No synchronization is necessary in this case.
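One way to sketch this ownership transfer in C++ (names are illustrative, not from the book) is to move a std::unique_ptr into the thread that takes over the data. After the move, the original thread can no longer access the data, so no synchronization is needed:

```cpp
#include <memory>
#include <thread>
#include <vector>

// Single-thread ownership: this thread builds the data, then hands sole
// ownership to a consumer thread via std::move. After the transfer, only
// the consumer touches the data, so no lock is required.
long sumOwnedData()
{
    auto data = std::make_unique<std::vector<int>>(
        std::vector<int>{ 1, 2, 3, 4 });

    long sum = 0;
    std::thread consumer([ownedData = std::move(data), &sum] {
        for (int value : *ownedData) { sum += value; }  // Exclusive access.
    });
    // 'data' is now a null pointer here; this thread must not use it.
    consumer.join();
    return sum;
}
```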
Use atomic types and operations when possible: Atomic types and atomic operations make it easier to write race-condition-free and deadlock-free code, because they handle synchronization automatically. If atomic types and operations are not possible in your multithreaded design, and you need shared data, you have to use a mutual exclusion mechanism to ensure proper synchronization.
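As a brief sketch of this guideline (the function name is my own for illustration), several threads can safely increment one shared std::atomic counter without any mutex, because each increment is a single indivisible read-modify-write operation:

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Multiple threads increment one shared counter. Because the counter is
// std::atomic, no increments are lost and no mutex is needed; with a
// plain int, this would be a race condition.
int countAtomically(int numThreads, int incrementsPerThread)
{
    std::atomic<int> counter(0);
    std::vector<std::thread> threads;
    for (int i = 0; i < numThreads; ++i) {
        threads.emplace_back([&counter, incrementsPerThread] {
            for (int j = 0; j < incrementsPerThread; ++j) { ++counter; }
        });
    }
    for (auto& t : threads) { t.join(); }
    return counter.load();
}
```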
Use locks to protect mutable shared data: If you need mutable shared data to which multiple threads can write, and