I'm currently working on a wireless networking application in C++ and it's coming to a point where I'm going to want to multi-thread pieces of software under one process, rather than have them all in separate processes. Theoretically, I understand multi-threading, but I've yet to dive in practically.

What should every programmer know when writing multi-threaded code in C++?

  • Surely 'beginner' and 'multi-threaded' are mutually exclusive? – Skizz Jan 22 '10 at 15:28
  • IMO, they shouldn't be anymore. We're heading full-steam to multi-core CPUs, even on mobile devices. Since C++1x will definitely have concurrency features, these things should be known even to beginners. – Kensai Jan 22 '10 at 15:33
  • IMO: Multi-threaded programming is way too hard for even expert programmers. This is an area where the compiler should be doing the work. Rather than putting low-level primitives into the language, we need to be able to express parallelism at a higher level and allow the compiler to do the multi-threaded bit. – Martin York Jan 22 '10 at 15:50
  • @Skizz: You can be experienced with C++ and a beginner with multi-threaded programming. I wouldn't want a programming beginner to start writing multi-threaded programs. There's far too much that can go wrong and be missed on unpracticed code inspection and testing. – David Thornley Jan 22 '10 at 16:07
  • @David: Ah, the English language - so easy to obfuscate the intended meaning. I read it as a 'novice C++ programmer trying to write a multithreaded app' as opposed to an 'experienced programmer trying to write a multithreaded app'. Perhaps the question should have the trailing 'as a beginner' removed to clarify. Unless Mark is new to C++ as well as being new to multi-threaded coding. – Skizz Jan 22 '10 at 16:14
  • One advantage of having separate processes is added robustness (due to virtual memory). By going multi-threaded in a single process, you risk having the whole application go down because one thread corrupts the process's memory. You should consider message passing between your processes. – Emile Cormier Jan 22 '10 at 16:15
  • @David and Skizz: Not entirely new to programming, no worries on that front :) C++ isn't my go-to language but I get by fair enough in it. Multithreading is what I'm a beginner to – Mark Jan 22 '10 at 16:16
  • My opinion is that multithreaded programming is hard, but harder because of a lack of widespread knowledge and tools to manage it. I do not think intrinsics are the way out for procedural languages. – Paul Nathan Jan 22 '10 at 16:21
  • @Martin: you say that "Multi-threaded programming is way too hard for even expert programmers" and I wonder what your definition of "expert" is. I guess that expert C# or Java programmers will not attempt to dive into it if it requires them to become proficient C or C++ programmers, which is a more or less necessary prerequisite. If it were as hard as you say, why do OSs provide it? – Olof Forshell Jan 26 '11 at 13:39
  • @Emile: all software can be made robust, including multithreaded. Some multi-threaded software is even error-free. To the uninitiated I guess writing multithreaded and reentrant software appears daunting if not scary. – Olof Forshell Jan 26 '11 at 13:44
  • @Olof: I meant robust in the same sense as fault-tolerant. Going multi-process instead of multi-threaded reduces the coupling between subsystems (namely, shared memory). Please cite examples of multi-threaded software that has been proven error-free. Note that "no bugs detected yet" does not equate to "error-free". – Emile Cormier Jan 26 '11 at 18:02
  • @Olof Forshell: Because you need a set of base primitives on top of which you can build more complex ones. Your question is like asking why we provide access to assembly when we have C++. Higher-level constructs require building blocks of simpler, lower-level constructs with which to define the metaphors they represent. – Martin York Jan 26 '11 at 19:53
  • @Emile: robustness to me includes fault-tolerance, and you misquote me because I did not say that error-free means "no bugs detected yet." The key to writing error-free software is size and simplicity, so to answer your question in the simplest form I would say that several threads simultaneously executing within the same code, with that code doing simple tasks, are error-free. The threading mechanisms might not be. The key as I see it is code simplicity and having minimal safeguarding, because safeguarding tends to prevent execution bugs from occurring and being corrected. – Olof Forshell Jan 28 '11 at 11:19
  • @Olaf: I did not misquote you. I was making clear that, to me, the absence of detected bugs does not prove that a program is error free. You can only claim a program is error-free using a formal proof, or by exhaustively testing every possible state a program can be in (not only all combinations of inputs, but all the possible combinations of time slicing). I don't think this type of proof is even feasible in a multi-tasking program (whether it's threads or processes). – Emile Cormier Jan 28 '11 at 15:20
  • @Olaf: So it's practically impossible to prove correctness in a non-trivial, multi-tasking program. Therefore, in a safety critical system, you have to assume that some part of the program WILL fail. Using multiple processes (with it's separate virtual memory spaces) helps you isolate that failure to that particular subsystem, so that the whole system doesn't fail. You wouldn't want a fault in an X-ray machine user interface to affect the task that controls the X-ray power intensity, wouldn't you? – Emile Cormier Jan 28 '11 at 15:27
  • @Olaf: Of course, you don't necessarily always need a high level of fault tolerance. Separate processes might be overkill, and the risk inherent in multi-threading might be perfectly acceptable. That's why I suggested that the OP **consider** keeping multiple processes. I did not claim that multithreading was evil. :-) – Emile Cormier Jan 28 '11 at 15:33
  • @Olaf: You said "...safeguarding tends to prevent execution bugs from occurring and being corrected." How is this so? It's like saying that range-checking prevents you from detecting range error bugs! – Emile Cormier Jan 28 '11 at 15:40
  • @Olof: Oops, I misspelled your name a few times. I apologize. – Emile Cormier Jan 28 '11 at 15:46
  • @Emile: don't worry about it! Olaf is the Norwegian spelling, Olof the Swedish. Ideally your range-checking should've been handled in the UI because that's where the problem belongs. What I meant was that you should, when writing your MT code, assume that the data input thus far is without errors and that the only errors remaining to handle are those that have to do with your app's processing. This makes your app smaller and simpler and faster. Assuming the MT code has to also handle errors caused by all previous steps in the process will make the MT code big and bloated and complex ... – Olof Forshell Jan 29 '11 at 05:55
  • ... and effectively hinder the process of doing something about the original bug. For the situation you describe I think a safeguard could be put in temporarily not to handle the problem for "all eternity" but to isolate the original bug with the objective to correct it. Then you remove the safeguard. When a program crashes you know the bug is in it or somewhere else. A crashing program is a good thing if you want to achieve error-free MT (or any other for that matter) code. Then there is the problem of bugs in the safeguards themselves. The simpler the better. The smaller the better. – Olof Forshell Jan 29 '11 at 06:08
  • @Emile: "no bugs detected yet" could also mean "error-free, we just don't know it." – Olof Forshell Jan 29 '11 at 06:11

22 Answers


I would focus on designing the thing to be as partitioned as possible, so that you have a minimal amount of state shared across threads. If you make sure you don't have statics or other resources shared among threads (other than those you would be sharing anyway if you had designed this with processes instead of threads), you will be fine.

Therefore, while yes, you have to keep concepts like locks and semaphores in mind, the best way to tackle them is to avoid needing them.

  • Is it impossible to have static resources shared across threads? Or just not recommended? Is that not what the `volatile` keyword is for? (Correct me if I'm wrong, which I very well might be.) – Mark Jan 22 '10 at 15:29
  • It is possible, for example by defining a static variable at global scope. You can do that, and sometimes you have to, but in those cases you have to restrict access to one thread at a time, for example through a lock or semaphore. That typically introduces performance issues (because threads have to wait to enter a critical section). So, if you design the thing properly, you should be able to minimize multi-threading issues. – Ariel Jan 22 '10 at 15:45
  • `volatile` has nothing to do with multithreading. The keyword is meant for memory-mapped hardware - to ensure that when you write to a memory location mapped to some hardware device, the write is performed immediately, rather than being cached in a register initially. But that doesn't guarantee that the write is *atomic*, which would be a requirement for safely using it in a multithreaded context. In general, `volatile` is useless for threading. You should use memory barriers instead. Or ensure that your static resources are immutable. – jalf Jan 22 '10 at 16:05
  • `volatile` is also sometimes used for memory-mapped input variables, to ensure that the compiler doesn't optimize it away. I've done that on embedded-systems. – Paul Nathan Jan 22 '10 at 16:22
  • @jalf: `volatile` can be interpreted as meaning that the value might change externally, which could be very useful in a multithreaded context. Of course, it would be even more useful if it were atomic. – David Thornley Jan 22 '10 at 16:26
  • `volatile` is sometimes acceptable when you want a quick-n-dirty int/bool check in a loop of a value that some other thread writes, and you don't care about timing or ordering. No need for mutex then. – Macke Jan 22 '10 at 16:37
  • @David: No, it could be useful in a multithreaded context if it was atomic. Having one guarantee without the other is just useless. Without atomicity there's just no point. @Marcus: I can't imagine any case where ordering isn't relevant in a MT context. What if the bool is set before the data it's meant to guard, due to reordering? – jalf Jan 22 '10 at 16:52
  • Ultimately, volatile is too aggressive about an invariant we don't really need (that of immediately reading/writing through to memory). We need that *sometimes*, but not necessarily on every read or write. So a mem barrier is a more efficient solution there. But we also need to protect against reordering, and volatile doesn't do that, memory barriers do. So end result: 2-0 to mem barriers. Volatile is just pointless, offering half of what you need, and nothing you don't also get with other primitives. – jalf Jan 22 '10 at 16:54
  • Given that the questioner is a beginner in multi-threaded programming, I think volatile should be completely avoided. Even if it is appropriate for certain situations (I have my doubts), a beginner isn't going to be able to tell when those situations occur. – KeithB Jan 22 '10 at 17:37
  • @Ariel: the more elaborate safeguards you put into your app will often be in direct conflict with the advantages of multithreading it. I suggest a fine balance where you organize conflicting processing inside critical sections and keep these sections as short as possible. – Olof Forshell Jan 26 '11 at 13:49

I am no expert at all in this subject. Just some rules of thumb:

  1. Design for simplicity: bugs really are hard to find in concurrent code, even in the simplest examples.
  2. C++ offers you a very elegant paradigm for managing resources (mutexes, semaphores, ...): RAII. I have observed that it is much easier to work with boost::thread than with POSIX threads.
  3. Build your code to be thread-safe. If you don't, your program could behave strangely.
Khaled Alshaya
  • Boost threads are a wrapper over POSIX threads on UNIX systems, and Win32 threads on Windows systems. Boost threads are a C++ library and easy to use in C++ applications, whereas POSIX threads are a C library and require a lot of work on your end to play nice with your objects. – Dr. Watson Jan 22 '10 at 15:17
  • @Mark: the main advantages are portability, and a C++ interface - in particular, RAII classes to manage things like mutex locks. – Mike Seymour Jan 22 '10 at 15:19
  • @Mark Writing exception-safe code with POSIX is not possible, unless you manage to write your own classes. This is just an example, adding to what has been said above. – Khaled Alshaya Jan 22 '10 at 15:30
  • I think the question is "how do I design code to be thread safe?" – Max Lybbert Jan 22 '10 at 19:23
  • @Max Thread safe code is the job of the compiler not the programmer. do you mean "correct synchronized access to shared data?" – Khaled Alshaya Jan 22 '10 at 22:11
  • Another advantage of using boost::thread is that it's the basis for the threading code going into the new C++ standard... so code you write using boost::thread will behave similarly to, and have the same constructs as, thread code in the new standard. I also agree with the advantages others have stated... I highly recommend boost::thread for multithreading. – DigitalZebra Jan 22 '10 at 22:12

I am exactly in this situation: I wrote a library with a global lock (many threads, but only one running at a time in the library) and am refactoring it to support concurrency.

I have read books on the subject, but what I learned boils down to a few points:

  1. think parallel: imagine a crowd passing through the code. What happens when a method is called while already in action?
  2. think shared: imagine many people trying to read and alter shared resources at the same time.
  3. design: avoid the problems that points 1 and 2 can raise.
  4. never think you can ignore edge cases, they will bite you hard.

Since you cannot proof-test a concurrent design (because thread execution interleaving is not reproducible), you have to ensure that your design is robust by carefully analyzing the code paths and documenting how the code is supposed to be used.

Once you understand how and where you should bottleneck your code, you can read the documentation on the tools used for this job:

  1. Mutex (exclusive access to a resource)
  2. Scoped Locks (good pattern to lock/unlock a Mutex)
  3. Semaphores (passing information between threads)
  4. ReadWrite Mutex (many readers, exclusive access on write)
  5. Signals (how to 'kill' a thread or send it an interrupt signal, how to catch these)
  6. Parallel design patterns: boss/worker, producer/consumer, etc. (see Schmidt)
  7. platform specific tools: openMP, C blocks, etc

Good luck! Concurrency is fun, just take your time...

Anna B
  • Concurrency fun!!?? Man, you are brave :-) – Ariel Jan 22 '10 at 17:56
  • @Ariel: it's fun IF you accept to slow down, think and become creative. Like every difficult task, it's fun if you give yourself the time it needs to do things right. – Anna B Jan 22 '10 at 20:34
  • I agree. I find writing MT code much more interesting than single threaded. – Jan 22 '10 at 21:07
  • "it's fun if you give yourself the time it needs to do things right" - I couldn't have put it better. The challenges and rewards of getting many threads to work efficiently together and increasing the efficiency of the application makes this a field that's really really interesting to dive deeply into! – Olof Forshell Jan 29 '11 at 05:43

You should read about locks, mutexes, semaphores and condition variables.

One word of advice, if your app has any form of UI make sure you always change it from the UI thread. Most UI toolkits/frameworks will crash (or behave unexpectedly) if you access them from a background thread. Usually they provide some form of dispatching method to execute some function in the UI thread.

  • Could you possibly elaborate more on the UI aspect? The application does not currently have a GUI, but it's on my list of "to-do-after-everything-else-is-done" items. – Mark Jan 22 '10 at 15:12
  • There's not much to it. Just that UI frameworks are generally single-threaded to the extent that only one thread is even allowed to interact with the UI. Accessing any part of the UI from another thread is an error. – jalf Jan 22 '10 at 15:15
  • Most UIs are not thread safe. That is, you can't have a thread altering a control in the GUI and another thread doing something else, even with another window. So you have to specify one thread (usually the main thread) as the only one which can access the GUI, and the other threads have to go through the GUI thread to do anything with the GUI. – Mike DeSimone Jan 22 '10 at 15:17
  • Other things you should read about are deadlocks, priority inheritance, and race conditions. – Mike DeSimone Jan 22 '10 at 15:19
  • I'm using an open source C++ GUI framework (http://rawmaterialsoftware.com/juce.php) that doesn't complain about calling UI methods from other threads: you just have to use some global lock on the UI message manager thread while doing so. – Anna B Jan 22 '10 at 20:39
  • Gtk+ also provides a method of locking the GUI thread, thus allowing you to run GUI code in separate threads as well. –  Dec 14 '10 at 00:04

Never assume that external APIs are threadsafe. If it is not explicitly stated in their docs, do not call them concurrently from multiple threads. Instead, limit your use of them to a single thread or use a mutex to prevent concurrent calls (this is rather similar to the aforementioned GUI libraries).

Next point is language-related. Remember, C++ (currently) has no well-defined approach to threading. The compiler/optimizer does not know if code might be called concurrently. The volatile keyword is useful to prevent certain optimizations (e.g. caching of memory fields in CPU registers) in multi-threaded contexts, but it is not a synchronization mechanism.

I'd recommend Boost for synchronization primitives, rather than messing with platform APIs. The latter make your code difficult to port, because they offer similar functionality on all major platforms but with slightly different detailed behaviour. Boost solves these problems by exposing only the common functionality to the user.

Furthermore, if there's even the smallest chance that a data structure could be written to by two threads at the same time, use a synchronization primitive to protect it. Even if you think it will only happen once in a million years.

Alexander Gessler
  • "C++ has (currently) no well-defined approach to threading" I think this is incorrect. There are compilation flags available to allow multi-threading. Let's face it: if everybody were to go the third-party library or framework path no systems would ever be produced that would get the job done. – Olof Forshell Jan 26 '11 at 13:55
  • Well, individual compiler flags have nothing to do with the language specification. – Alexander Gessler Jan 26 '11 at 18:29
  • If there had been a well-defined approach you would lose a lot of the advantages of raw languages such as C and, to a lesser extent, C++. You want a well-defined approach, buy a third party library (as you propose) and be aware that there will be pros and cons (such as it might not be 100% applicable to what your application needs or may not use the best features of your OS, such as IOCPs in Windows). – Olof Forshell Jan 27 '11 at 08:50

One thing I've found very useful is to make the application configurable with regard to the actual number of threads it uses for various tasks. For example, if you have multiple threads accessing a database, make the number of those threads be configurable via a command line parameter. This is extremely handy when debugging - you can exclude threading issues by setting the number to 1, or force them by setting it to a high number. It's also very handy when working out what the optimal number of threads is.


Make sure you test your code on both single-CPU and multi-CPU systems.

Based on the comments:

  • Single socket, single core
  • Single socket, two cores
  • Single socket, more than two cores
  • Two sockets, single core each
  • Two sockets, combination of single-, dual- and multi-core CPUs
  • Multiple sockets, combination of single-, dual- and multi-core CPUs

The limiting factor here is going to be cost. Ideally, concentrate on the types of system your code is going to run on.

  • Ideally, different numbers of CPUs. Race conditions are hard to find, and running a variety of tasks on a variety of environments could help find them. – David Thornley Jan 22 '10 at 16:09
  • Ideally, test on > 2 CPUs. For some reason, 2 is a 'stable' number in math and things start getting funky > 2. – Paul Nathan Jan 22 '10 at 16:37
  • Also note that multi-CPU systems might reveal race conditions that'd never occur on a single CPU, multiple-core system. The added latency in communication between cores can throw things upside down. – jalf Jan 22 '10 at 18:48
  • A colleague recently discovered a race condition in our code just by running some stuff, which we had believed to be completely stable and reliable on our old 8-core Core2 systems, on a new 8-core i7 box. The change in execution time exposed the race. – timday Jan 22 '10 at 21:10
  • Excellent advice, test on both single processor and SMP, then various systems at that. Virtual machines can help a lot here. – Chris O Jan 22 '10 at 22:36

In addition to the other things mentioned, you should learn about asynchronous message queues. They can elegantly solve the problems of data sharing and event handling. This approach works well when you have concurrent state machines that need to communicate with each other.

I'm not aware of any message passing frameworks tailored to work only at the thread level. I've only seen home-brewed solutions. Please comment if you know of any existing ones.


One could use the lock-free queues from Intel's TBB, either as-is, or as the basis for a more general message-passing queue.

Emile Cormier

Since you are a beginner, start simple. First make it work correctly, then worry about optimizations. I've seen people try to optimize by increasing the concurrency of a particular section of code (often using dubious tricks), without ever looking to see if there was any contention in the first place.

Second, you want to work at as high a level as you can. Don't work at the level of locks and mutexes if you can use an existing master-worker queue. Intel's TBB looks promising, being slightly higher-level than pure threads.

Third, multi-threaded programming is hard. Reduce the areas of your code where you have to think about it as much as possible. If you can write a class such that objects of that class are only ever operated on in a single thread, and there is no static data, it greatly reduces the things that you have to worry about in the class.


A few of the answers have touched on this, but I wanted to emphasize one point: If you can, make sure that as much of your data as possible is only accessible from one thread at a time. Message queues are a very useful construct to use for this.

I haven't had to write much heavily-threaded code in C++, but in general, the producer-consumer pattern can be very helpful in utilizing multiple threads efficiently, while avoiding the race conditions associated with concurrent access.

If you can use someone else's already-debugged code to handle thread interaction, you're in good shape. As a beginner, there is a temptation to do things in an ad-hoc fashion - to use a "volatile" variable to synchronize between two pieces of code, for example. Avoid that as much as possible. It's very difficult to write code that's bulletproof in the presence of contending threads, so find some code you can trust, and minimize your use of the low-level primitives as much as you can.

Mark Bessey
  • +1 for producer-consumer. It combines data sharing and synchronization into one elegant solution. It works very well when the application follows the data-flow paradigm. – Emile Cormier Jan 26 '10 at 18:27
  • "make sure that as much of your data as possible is only accessible from one thread at a time" I think this is the wrong way to go. I would say concentrate the manipulation of thread-unsafe data to a short critical section and make sure that you have a thread-safe situation outside it. You want your threads going full tilt as much of the time as possible, not blocking each other. – Olof Forshell Jan 26 '11 at 14:08

My top tips for threading newbies:

  • If you possibly can, use a task-based parallelism library, Intel's TBB being the most obvious one. This insulates you from the grungy, tricky details and is more efficient than anything you'll cobble together yourself. The main downside is that this model doesn't support all uses of multithreading; it's great for exploiting multicore machines for compute power, less good if you want threads for waiting on blocking I/O.

  • Know how to abort threads (or in the case of TBB, how to make tasks complete early when you decide you didn't want the results after all). Newbies seem to be drawn to thread kill functions like moths to a flame. Don't do it... Herb Sutter has a great short article on this.


Make sure you know explicitly which objects are shared and how they are shared.

As much as possible, make your functions purely functional: that is, they have inputs and outputs and no side effects. This makes it much simpler to reason about your code. With a simple program it isn't such a big deal, but as complexity rises it becomes essential. Side effects are what lead to thread-safety issues.

Play devil's advocate with your code. Look at some code and think: how could I break this with some well-timed thread interleaving? At some point, this case will happen.

First learn thread-safety. Once you have that nailed down, move on to the hard part: concurrent performance. This is where moving away from global locks is essential. Figuring out ways to minimize and remove locks while still maintaining thread-safety is hard.

Matt Price
  • "Thread safe" functionality usually contains the same kinds of locks and mutexes that you're trying to minimize in your app, there's really nothing magic to them being thread-safe. At a more advanced programming level you take control of as much of the thread-safe issue by calling thread-unsafe functions from inside your own thread-safe area, thereby eliminating several (then) unnecessary locks/unlocks in the process. Conversely, thread-safe functions are called from your thread-unsafe areas. – Olof Forshell Jan 28 '11 at 13:05

Stay away from MFC and its multithreading + messaging library.
In fact, if you see MFC and threads coming toward you - run for the hills (*)

(*) Unless of course if MFC is coming FROM the hills - in which case run AWAY from the hills.

Martin Beckett

Keep things dead simple as much as possible. It's better to have a simpler design (maintenance, less bugs) than a more complex solution that might have slightly better CPU utilization.

Avoid sharing state between threads as much as possible, this reduces the number of places that must use synchronization.

Avoid false-sharing at all costs (google this term).

Use a thread pool so you're not frequently creating/destroying threads (that's expensive and slow).

Consider using OpenMP, Intel and Microsoft (possibly others) support this extension to C++.

If you are doing number crunching, consider using Intel IPP, which internally uses optimized SIMD functions (this isn't really multi-threading, but is parallelism of a related sort).

Have tons of fun.

Chris O
  • +1. Just wanted to write about thread pools! Some years ago, I found that you can create only a limited number of threads per process on a Win32 system - I mean not currently active threads, but threads with an empty body. So for systems that need to appear fault-tolerant, thread pools are practically essential. Besides, a thread pool is easier to debug… – Eugene Jan 23 '10 at 06:38
  • I've worked at a company that had a Web product written in C with (at high load) over one thousand threads in a process (executing program, I think we mean the same thing). For this application, and with this many threads, Win32's thread administration really bogged down. – Olof Forshell Jan 28 '11 at 11:25

The biggest "mindset" difference between single-threaded and multi-threaded programming, in my opinion, is in testing/verification. In single-threaded programming, people will often bash out some half-thought-out code, run it, and if it seems to work, call it good - and often get away with using it in a production environment.

In multithreaded programming, on the other hand, the program's behavior is non-deterministic, because the exact combination of timing of which threads are running for which periods of time (relative to each other) will be different every time the program runs. So just running a multithreaded program a few times (or even a few million times) and saying "it didn't crash for me, ship it!" is entirely inadequate.

Instead, when doing a multithreaded program, you always should be trying to prove (at least to your own satisfaction) that not only does the program work, but that there is no way it could possibly not work. This is much harder, because instead of verifying a single code-path, you are effectively trying to verify a near-infinite number of possible code-paths.

The only realistic way to do that without having your brain explode is to keep things as bone-headedly simple as you can possibly make them. If you can avoid multithreading entirely, do that. If you must multithread, share as little data between threads as possible, use proper multithreading primitives (e.g. mutexes, thread-safe message queues, wait conditions), and don't try to get away with half-measures: for example, trying to synchronize access to a shared piece of data using only boolean flags will never work reliably, so don't try it.

What you want to avoid is the multithreading hell scenario: the multithreaded program that runs happily for weeks on end on your test machine, but crashes randomly, about once a year, at the customer's site. That kind of race-condition bug can be nearly impossible to reproduce, and the only way to avoid it is to design your code extremely carefully to guarantee it can't happen.

Threads are strong juju. Use them sparingly.

Jeremy Friesner

You should have an understanding of basic systems programming, in particular:

  • Synchronous vs Asynchronous I/O (blocking vs. non-blocking)
  • Synchronization mechanisms, such as lock and mutex constructs
  • Thread management on your target platform
Dr. Watson
  • why does this guy get +1 and I get -4 when he is basically saying Locks in more words?? – WACM161 Jan 22 '10 at 21:14
  • @WACM161: *because* he's saying "locks in more words". Because saying "locks" is not helpful, and conveys zero information to someone who's not already familiar with locks. This answer says that you should have an understanding of locks as well as the other listed threading primitives. Yours didn't even say what it is you're supposed to do about locks. From reading your post, it's not clear whether the OP is supposed to understand locks, use locks or just shout "LOCKS!!!" while coding. – jalf Jan 22 '10 at 21:42

I found viewing the introductory lectures on OS and systems programming here by John Kubiatowicz at Berkeley useful.

  • 519
  • 4
  • 4

Part of my graduate study area relates to parallelism.

I read this book and found it a good summary of approaches at the design level.

At the basic technical level, you have two basic options: threads or message passing. Threaded applications are the easiest to get off the ground, since pthreads, Windows threads, or Boost threads are ready to go. However, threads bring with them the complexity of shared memory.

Message-passing usability seems mostly limited at this point to the MPI API. It sets up an environment where you can run jobs and partition your program between processors. It's more for supercomputer/cluster environments where there's no intrinsic shared memory. You can achieve similar results with sockets and so forth.

At another level, you can use language-level pragmas: the popular one today is OpenMP. I've not used it, but it appears to build threads in via preprocessing or a link-time library.

The classic problem here is synchronization: all the problems in multiprogramming come from the non-deterministic nature of multiprograms, which cannot be avoided.

See the Lamport timing methods for a further discussion of synchronizations and timing.

Multithreading is not something that only Ph.D.s and gurus can do, but you will have to be pretty decent to do it without creating insane bugs.

Paul Nathan
  • 39,638
  • 28
  • 112
  • 212

Before giving any advice on the dos and don'ts of multi-threaded programming in C++, I would like to ask a question: is there any particular reason you want to start writing the application in C++?

There are other programming paradigms where you utilize the multi-cores without getting into multi-threaded programming. One such paradigm is functional programming. Write each piece of your code as functions without any side effects. Then it is easy to run them in multiple threads without worrying about synchronization.

I am using Erlang for my development. It has increased my productivity by at least 50%. The code may not run as fast as code written in C++. But I have noticed that for most back-end offline data processing, speed is not as important as distribution of work and utilizing the hardware as much as possible. Erlang provides a simple concurrency model where you can execute a single function in multiple threads without worrying about synchronization issues. Writing multi-threaded code is easy, but debugging it is time-consuming. I have done multi-threaded programming in C++, but I am currently happy with the Erlang concurrency model. It is worth looking into.

  • 2,699
  • 3
  • 25
  • 42

I'm in the same boat as you: I am just starting multithreading for the first time as part of a project, and I've been looking around the net for resources. I found this blog to be very informative. Part 1 is pthreads, but I linked starting at the Boost section.

  • 380
  • 1
  • 4
  • 15

I have written a multithreaded server application and a multithreaded shellsort. They were both written in C and use NT's threading functions "raw", that is, without any function library in between to muddle things. They were two quite different experiences with different conclusions to be drawn. High performance and high reliability were the main priorities, although coding practices had a higher priority if one of the first two was judged to be threatened in the long term.

The server application had both a server and a client part and used I/O completion ports (IOCPs) to manage requests and responses. When using IOCPs it is important never to use more threads than you have cores. I also found that requests to the server part needed a higher priority so as not to lose any requests unnecessarily; once they were "safe" I could use lower-priority threads to create the server responses, and I judged that the client part could have an even lower priority. I asked the questions "what data can't I lose?" and "what data can I allow to fail, because I can always retry?"

I also needed to be able to interface with the application's settings through a window, and it had to be responsive. The trick was that the UI had normal priority, the incoming requests one less, and so on. My reasoning was that since I use the UI so seldom, it can have the highest priority, so that when I do use it, it responds immediately. Threading here turned out to mean that in the normal case all the separate parts of the program could run simultaneously, but when the system was under higher load, processing power shifted to the vital parts thanks to the prioritization scheme.

I've always liked shellsort, so please spare me the pointers about quicksort this or that, or about how shellsort is ill-suited for multithreading. Having said that, the problem I had was sorting a semi-large list of units in memory (for my tests I used a reverse-sorted list of one million units of forty bytes each). Using a single-threaded shellsort I could sort them at a rate of roughly one unit every two µs (microseconds). My first attempt at multithreading used two threads (though I soon realized that I wanted to be able to specify the number of threads), and it ran at about one unit every 3.5 µs, that is to say SLOWER. Using a profiler helped a lot, and one bottleneck turned out to be the statistics logging (i.e. compares and swaps), where the threads would bump into each other. Dividing up the data between the threads in an efficient way turned out to be the biggest challenge, and there is definitely more I can do there, such as dividing the vector containing the indices to the units into cache-line-sized chunks, and perhaps also comparing all indices in two cache lines before moving to the next line (at least I think there is something I can do there; the algorithms get pretty complicated). In the end, I achieved a rate of one unit every microsecond with three simultaneous threads (four threads was about the same; I only had four cores available).

As to the original question my advice to you would be

  1. If you have the time, learn the threading mechanism at the lowest possible level.
  2. If performance is important learn the related mechanisms that the OS provides. Multi-threading by itself is seldom enough to achieve an application's full potential.
  3. Use profiling to understand the quirks of multiple threads working on the same memory.
  4. Sloppy architectural work will kill any app, regardless of how many cores and systems you have executing it and regardless of the brilliance of your programmers.
  5. Sloppy programming will kill any app, regardless of the brilliance of the architectural foundation.
  6. Understand that using libraries lets you reach the development goal faster, but at the price of less understanding and (usually) lower performance.
Olof Forshell
  • 3,169
  • 22
  • 28

Make sure you know what volatile means and its uses (which may not be obvious at first).

Also, when designing multithreaded code, it helps to imagine that an infinite number of processors is executing every single line of code in your application at once (or at least every line that is reachable according to your program's logic), and that for everything not marked volatile, the compiler performs a special optimization so that only the thread that changed it can read or set its true value, while all the other threads get garbage.

  • 62,085
  • 98
  • 303
  • 499
  • 1
    Way to mislead the OP. `volatile` has *nothing* to do with multithreading. It is not intended for multithreading, and it has no properties relevant to multithreading. – jalf Jan 23 '10 at 01:55