So far, multithreading may sound like a free lunch, but as we all know there is no such thing. Let's look at a couple of the downsides of multithreading.
One downside is that thread switching consumes resources. Each time the CPU switches from one thread to another, the state of the current thread must be saved so that it can be resumed later, and the state of the new thread must be loaded into the CPU registers. Adding a single extra thread to improve responsiveness will not cause any noticeable problem, but if you use, say, 20 threads in a complex statistical analysis program, the overhead of all that context switching may nullify the theoretical advantages of multithreading.
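To make the trade-off concrete, here is a minimal sketch (in Python rather than a .NET language, purely for brevity) that splits a CPU-bound sum across a configurable number of worker threads. The function name and the slicing scheme are my own illustration, not from the text. The answer is the same whether you use 2 threads or 20; what changes is how often the scheduler must save and restore thread state along the way.

```python
import threading

def parallel_sum(n, workers):
    """Sum 1..n by giving each worker thread a private slice and a private result slot."""
    partial = [0] * workers          # one slot per thread: no shared mutable state
    chunk = n // workers

    def work(i):
        start = i * chunk + 1
        end = n if i == workers - 1 else (i + 1) * chunk
        partial[i] = sum(range(start, end + 1))

    threads = [threading.Thread(target=work, args=(i,)) for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                     # every switch between these threads costs a save/restore
    return sum(partial)

# Correct either way; the 20-thread version just pays more switching overhead.
print(parallel_sum(1_000_000, 2))
print(parallel_sum(1_000_000, 20))
```

(CPython's global interpreter lock adds its own limits on CPU-bound threading, but the point here is only that the extra threads add bookkeeping, not extra answers.)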
The second downside is that programming with multiple threads can be complex. One problem that faces programmers is the deadlock, in which two threads are each waiting for a resource the other holds, so neither can ever proceed. A related problem, the race condition, occurs when the program's result depends on which of two or more threads happens to finish first. While .NET provides classes and tools to deal with these and other problems of thread management and synchronization, it remains a complex business.
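Both hazards, and their standard cures, can be sketched in a few lines (again in Python for brevity; the .NET analogues are the `lock` statement and the synchronization classes in `System.Threading`). The names below are illustrative. A deadlock needs a cycle of waiting, so acquiring locks in one fixed global order prevents it; a race on a shared counter is prevented by holding a lock across the read-modify-write.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def use_both_resources():
    # Deadlock cure: every thread acquires lock_a before lock_b.
    # With a fixed order there can be no cycle of threads each
    # holding one lock while waiting for the other.
    with lock_a:
        with lock_b:
            pass  # critical section touching both resources

counter = 0
counter_lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with counter_lock:   # without this lock, two threads can interleave
            counter += 1     # the read-modify-write and lose updates (a race)

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
threads += [threading.Thread(target=use_both_resources) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # deterministic, because the race has been locked out
```

Remove `counter_lock` and the final count can come up short on some runs, which is exactly the kind of intermittent, hard-to-reproduce bug that makes multithreaded code a complex business.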
The bottom line, for me at least, is to use multithreading only when there is a clear advantage to doing so, and then to use as few threads as possible. The good news is that these ugly problems are not a factor when you use multithreading to improve program responsiveness.