“When you begin using multi-threading throughout an application, the importance of clean architecture and design is critical. . . . This places an emphasis on understanding not only the platform’s capabilities but also emerging best practices. Joe does a great job interspersing best practices alongside theory throughout his book.”
– From the Foreword by Craig Mundie, Chief Research and Strategy Officer, Microsoft Corporation
Author Joe Duffy has risen to the challenge of explaining how to write software that takes full advantage of concurrency and hardware parallelism. In Concurrent Programming on Windows, he explains how to design, implement, and maintain large-scale concurrent programs, primarily using C# and C++ for Windows.
Duffy aims to give application, system, and library developers the tools and techniques needed to write efficient, safe code for multicore processors. This is important not only for the kinds of problems where concurrency is inherent and easily exploitable—such as server applications, compute-intensive image manipulation, financial analysis, simulations, and AI algorithms—but also for problems that can be sped up using parallelism but require more effort—such as math libraries, sort routines, report generation, XML manipulation, and stream processing algorithms.
Concurrent Programming on Windows has four major sections: the first introduces concurrency at a high level; the second focuses on the fundamental platform features, inner workings, and API details; the third describes common patterns, best practices, algorithms, and data structures that emerge while writing concurrent software; and the final section covers many of the common system-wide architectural and process concerns of concurrent programming.
This is the only book you’ll need in order to learn the best practices and common patterns for programming with concurrency on Windows and .NET.
Table of Contents
Foreword xix
Preface xxiii
Acknowledgments xxvii
About the Author xxix
Part I: Concepts 1
Chapter 1: Introduction 3
Why Concurrency? 3
Program Architecture and Concurrency 6
Layers of Parallelism 8
Why Not Concurrency? 10
Where Are We? 11
Chapter 2: Synchronization and Time 13
Managing Program State 14
Synchronization: Kinds and Techniques 38
Where Are We? 73
Part II: Mechanisms 77
Chapter 3: Threads 79
Threading from 10,001 Feet 80
The Life and Death of Threads 89
Where Are We? 124
Chapter 4: Advanced Threads 127
Thread State 127
Inside Thread Creation and Termination 152
Thread Scheduling 154
Where Are We? 180
Chapter 5: Windows Kernel Synchronization 183
The Basics: Signaling and Waiting 184
Using the Kernel Objects 211
Where Are We? 251
Chapter 6: Data and Control Synchronization 253
Mutual Exclusion 255
Reader/Writer Locks (RWLs) 287
Condition Variables 304
Where Are We? 312
Chapter 7: Thread Pools 315
Thread Pools 101 316
Windows Thread Pools 323
CLR Thread Pool 364
Performance When Using the Thread Pools 391
Where Are We? 398
Chapter 8: Asynchronous Programming Models 399
Asynchronous Programming Model (APM) 400
Event-Based Asynchronous Pattern 421
Where Are We? 427
Chapter 9: Fibers 429
An Overview of Fibers 430
Using Fibers 435
Additional Fiber-Related Topics 445
Building a User-Mode Scheduler 453
Where Are We? 473
Part III: Techniques 475
Chapter 10: Memory Models and Lock Freedom 477
Memory Load and Store Reordering 478
Hardware Atomicity 486
Memory Consistency Models 506
Examples of Low-Lock Code 520
Where Are We? 541
Chapter 11: Concurrency Hazards 545
Correctness Hazards 546
Liveness Hazards 572
Where Are We? 609
Chapter 12: Parallel Containers 613
Fine-Grained Locking 616
Lock Free 632
Coordination Containers 640
Where Are We? 654
Chapter 13: Data and Task Parallelism 657
Data Parallelism 659
Task Parallelism 684
Message-Based Parallelism 719
Cross-Cutting Concerns 720
Where Are We? 732
Chapter 14: Performance and Scalability 735
Parallel Hardware Architecture 736
Speedup: Parallel vs. Sequential Code 756
Spin Waiting 767
Where Are We? 781
Part IV: Systems 783
Chapter 15: Input and Output 785
Overlapped I/O 786
I/O Cancellation 822
Where Are We? 826
Chapter 16: Graphical User Interfaces 829
GUI Threading Models 830
.NET Asynchronous GUI Features 837
Where Are We? 860
Part V: Appendices 863
Appendix A: Designing Reusable Libraries for Concurrent .NET Programs 865
The 20,000-Foot View 866
The Details 867
Appendix B: Parallel Extensions to .NET 887
Task Parallel Library 888
Parallel LINQ 910
Synchronization Primitives 915
Concurrent Collections 924
Index 931