- Item 27: Use Async Methods for Async Work
- Item 28: Never Write async void Methods
- Item 29: Avoid Composing Synchronous and Asynchronous Methods
- Item 30: Use Async Methods to Avoid Thread Allocations and Context Switches
- Item 31: Avoid Marshalling Context Unnecessarily
- Item 32: Compose Asynchronous Work Using Task Objects
- Item 33: Consider Implementing the Task Cancellation Protocol
- Item 34: Cache Generalized Async Return Types
Task-based asynchronous programming provides new idioms for composing applications from asynchronous building blocks. In this excerpt, learn eight techniques from renowned C# expert Bill Wagner to make your work easier than ever before.
Many of our programming tasks involve starting and responding to asynchronous work. We work on distributed programs that span threads, processes, containers, virtual machines, and physical machines. Asynchronous programming, however, is not synonymous with multithreaded programming. Modern programming means mastering asynchronous work, whether that work is awaiting the next network packet or awaiting user input.
The C# language, along with some classes in the .NET Framework, provides tools that make asynchronous programming easier. Asynchronous programming can be challenging, but when you remember a few important practices, it becomes easier than it has ever been.
Item 27: Use Async Methods for Async Work
Async methods offer an easier way to construct asynchronous algorithms. You write the core logic for an asynchronous method as though it were a synchronous method. However, the execution path is not the same as a synchronous method. That is, with a synchronous method, you write sequences of instructions and expect those instructions to execute in order, in the same way you wrote them. That’s not necessarily the case with async methods. Async methods may return before executing all the logic you wrote. Then, at some later time in response to a task completing, the method picks up execution where it left off, while your program has continued in its normal flow. If you have no understanding of this process, it can seem like magic. With a little understanding, it may seem very confusing and generate more questions than it answers. Read on so that you can fully understand how the compiler transforms your code into async methods. You’ll learn how to analyze async code by appreciating the core algorithms it describes, and gain the skills to understand how the code executes as it follows through those instructions and tasks.
Let’s start with the simplest example: an async method that actually executes synchronously. Consider this method:
```csharp
static async Task SomeMethodAsync()
{
    Task<int> awaitable = SomeMethodReturningTask();
    Console.WriteLine("In SomeMethodAsync, before the await");
    var result = await awaitable;
    Console.WriteLine("In SomeMethodAsync, after the await");
}
```
In some cases, the asynchronous work may complete before the first task is awaited. The library designer may have built a cache, and you may be retrieving a value already loaded there. When you await the initial task, the task has already completed, so execution continues synchronously on the next instruction. The remainder of the method executes to completion, the result is packaged in a Task object, and that Task is returned. Everything happens synchronously: this method returns a completed Task, so the caller awaiting that task also continues synchronously. So far, this process should be familiar to any developer.
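A cache is the classic source of already-completed tasks. This sketch (the cache and method names are illustrative, not from the book) shows an async method whose second call returns a completed Task, so the caller's await continues synchronously:

```csharp
using System;
using System.Threading.Tasks;

int? cached = null;

// Hypothetical cache: after the first load, the method returns a completed
// Task, so awaiting it continues synchronously with no suspension.
async Task<int> GetPriceAsync()
{
    if (cached.HasValue)
        return cached.Value;        // completed Task: the synchronous path

    await Task.Delay(50);           // simulate I/O on the first call only
    cached = 42;
    return cached.Value;
}

int first = await GetPriceAsync();  // suspends at the Task.Delay
int second = await GetPriceAsync(); // already cached: runs synchronously
Console.WriteLine(first);           // 42
Console.WriteLine(second);          // 42
```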
But what happens in that same method if the result isn’t available when the task is awaited? Then, the flow of control becomes more complicated. Before async and await language support, you’d have to configure some callback to process the return from an asynchronous task. This could take the form of either an event handler or a delegate of some kind. Now, it’s much easier. To explore the asynchronous processing, let’s first look at what happens conceptually, without any concern for how the language implements this behavior.
When the await instruction is reached, the method returns. It returns a Task object that indicates the asynchronous work has not yet completed. Here’s where the magic happens: When the awaited task completes, this method continues execution on the next instruction after the await. It continues to do its work, and upon completion of that work, updates the Task object that it returned earlier with the completed result. This task now notifies any code awaiting it that it has completed. Those code segments can then also continue from the point where they were interrupted by awaiting this task.
The best way to explore the control flow during this process is to walk through some samples in a debugger. Step through code with await expressions and see how the execution flow proceeds.
You might also find an analogy with real-world asynchronous tasks useful. Consider the tasks for creating a homemade pizza. You start by synchronously making the dough. Then, you can start an asynchronous process to let the dough rise. After that task is under way, you can continue to make the sauce. Once you’ve made the sauce, you can await the completion of the dough rising task. Then, you can start an asynchronous task to heat up the oven. While it starts, you can assemble the pizza. Finally, after waiting for the oven to reach the correct temperature, you put the pizza in the oven to cook.
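The pizza workflow above can be sketched directly in code, with Task.Delay standing in for the real waits (all names here are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

var log = new List<string>();
Task LetDoughRiseAsync() => Task.Delay(300);   // stand-ins for real waiting
Task PreheatOvenAsync() => Task.Delay(200);

log.Add("dough mixed");             // synchronous work first
Task rising = LetDoughRiseAsync();  // start the rise, but don't wait yet
log.Add("sauce done");              // overlaps with the rising dough
await rising;                       // await the dough before assembling

Task heating = PreheatOvenAsync();  // start heating the oven
log.Add("pizza assembled");         // overlaps with the preheating
await heating;                      // wait for the right temperature
log.Add("pizza in the oven");

Console.WriteLine(string.Join(", ", log));
```

Notice that each task is started as early as possible and awaited as late as possible; the awaits mark the points where the work genuinely cannot proceed without the result.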
Now, let’s remove the magic by explaining how this process is implemented. When the compiler processes an async method, it builds mechanisms to start asynchronous work, and continue further instructions when that async work has completed. The interesting changes occur in the await expression. The compiler builds data structures and delegates its work so that execution can continue at the next instruction following the await expression. The data structures ensure that the values of all local variables are preserved. The compiler configures a continuation based on the awaited task, such that the continuation jumps back into the method in the same location when the task is completed. Effectively, the compiler generates a delegate for the code that follows the await expression. The compiler writes state information to ensure that when the awaited task completes, the delegate is invoked.
When the awaited task completes, it raises the event that indicates it has completed. The method is re-entered, its state is restored, and execution jumps to the instruction following the await, so the code appears to pick up where it left off. This is similar to what happens when execution continues after a synchronous call: the state is set for that method, and execution continues at the point following the method call. When the remainder of the method executes, it completes its work, updates the previously returned Task object, and raises the events that signal its completion.
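You can approximate what the compiler does by registering the continuation by hand. The generated code is actually a state machine, not a ContinueWith call, but this hand-rolled sketch shows the same split at the await point:

```csharp
using System;
using System.Threading.Tasks;

// The async/await version: one method, split at the await.
async Task<int> WithAwaitAsync(Task<int> work)
{
    Console.WriteLine("before the await");
    int result = await work;               // method may return to its caller here
    Console.WriteLine("after the await");  // the continuation
    return result + 1;
}

// Roughly the same control flow, written by hand: the code after the
// await becomes a delegate attached to the awaited task.
Task<int> WithContinuation(Task<int> work)
{
    Console.WriteLine("before the await");
    return work.ContinueWith(t =>
    {
        Console.WriteLine("after the await");
        return t.Result + 1;
    });
}

int a = await WithAwaitAsync(Task.FromResult(41));
int b = await WithContinuation(Task.FromResult(41));
Console.WriteLine(a == b);   // True
```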
When the task completes, the notification mechanism resumes the async method so it continues execution. The SynchronizationContext class is responsible for implementing this behavior. It ensures that when an asynchronous method resumes after an awaited task completes, the environment and context match those in effect when the method paused. Effectively, the context “brings you back where you were.” The compiler generates the code that uses the SynchronizationContext to bring you back to the desired state. Before an async method pauses at an await, the generated code captures the current SynchronizationContext, using the static Current property. When the awaited task completes, the remaining code is posted as a delegate to that same SynchronizationContext, which schedules the work using the appropriate means for the environment. In a GUI application, the SynchronizationContext uses the Dispatcher to schedule the work (see Item 37). In a Web context, the SynchronizationContext uses the thread pool and QueueUserWorkItem (see Item 35). In a console application, where there is no SynchronizationContext, the work continues on a thread-pool thread. Notice that some contexts have multiple threads, whereas others have a single thread and schedule work cooperatively.
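You can observe the console-application case directly. In a plain console program, SynchronizationContext.Current is typically null, so there is no context to post back to and the continuation runs on whatever thread-pool thread completed the task:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// In a plain console app there is usually no SynchronizationContext.
Console.WriteLine(SynchronizationContext.Current is null);

int before = Environment.CurrentManagedThreadId;
await Task.Delay(10);                 // forces a real suspension
int after = Environment.CurrentManagedThreadId;

// With no captured context, the continuation is free to run on a
// different (thread-pool) thread. A GUI context would instead have
// posted the continuation back to the single UI thread.
Console.WriteLine($"before: {before}, after: {after}");
```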
If the awaited asynchronous task has faulted, the exception that faulted the task is rethrown in the code posted to the SynchronizationContext; it surfaces when that continuation executes. As a consequence, a task that is never awaited never has its exceptions observed: no continuation is scheduled, so the exception sits stored in the faulted Task and is never rethrown. For that reason, it’s always important to await any tasks you start: awaiting is the best way to observe any exceptions thrown from the asynchronous work.
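This small sketch demonstrates the difference. The fire-and-forget task faults silently; only the await surfaces the exception:

```csharp
using System;
using System.Threading.Tasks;

async Task FailAsync()
{
    await Task.Delay(10);
    throw new InvalidOperationException("boom");
}

// Fire-and-forget: the exception is stored in the Task, but nothing here throws.
Task forgotten = FailAsync();
await Task.Delay(100);                // give the task time to fault

string caught = "";
try
{
    await forgotten;                  // awaiting is what surfaces the exception
}
catch (InvalidOperationException e)
{
    caught = e.Message;
}

Console.WriteLine(forgotten.IsFaulted);  // True
Console.WriteLine(caught);               // boom
```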
This same strategy is extended further when methods have multiple await expressions. Each await expression may cause the async method to return to the caller with the task still uncompleted. The internal state is updated so that when the routine is continued again, execution begins at the correct spot. As when there is only a single await expression, the synchronization context determines how the remaining work is scheduled: either on the single thread in the context or on a different thread.
The language writes the same kind of code that you would write to register for notifications when asynchronous work completes. It does so in a standard manner, which makes it easier to read the code as though it were synchronous.
The path described up to this point assumes that all asynchronous work completes successfully. Of course, that doesn’t always happen. Sometimes, exceptions are thrown. Async methods must handle those conditions as well. That necessity complicates control flow, because an async method may have returned to its caller before completing all its work. It must somehow inject any exceptions into the call stack. Inside an async method, the compiler generates a try/catch block that catches all exceptions. All exceptions are stored in an AggregateException that is a member of the Task object, and the Task is set to the faulted state. When a faulted Task is awaited, the await expression throws the first exception in the aggregate exception object. In the most common case, there is only one exception, and it is thrown in the caller’s context. If there are multiple exceptions, the caller must unpackage the aggregate exception and examine each one (see Item 34).
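The contrast between the stored AggregateException and the unwrapped exception thrown by await can be seen side by side. Blocking APIs such as Wait() surface the wrapper; await throws the first inner exception directly:

```csharp
using System;
using System.Threading.Tasks;

async Task ThrowAsync()
{
    await Task.Yield();
    throw new InvalidOperationException("stored in the Task");
}

Task faulted = ThrowAsync();

int innerCount = 0;
try
{
    faulted.Wait();                   // blocking API: throws the wrapper
}
catch (AggregateException ae)
{
    innerCount = ae.InnerExceptions.Count;   // the stored exception(s)
}

string awaited = "";
try
{
    await faulted;                    // await: unwraps and throws the first
}
catch (InvalidOperationException e)
{
    awaited = e.Message;
}

Console.WriteLine(innerCount);        // 1
Console.WriteLine(awaited);           // stored in the Task
```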
This asynchronous mechanism can be overridden by using certain Task APIs. If you really must wait for a Task to complete, you can call the Task.Wait() API, or you can examine the Task<T>.Result property. Either of those will block until all asynchronous work has completed, which can be useful for a Main() method in a console application. Item 35 describes how these APIs can cause deadlocks and why they should be avoided.
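A console application is the one place where this blocking is ordinarily safe, because there is no SynchronizationContext to deadlock against. A minimal sketch:

```csharp
using System;
using System.Threading.Tasks;

async Task<int> ComputeAsync()
{
    await Task.Delay(50);
    return 42;
}

// Blocking bridge for code that cannot be async, such as an older Main():
Task<int> task = ComputeAsync();
int result = task.Result;    // blocks the calling thread until the task completes

// Safe here only because a console app has no SynchronizationContext;
// the same pattern in a GUI app can deadlock (see Item 35).
Console.WriteLine(result);   // 42
```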
The compiler doesn’t perform magic when you create asynchronous methods using the async and await keywords. Instead, it does considerable work, generating a substantial amount of code to handle continuations, error reporting, and resuming methods. The benefit of all this compiler manipulation is that work appears to pause when asynchronous work is not completed, and resumes when that asynchronous work is ready. This pause can travel as far up the call stack as needed, as long as Task objects are awaited. The magic works well, unless you override it.