- By Bart De Smet
- May 30, 2013
Your First Application: Take Two
To continue our tour through Visual Studio 2012, let’s make things a bit more concrete and redo our little Hello C# application inside the IDE.
New Project Dialog
The starting point to create a new application is the New Project dialog, which can be found through File, New, Project or invoked by Ctrl+Shift+N. A link is available from the Projects tab on the Start Page, too. A whole load of different project types are available, also depending on the edition used and the installation options selected. Actually, the number of available project templates has grown so much over the years that the dialog was redesigned in Visual Studio 2010 to include features such as search.
Because I’ve selected Visual C# as my preferred language at the first start of Visual Studio 2012, the C# templates are shown immediately. (For other languages, scroll down to the Other Languages entry on the left.) Subcategories are used to organize the various templates. Under the Windows category, we find the following commonly used project types:
- Console Application is used to create command-line application executables. This is what we’ll use for our Hello application.
- Class Library provides a way to build assemblies with a .dll extension that can be used from various other applications (for example, to provide APIs).
- Portable Class Library is new in Visual Studio 2012 and is used to create class libraries that can run on multiple .NET Framework flavors (such as Silverlight, Windows Phone, .NET 4.5, and so on).
- Windows Forms Application creates a project for a GUI-driven application based on the Windows Forms technology, targeting the classic Windows desktop.
- WPF Application is another template for GUI applications but based on the new and more powerful WPF framework, also targeting the classic Windows desktop.
Visual Studio 2012 adds the Windows Store category with templates used to build applications targeting the Windows 8 platform:
- Different XAML templates are available as starting points for the GUI design of a Windows Store application (for example, using a grid or a split view).
- Class Library (Windows Store apps) gives you a way to build class library assemblies that you can reuse across different Windows Store app projects.
- When you are writing web applications, the Web category is a good starting point, providing different templates for ASP.NET-based applications.
We cover other types of templates, too, but for now those are the most important ones to be aware of. Figure 3.22 shows the New Project dialog, where you pick the project type of your choice.
Notice the .NET Framework 4.5 drop-down at the top of the dialog. This is where the multitargeting support of Visual Studio comes in. From this list, you can choose to target older versions of the framework, all the way back to 2.0. Give it a try: Select the 2.0 version of the framework and see how the dialog filters out project types that are not supported on that version of the framework.
For now, keep .NET Framework 4.5 selected, mark the Console Application template, and specify Hello as the name for the project. Notice the Create Directory for Solution check box. Stay tuned. We’ll get to the concept of projects and solutions in a while. Just leave it as is for now. Figure 3.23 shows the result of creating the new project.
Once the project has been created, it is loaded, and the first (and in this case, only) relevant file of the project shows up. In our little console application, this is the Program.cs file containing the managed code entry point.
Notice how an additional toolbar (known as the Text Editor toolbar), extra toolbar items (mainly for debugging), and menus have been made visible based on the context we’re in now.
With the new project created and loaded, make the Solution Explorer (usually docked on the right side) visible, as shown in Figure 3.24. Slightly simplified, Solution Explorer is a mini file explorer that shows all the files that are part of the project. In this case, that’s just Program.cs. Besides the files in the project, other nodes are shown as well:
- Properties provides access to the project-level settings (see later) and reveals a code file called AssemblyInfo.cs that contains assembly-level attributes, something we discuss in Chapter 25.
- References is a collection of assemblies the application depends on. Notice that by default quite a few references to commonly used class libraries are added to the project, also depending on the project type.
Figure 3.24. Solution Explorer.
So, what’s the relation between a solution and a project? Fairly simple: Solutions are containers for one or more projects. In our little example, we have just a single Console Application project within its own solution. The goal of solutions is to be able to express relationships between dependent projects. For example, a Class Library project might be referred to by a Console Application in the same solution. Having them in the same solution makes it possible to build the whole set of projects all at once.
Although we don’t need to reconfigure project properties at this point, let’s take a quick look at the project configuration system. Double-click the Properties node for our Hello project in Solution Explorer (or right-click and select Properties from the context menu). Figure 3.25 shows the Build tab in the project settings.
As a concrete example of some settings, I’ve selected the Build tab on the left, but feel free to explore the other tabs at this point. The reason I’m highlighting the Build configuration at this point is to stress the relationship between projects and build support, as will be detailed later on.
Time to take a look at the center of our development activities: writing code. Switch back to Program.cs and take a look at the skeleton code that has been provided:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Hello
{
    class Program
    {
        static void Main(string[] args)
        {
        }
    }
}
There are a few differences with the code we started from when writing our little console application manually.
First of all, more namespaces with commonly used types have been imported by means of using directives. Second, a namespace declaration is generated to contain the Program class. We talk about namespaces in more detail in the next chapters, so don't worry about this for now. Finally, the Main entry point has a different signature: Instead of taking no arguments, it now takes a string array that will be populated with command-line arguments passed to the resulting executable. Because we don't really want to use command-line arguments, this doesn't matter much to us. We discuss the possible signatures for the managed code entry point in Chapter 4, "Language Essentials," in the section "The Entry Point."
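As a quick reference, the shapes the entry point can take are sketched below; this is an illustrative summary, not the Chapter 4 listing itself:

```csharp
using System;

class Program
{
    // The CLR accepts four Main signatures; exactly one entry point per executable:
    //   static void Main()
    //   static int  Main()               // the int return value becomes the process exit code
    //   static void Main(string[] args)
    //   static int  Main(string[] args)
    static int Main(string[] args)
    {
        // args contains the command-line arguments, excluding the executable name.
        foreach (string arg in args)
            Console.WriteLine(arg);
        return 0; // zero conventionally signals success
    }
}
```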
Let’s write the code for our application now. Recall the three lines we wrote earlier:
static void Main()
{
    Console.Write("Enter your name: ");
    string name = Console.ReadLine();
    Console.WriteLine("Hello " + name);
}
As you enter this code in the editor, you’ll observe a couple of things. One little feature is auto-indentation, which positions the cursor inside the Main method indented a bit more to the right than the opening curly brace of the method body. This enforces good indentation practices (the behavior of which you can control through the Tools, Options dialog). More visible is the presence of IntelliSense. As soon as you type the member lookup dot operator after the Console type name, a list of available members appears that filters out as you type. Figure 3.26 shows IntelliSense in action.
After you’ve selected the Write method from the list (note you can press Enter or the spacebar as soon as the desired member is selected in the list to complete it further) and you type the left parenthesis to supply the arguments to the method call, IntelliSense pops up again showing all the available overloads of the method. You learn about overloading in Chapter 10, “Methods,” so just type the “Enter your name:” string.
IntelliSense will help you with the next two lines of code in a similar way as it did for the first. As you type, notice different tokens get colorized differently. Built-in language keywords are marked with blue, type names (like Console) have a color that I don’t know the name of but that looks kind of lighter bluish, and string literals are colored with a red-brown color. Actually, you can change all those colors through the Tools, Options dialog.
Figure 3.27. Reducing the clutter of excessive imported namespaces.
Another great feature about the code editor is its background compilation support. As you type, a special C# compiler is running constantly in the background to spot code defects early. Suppose we have a typo when referring to the name variable; it will show up almost immediately, marked by red squiggles, as shown in Figure 3.28.
If you’re wondering what the yellow border on the left side means, it simply indicates the lines of code you’ve changed since the file was opened and last saved. If you press Ctrl+S to save the file now, you’ll see the lines marked green. This feature helps you find code you’ve touched in a file during a coding session by providing a visual cue, which is quite handy if you’re dealing with large code files.
As software complexity grows, so does the build process: Besides the use of large numbers of source files, extra tools are used to generate code during a build, references to dependencies need to be taken care of, resulting assemblies must be signed, and so on. You probably don’t need further convincing that having integrated build support right inside the IDE is a great thing.
In Visual Studio, build is integrated tightly with the project system because that’s ultimately the place where source files live, references are added, and properties are set. To invoke a build process, either use the Build menu (see Figure 3.29) or right-click the solution or a specific project node in Solution Explorer. A shortcut to invoke a build for the entire solution is F6.
Figure 3.29. Starting a build from the project node context menu.
Behind the scenes, this build process figures out which files need to compile, which additional tasks need to be run, and so on. Ultimately, calls are made to various tools such as the C# compiler. This is not a one-way process: Warnings and errors produced by the underlying tools are bubbled up through the build system into the IDE, allowing for a truly interactive development experience. Figure 3.30 shows the Error List pane in Visual Studio 2012.
Figure 3.30. The Error List pane showing a build error.
Starting with Visual Studio 2005, the build system is based on a .NET Framework technology known as MSBuild. One of the rationales for this integration is to decouple the concept of project files from exclusive use in Visual Studio. To accomplish this, the project file (for C#, that is a file with a .csproj extension) serves two goals: It’s natively recognized by MSBuild to drive build processes for the project, and Visual Studio uses it to keep track of the project configuration and all the files contained in it.
To illustrate the project system, right-click the project node in Solution Explorer and choose Unload Project. Next, select Edit Hello.csproj from the same context menu (see Figure 3.31).
Figure 3.31. Showing the project definition file.
In Figure 3.32, I’ve collapsed a few XML nodes in the XML editor that is built into Visual Studio. As you can see, the IDE is aware of many file formats. Also notice the additional menus and toolbar buttons that have been enabled as we’ve opened an XML file.
From this, we can see that MSBuild projects are XML files that describe the structure of the project being built: what the source files are, required dependencies, and so forth. Visual Studio uses MSBuild files to store a project’s structure and to drive its build. Notable entries in this file include the following:
- The Project tag specifies the tool version (in this case, version 4.0 of the .NET Framework tools, including MSBuild itself), among other build settings.
- PropertyGroups are used to define name-value pairs with various project-level configuration settings.
- ItemGroups contain a variety of items, such as references to other assemblies and the files included in the project.
- Using an Import element, a target file is specified that contains the description of how to build certain types of files (for example, using the C# compiler).
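Stripped to its essentials, the project file for our Hello application looks roughly like this (a simplified sketch for illustration; the file Visual Studio generates contains quite a few more settings):

```xml
<Project ToolsVersion="4.0" DefaultTargets="Build"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <AssemblyName>Hello</AssemblyName>
    <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
  </PropertyGroup>
  <ItemGroup>
    <Reference Include="System" />
    <Compile Include="Program.cs" />
  </ItemGroup>
  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
</Project>
```

The Import of Microsoft.CSharp.targets is what teaches MSBuild how to turn the Compile items into an assembly by running the C# compiler.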
You’ll rarely touch up project files directly using the XML editor. However, for advanced scenarios, it’s good to know it’s there.
Now that you know how to inspect the MSBuild project file, go ahead and choose Reload Project from the project’s node context menu in Solution Explorer. Assuming a successful build (correct the silly typo illustrated before), where can the resulting binaries be found? Have a look at the project’s folder, where you’ll find a subfolder called bin. Underneath this one, different build flavors have their own subfolder. Figure 3.33 shows the Debug build output.
For now, we’ve just built one particular build flavor: Debug. Two build flavors, more officially known as solution configurations, are available by default. In Debug mode, symbol files with additional debugging information are built. In Release mode, that’s not the case, and optimizations are turned on, too. This is just the default configuration, though: You can tweak settings and even create custom configurations altogether. Figure 3.34 shows the drop-down list where the active project build flavor can be selected.
One of the biggest advantages of the MSBuild technology is that a build can be done without the use of Visual Studio or other tools. In fact, MSBuild ships with the .NET Framework itself. Therefore, you can take any Visual Studio project (since version 2005, to be precise) and run MSBuild directly on it. That's right: Visual Studio doesn't even need to be installed. Not only does this allow you to share your projects with others who might not have the IDE installed, but it also makes automated build processes possible (for example, by Team Foundation Server, or TFS). Because you can install TFS on client systems nowadays, automated (that is, nightly) builds of personal projects become available for individual professional developers, too.
In fact, MSBuild is nothing more than a generic build task execution platform that has built-in notions of dependency tracking and timestamp checking to see what parts of an existing build are out of date (to facilitate incremental, and hence faster, builds). It can invoke tools such as the C# compiler because the right configuration files, so-called target files, are present that declare how to run the compiler. Being written in managed code, MSBuild can also be extended easily. See the MSDN documentation on the subject for more information.
To see a command-line build in action, open a Developer Command Prompt for VS2012 from the Start menu, change the directory to the location of the Hello.csproj file, and invoke msbuild.exe (see Figure 3.35). Because there's only one recognized project file in the directory, MSBuild picks it up automatically and builds that particular project.
Because we already invoked a build through Visual Studio for the project before, all targets are up-to-date, and the incremental build support will avoid rebuilding the project altogether.
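For reference, typical invocations look like this; the project path is purely illustrative, while the switches are standard MSBuild options:

```shell
cd "C:\Projects\Hello\Hello"          # folder containing Hello.csproj (illustrative path)
msbuild                               # builds the only project file found, Debug by default
msbuild /p:Configuration=Release      # builds the Release flavor instead
msbuild /t:Rebuild                    # forces a full rebuild, bypassing the up-to-date check
```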
One of the first features that found a home under the big umbrella of the IDE concept was integrated debugging support on top of the editor. This is obviously no different in Visual Studio 2012, with fabulous debugging support facilities that you’ll live and breathe on a day-to-day basis as a professional developer on the .NET Framework.
The most commonly used debugging technique is to run the code with breakpoints set at various places of interest, as shown in Figure 3.36. Doing so right inside a source code file is easy by putting the cursor on the line of interest and pressing F9. Alternative approaches include clicking in the gray margin on the left or using any of the toolbar or menu item options to set breakpoints.
To start a debugging session, press F5 or click the button with the VCR Play icon. (Luckily, Visual Studio is easier to program than such an antique and overly complicated device.) Code will run until a breakpoint is encountered, at which point you’ll break in the debugger, as illustrated in Figure 3.37.
Notice a couple of the debugging facilities that have become available as we entered the debugging mode:
- The Call Stack pane shows where we are in the execution of the application code. In this simple example, there’s only one stack frame for the Main method, but in typical debugging sessions, call stacks get much deeper. By double-clicking entries in the call stack list, you can switch back and forth between different stack frames to inspect the state of the program.
- The Locals pane shows all the local variables that are in scope, together with their values. More complex object types will result in more advanced visualizations and the ability to drill down into the internal structure of the object kept in a local variable. Also, when hovering over a local variable in the editor, its current value is shown to make inspection of program state much easier.
- The Debug toolbar has become visible, providing options to continue or stop execution and step through code in various ways: one line at a time, stepping into or over methods calls, and so on.
More advanced uses of the debugger are sprinkled throughout this book, but nevertheless let’s highlight a few from a 10,000-foot view:
- The Immediate window enables you to evaluate expressions, little snippets of code. This way, you can inspect more complex program state that might not be immediately apparent from looking at individual variables. For example, you could execute a method to find out about state in another part of the system.
- The Breakpoints window simply displays all breakpoints currently set and provides options for breakpoint management: the ability to remove breakpoints or enable/disable them.
- The Memory window and Registers window are more advanced means of looking at the precise state of the machine by inspecting memory or processor registers. In the world of managed code, you won’t use those very often.
- The Disassembly window can be used to show the processor instructions executed by the program. Again, in the world of managed code this is less relevant (recall the role of the Just-in-Time [JIT] compiler), but all in all the Visual Studio debugger is usable for both managed and native code debugging.
- The Threads window shows all the threads executing in a multithreaded application. Since .NET Framework 4, new concurrency libraries have been added to System.Threading, and new Parallel Stacks and Parallel Tasks windows assist in debugging those, too.
Debugging is not necessarily initiated by running the application straight from inside the editor. Instead, you can attach to an already running process, even on a remote machine, using the Remote Debugger.
Visual Studio 2010 introduced the IntelliTrace feature, which enables a time-travel mechanism to inspect the program’s state at an earlier point in the execution (for example, to find out about some state corruption that happened long before a breakpoint was hit).
With the .NET Framework class libraries ever growing and other libraries being used in managed code applications, the ability to browse through available libraries becomes quite important. You’ve already seen IntelliSense as a way to show available types and their available members, but for more global searches, different visualizations are desirable. Visual Studio’s built-in Object Browser is one such tool (see Figure 3.38).
Figure 3.38. Object Browser visualizing the System.Core assembly.
This tool feels a lot like ILSpy, with the ability to add assemblies for inspection, browse namespaces, types, and members, and a way to search across all of those. It doesn’t have decompilation support, though.
An all-important set of features that form an integral part of IDE functionality today is what we can refer to collectively as “code insight” features. No matter how attractive the act of writing code may look—because that’s what we, developers, are so excited about, aren’t we?—the reality is we spend much more time reading existing code in an attempt to understand it, debug it, or extend it. Therefore, the ability to look at the code from different angles is an invaluable asset to modern IDEs.
To start with, three closely related features are directly integrated with the code editor through the context menu, shown in Figure 3.39. These enable navigating through source code in a very exploratory fashion.
Go To Definition simply navigates to the place where the highlighted “item” is defined. This could be a method, field, local variable, and so on. We talk about the meaning of those terms in the next few chapters.
Find All References is similar in nature but performs the opposite operation: Instead of finding the definition site for the selection, it looks for all use sites of it. For example, when considering changing the implementation of some method, you better find out who’s using it and what the impact of any change might be.
View Call Hierarchy was added in Visual Studio 2010 and somewhat extends upon the previous two in that it presents the user with a hierarchical representation of outgoing and incoming calls for a selected member. Figure 3.40 shows navigation through some call hierarchy.
So far, we’ve been looking at code with a fairly local view: hopping between definitions, tracing references, and drilling into a hierarchy of calls. Often, you want to get a more global view of the code to understand the bigger picture. Let’s zoom out gradually and explore more code exploration features that make this task possible.
Another addition in Visual Studio 2010 was the support for sequence diagrams, which can be generated using Generate Sequence Diagram from the context menu in the code editor. People familiar with UML notation will immediately recognize the visualization of sequence diagrams. They enable you to get an ordered idea of calls being made between different components in the system, visualizing the sequencing of such an exchange.
Notice that the sequence diagrams in Visual Studio are not passive visualizations. Instead, you can interact with them to navigate to the corresponding code if you want to drill down into an aspect of the implementation. This is different from classic UML tools where the diagrams are not tightly integrated with an IDE. Figure 3.41 shows a sequence diagram of calls between components.
To look at a software project from a more macroscopic scale, you can use the Class Diagram feature in Visual Studio, available since version 2008. To generate such a diagram, right-click the project node in Solution Explorer and select View Class Diagram. The Class Diagram feature provides a graphical veneer on top of the project’s code, representing the defined types and their members, as well as the relationships between those types (such as object-oriented inheritance relationships, as discussed in Chapter 14, “Object-Oriented Programming”).
Once more, this diagram visualization is interactive, which differentiates it from classical approaches to diagramming of software systems. In particular, the visualization of the various types and their members is kept in sync with the underlying source code so that documentation never diverges from the actual implementation. But there’s more. Besides visualization of existing code, you can use the Class Diagram feature to extend existing code or even to define whole new types and their members. Using Class Diagrams you can do fast prototyping of rich object models using a graphical designer. Types generated by the designer will have stub implementations of methods and such, waiting for code to be supplied by the developer at a later stage. Figure 3.42 shows the look and feel of the Class Diagram feature.
Figure 3.42. A class diagram for a simple type hierarchy.
Other ways of visualizing the types in a project exist. We’ve already seen the Object Browser as a way to inspect arbitrary assemblies and search for types and their members. In addition to this, there’s the Class View window that restricts the view to the projects in the current solution. A key difference is this tool’s noninteractive nature: It’s a one-way visualization of types.
Finally, to approach a solution from a high-level view, there’s the Architecture Explorer (illustrated in Figure 3.43). This one can show the various projects in a solution and the project items they contain, and you can drill down deeper into the structure of those items (for example, types and members). By now, it should come as no surprise this view on the world is kept in sync with the underlying implementation, and the designer can be used to navigate to the various items depicted. What makes this tool unique is its rich analysis capabilities, such as the ability to detect and highlight circular references, unused references, and so on.
Figure 3.43. Graph view for the solution, project, a code file item, and some types.
During the installation of Visual Studio 2012, I suggested that you install the full MSDN documentation locally using the Manage Help Settings utility. Although this is not a requirement, it’s convenient to have a wealth of documentation about the tools, framework libraries, and languages at your side at all times.
Although you can launch the MSDN library directly from the Start menu by clicking the Microsoft Visual Studio 2012 Documentation entry, more regularly you’ll invoke it through the Help menu in Visual Studio or by means of the context-sensitive integrated help functionality. Places where help is readily available from the context (by pressing F1) include the Error List (to get information on compiler errors and warnings) and the code editor itself (for lookup of API documentation). Notice that starting with Visual Studio 2012, documentation is provided through the browser rather than a standalone application. This mirrors the online MSDN help very closely.
Since the introduction of Visual Basic 1.0 (as early as 1991), Rapid Application Development (RAD) has been a core theme of the Microsoft tools for developers. Rich designers for UI development are huge time savers over a coding approach to accomplish the same task. This was true in the world of pure Win32 programming and still is today, with new UI frameworks benefiting from designer support. But as you will see, designers are also used for a variety of other tasks outside the realm of UI programming.
In .NET 1.0, Windows Forms (WinForms) was introduced as an abstraction layer over the Win32 APIs for windowing and the common controls available in the operating system. By nicely wrapping those old dragons in the System.Windows.Forms class library, the creation of UIs became much easier. And this is not just because of the object-oriented veneer provided by it, but also because of the introduction of new controls (such as the often-used DataGrid control) and additional concepts, such as data binding to bridge between data and representation.
Figure 3.44 shows the Windows Forms designer in the midst of designing a UI for a simple greetings program. On the left, the Toolbox window shows all the available controls we can drag and drop onto the designer surface. When we select a control, the Properties window on the right shows all the properties that can be set to configure the control's appearance and behavior.
To hook up code to respond to various user actions, you can create event handlers through that same Properties window by clicking the “lightning” icon on the toolbar. Sample events include Click for a button, TextChanged for a text box, and so on. And the most common event for each control can be wired up by simply double-clicking the control. For example, double-clicking the selected button produces an event handler for a click on Say Hello. Now we find ourselves in the world of C# code again, as shown in Figure 3.45.
Figure 3.45. An empty event handler ready for implementation.
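Filled in, the result looks along these lines. To keep the sketch self-contained, this version creates the controls in code rather than relying on the designer-generated .Designer.cs plumbing; control names such as sayHelloButton are assumptions, since the actual names depend on your design:

```csharp
using System;
using System.Windows.Forms;

class HelloForm : Form
{
    private readonly TextBox nameTextBox =
        new TextBox { Left = 10, Top = 10, Width = 200 };
    private readonly Button sayHelloButton =
        new Button { Left = 10, Top = 40, Text = "Say Hello" };

    public HelloForm()
    {
        Controls.Add(nameTextBox);
        Controls.Add(sayHelloButton);
        // The designer wires this up for you when you double-click the button.
        sayHelloButton.Click += SayHelloButton_Click;
    }

    private void SayHelloButton_Click(object sender, EventArgs e)
    {
        MessageBox.Show("Hello " + nameTextBox.Text);
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new HelloForm());
    }
}
```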
The straightforward workflow introduced by Windows Forms turned it into a gigantic success right from the introduction of the .NET Framework. Although we now have the Windows Presentation Foundation (WPF) as a new and more modern approach to UI development, there are still lots of Windows Forms applications out there. (So it’s in your interest to know a bit about it.)
With this, we finish our discussion of Windows Forms for now and redirect our attention to its modern successor: WPF.
Windows Presentation Foundation
With the release of the .NET Framework 3.0 (formerly known as WinFX), a new UI platform was introduced: Windows Presentation Foundation. WPF solves a number of problems:
- Mixed use of various UI technologies, such as media, rich text, controls, vector graphics, and so on, was too hard to combine in the past, requiring mixed use of GDI+, DirectX, and more.
- Resolution independence is important to make applications that scale well on different form factors.
- Decoupled styling from the UI definition allows you to change the look and feel of an application on-the-fly without having to rewrite the core UI definition.
- A streamlined designer-developer interaction is key to delivering compelling user experiences because most developers are not very UI savvy and want to focus on the code rather than the layout.
- Rich graphics and effects allow for all sorts of UI enrichments, making applications more intuitive to use.
One key ingredient to achieve these goals—in particular the collaboration between designers and developers—is the use of XAML. In essence, XAML is a way to use XML for creating object instances (for example, to represent a UI definition). The use of such a markup language allows true decoupling of the look and feel of an application from the user’s code. As you can probably guess by now, Visual Studio has an integrated designer (code named Cider) for WPF (see Figure 3.46).
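To make the idea concrete, here's roughly what such a markup-based UI definition looks like: each element corresponds to an object instance, and each attribute to a property or event hookup. (This is a simplified sketch, not the exact markup shown in the figure; names like sayHelloButton are assumptions.)

```xml
<Window x:Class="Hello.Window1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Hello" Width="300" Height="150">
  <StackPanel Margin="10">
    <TextBox x:Name="nameTextBox" />
    <Button x:Name="sayHelloButton" Content="Say Hello"
            Click="sayHelloButton_Click" />
  </StackPanel>
</Window>
```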
As in the Windows Forms designer, three core panes are visible: the Toolbox window containing controls, the Properties window with configuration options for controls and the ability to hook up event handlers, and the designer sandwiched in between.
One key difference is in the functionality exposed by the designer. First of all, observe the zoom slider on the left, reflecting WPF's resolution-independence capabilities. A more substantial difference lies in the separation between the designer surface and the XAML view at the bottom. With XAML, no typical code generation is involved at design time. Instead, XAML truly describes the UI definition in all its glory.
Based on this architecture, it’s possible to design different tools (such as Expression Blend) that allow refinement of the UI without having to share out C# code. The integrated designer therefore provides only the essential UI definition capabilities, decoupling more-involved design tasks from Visual Studio by delegating those to the more-specialized Expression Blend tool for use by professional graphical designers.
Again, double-clicking the button control generates the template code for writing an event handler to respond to the user clicking it. Although the signature of the event handler method differs slightly, the idea is the same. Figure 3.47 shows the generated empty event handler for a WPF event.
Figure 3.47. Code skeleton for an event handler in WPF.
Notice, though, that there's still a call to InitializeComponent in the Window1 class's constructor. But didn't I just say there's no code generation involved in WPF? That's almost true: The code generated here does not contain the UI definition itself. Instead, it contains the plumbing required to load the XAML file at runtime, to build up the UI. At the same time, it contains fields for all the controls that were added to the UI, so that you can address them in code. This generated code lives in a partial class definition stored in a file with a .g.i.cs extension, as illustrated in Figure 3.48. To see this generated code file, toggle the Show All Files option in Solution Explorer.
Figure 3.48. Generated code for a WPF window definition.
Notice how the XAML file (which gets compiled into the application’s assembly in a binary format called Binary Application Markup Language [BAML]) is loaded through the generated code. From that point on, the XAML is used to instantiate the UI definition, ready for it to be displayed by WPF’s rendering engine.
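As a rough sketch (the assembly name, file name, and member names here are hypothetical, not the exact generated code), the plumbing in the .g.i.cs file boils down to something like this:

```csharp
// Illustrative sketch of the generated InitializeComponent plumbing in a
// Window1.g.i.cs file; names and the pack URI are hypothetical.
public partial class Window1 : System.Windows.Window
{
    internal System.Windows.Controls.Button button1; // one field per named control

    private bool _contentLoaded;

    public void InitializeComponent()
    {
        if (_contentLoaded)
            return;
        _contentLoaded = true;

        // Loads the (BAML-compiled) XAML resource at runtime to build the UI.
        var resourceLocater = new System.Uri(
            "/WpfApplication1;component/window1.xaml", System.UriKind.Relative);
        System.Windows.Application.LoadComponent(this, resourceLocater);
    }
}
```

The guard flag simply prevents the XAML resource from being loaded twice if InitializeComponent is called more than once.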
As an aside, you can actually create WPF applications without using XAML at all by creating instances of the window and control types yourself. In other words, there’s nothing secretive about XAML; it’s just a huge convenience not to have to go through the burden of defining objects by hand.
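For instance, a minimal sketch of a XAML-free WPF application might look as follows (assuming project references to WindowsBase, PresentationCore, and PresentationFramework; all names here are illustrative):

```csharp
using System;
using System.Windows;
using System.Windows.Controls;

static class PureCodeWpf
{
    [STAThread] // WPF requires a single-threaded apartment for its UI thread
    static void Main()
    {
        // Build the UI entirely in code; no XAML anywhere.
        var button = new Button { Content = "Say Hello" };
        button.Click += (sender, e) => MessageBox.Show("Hello WPF!");

        var window = new Window
        {
            Title = "Hello",
            Width = 300,
            Height = 200,
            Content = button
        };

        // Run spins up the dispatcher loop and shows the window.
        new Application().Run(window);
    }
}
```

This is exactly what the XAML-plus-generated-code route produces for you behind the scenes, minus the convenience.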
Windows Workflow Foundation
A more specialized technology, outside the realm of UI programming, is the Windows Workflow Foundation (abbreviated WF, not WWF, to distinguish it from a well-known organization for the conservation of the environment). Workflow-based programming enables the definition and execution of business processes, such as order management, using graphical tools. The nice thing about workflows is that they have various runtime services to support transaction management, long-running operations (that can stretch over multiple hours, days, weeks, or even years), and so on.
The reason I’m mentioning WF right after WPF is the technology they have in common: XAML. In fact, XAML is a generic language to express object definitions using an XML-based format, which is totally decoupled from UI specifics. Because workflow has a similar declarative nature, it just made sense to reuse the XAML technology in WF, as well (formerly dubbed XOML, for Extensible Orchestration Markup Language).
Figure 3.49 shows the designer of WF used to define a sequential workflow.
The golden triad (Toolbox, Properties, and designer) is back again. This time, the Toolbox contains not controls but so-called activities, covering tasks such as control flow, transaction management, sending and receiving data, invoking external components (such as PowerShell), and so on. Again, the Properties window is used to configure the selected item. In this simple example, we receive data from an operation called AskUserName, bind it to the variable called name, and feed it into a WriteLine activity called SayHello. The red bullet next to SayHello is a breakpoint set on the activity for interactive debugging, illustrating the truly integrated nature of the workflow designer with the rest of the Visual Studio tooling support.
For such a simple application it’s obviously overkill to use workflow, but you get the idea. A typical example of a workflow-driven application is order management, where orders might need (potentially long-delay) confirmation steps, interactions with credit card payment services, sending out notifications to the shipping facilities, and so on. Workflow provides the necessary services to maintain this stateful long-running operation, carrying out suspend and resume actions with state (de)hydration when required.
ASP.NET

Also introduced right from the inception of the .NET Framework is ASP.NET, the server-side web technology successor to classic Active Server Pages (ASP). Core differences between the old and the new worlds in web programming with ASP-based technologies include the following:
- Support for rich .NET languages, leveraging foundations of object-oriented programming, eliminating the use of server-side script as with VBScript in classic ASP.
- First-class notion of controls that wrap the HTML and script aspects of client-side execution.
- Related to control support is the use of an event-driven approach to control interactions with the user, hiding the complexities of HTTP postbacks or AJAX script to make callbacks to the server.
- Various aspects, such as login facilities, user profiles, website navigation, and so on, have been given built-in library support to eliminate the need for users to reinvent the wheel for well-understood tasks. An example is the membership provider taking care of safe password storage, providing login and password reset controls, and so on.
- Easy deployment due to .NET’s xcopy vision. For instance, when a class library needs to be deployed to the server, there’s no need to perform server-side registrations in the world of .NET.
- A rich declarative configuration system makes deployment of web applications easier, having settings stored in a file that’s deployed with the rest of the application over any upload mechanism of choice.
From the Visual Studio point of view, ASP.NET has rich project support with a built-in designer and deployment facilities. Figure 3.50 shows ASP.NET’s page designer.
By now, designers should start to look very familiar. This time around, the markup is stored in HTML, containing various ASP.NET controls with an asp: prefix. The runat attribute set to server reveals the server-side processing involved, turning those controls into browser-compatible markup:
<asp:Button ID="Button1" runat="server" Text="Say Hello" />
Again, the Toolbox contains a wealth of usable controls available for web development, and the Properties window joins the party to assist in configuring the controls with respect to appearance, behavior, data binding, and more. Starting with Visual Studio 2012, the web page designer only shows the HTML and ASP.NET markup. No visual designer is included anymore, in favor of the separate Expression Web tool.
Hooking up event handlers is done from the markup view, by adding an attribute to the control, pointing at the handler method that can be generated on-the-fly. Figure 3.51 shows the result of adding a Click handler to a Button control. What goes on behind the scenes is much more involved. Although you still write managed code, ASP.NET wires up event handlers through postback mechanisms at runtime. With the introduction of AJAX, various postback operations can be made asynchronous as well. By doing so, no whole page refreshes have to be triggered by postback operations, improving the user experience a lot.
To simplify testing ASP.NET applications, a lightweight version of Internet Information Services (IIS), called IIS Express, comes with Visual Studio 2012. Figure 3.52 shows the notification area icon for IIS Express used in a debugging session (by a press of F5, for example).
Different application models to write web applications exist. This quick tour showed you the oldest approach using web forms. More recent additions to the ASP.NET stack include several versions of the MVC framework. Refer to books on ASP.NET for in-depth information.
Visual Studio Tools for Office
Office programming has always been an area of interest to lots of developers. With the widespread use of Office tools, tight integration with those applications provides an ideal interface to the world for business applications. Originally shipped as a separate product, Visual Studio Tools for Office (VSTO) is now integrated with Visual Studio and has support to create add-ins for the Office 2007 and later versions of Word, Excel, Outlook, PowerPoint, Visio, and InfoPath. Support for SharePoint development has been added, as well, significantly simplifying tasks like deployment, too.
One of the designer-related innovations in Visual Studio 2012 is built-in support to create Office ribbon extensions, as shown in Figure 3.53.
Figure 3.53. Ribbon designer support in Visual Studio 2012.
Server Explorer

Modern software is rarely ever disconnected from other systems. Database-driven applications are found everywhere, and so are an increasing number of service-oriented applications. Server Explorer is one of the means to connect to a server, explore aspects of it, and build software components that are used to interact with the system in question. Figure 3.54 shows one view of Server Explorer, when dealing with database connections. By adding a Component file to the project, you get an empty design surface ready for drag and drop of different types of server objects.
Figure 3.54. Server Explorer with an active database connection.
Server Explorer has built-in support for a variety of commonly used server-side technologies, including the following:
- A variety of database technologies, with support for SQL Server, Access, Oracle, OLEDB, and ODBC. Connecting to a database visualizes things such as tables and stored procedures.
- Event logs are useful from a management perspective both for inspection and the emission of diagnostic information during execution of the program. .NET has rich support to deal with logging infrastructure.
- Management Classes and Events are two faces of the Windows Management Instrumentation (WMI) technology, allowing for thorough querying and modification of the system’s configuration.
- Message queues enable reliable, possibly offline, communication between machines using the Microsoft Message Queuing (MSMQ) technology. To send and receive data to and from a queue, a mapping object can be made.
- Performance counters are another cornerstone of application manageability, providing the capability to emit diagnostic performance information to counters in the system (for example, the number of requests served per second by a service).
- The Services node provides a gateway to the management of Windows Services, allowing you to query installed services, inspect their state and configuration, and control them. In fact, C# can even be used to write managed-code OS services.
For example, in Figure 3.55, a component designer was used to create a management component containing management objects for a Windows server, a performance counter, and an event log. No code had to be written manually thanks to the drag-and-drop support from the Server Explorer onto the designer surface. You can use the Properties window to tweak settings for the generated objects.
Figure 3.55. Component designer surface with management objects.
Server Explorer is not only involved in the creation of management-focused components. In various other contexts, Server Explorer can be used to drive the design of a piece of software. One such common use is in the creation of database mappings, something so common we dedicate the whole next section to it.
Database Mappers

Almost no application today can live without some kind of data store. An obvious choice is the use of relational databases, ranging from simple Access files to full-fledged client/server database systems such as SQL Server or Oracle. While having library support for communicating with the database is a key facility present in the .NET Framework through the System.Data namespaces, there’s more to it.
One of the biggest challenges of database technologies is what’s known as the impedance mismatch between code and data. Whereas databases consist of tables that potentially participate in relationships with one another, .NET is based on object-oriented programming; therefore, a need exists to establish a two-way mapping between relational data and objects. In this context, two-way means it should be possible to construct objects out of database records, while having the ability to feed changes back from the objects to the database.
To facilitate this, various mapping mechanisms have been created over the years, each with its own characteristics, making them applicable in different contexts. At first, this might seem a bit messy, but let’s take a look at them in chronological order. We won’t go into much detail on them here; whole books have been written about each. For now, let’s just get acquainted with dealing with databases in .NET programming.
DataSet

.NET Framework 1.0 started coloring the database mapping landscape by providing a means for offline data access. This was envisioned through the concept of occasionally connected clients. The core idea is as follows.
First, parts of a database are queried and mapped onto rich .NET objects, reflecting the structure of the database records with familiar managed types. Next, those objects can be used for visualization in UIs through mechanisms like data binding in ASP.NET and Windows Forms. In addition, objects can be updated in memory, either through code or through data-binding mechanisms. An example of a popular control used in data binding is a DataGrid, which presents the data in a tabular form, just like Excel and Access do.
Visualizing and updating in-memory objects that originate from a database is just one piece of the puzzle. What about tracking the changes made by the user and feeding those back to the database? That’s precisely one of the roles of the offline mapping established through a DataSet, in collaboration with so-called data adapters that know how to feed changes back when requested (for example, by emitting UPDATE statements in SQL).
A DataSet can be used in two ways. The most interesting one is to create a strongly typed mapping where database schema information is used to map types and create full-fidelity .NET objects. For example, a record in a Products table gets turned into a Product object with properties corresponding to the columns, each with a corresponding .NET type.
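Underneath any strongly typed mapping sits the weakly typed DataSet API. As a small sketch (table and column names are purely illustrative, and no real database is involved), the change tracking that data adapters rely on can be observed directly:

```csharp
using System;
using System.Data;

public static class DataSetDemo
{
    public static DataRowState EditFirstProduct()
    {
        // An in-memory DataSet with a single Products table; in a real
        // application, a data adapter's Fill method would populate it.
        var ds = new DataSet("Northwind");
        var products = ds.Tables.Add("Products");
        products.Columns.Add("ProductName", typeof(string));
        products.Columns.Add("UnitPrice", typeof(decimal));

        products.Rows.Add("Chai", 18m);
        ds.AcceptChanges(); // pretend this state just arrived from the database

        // The in-memory edit is tracked by the DataSet, ready for a data
        // adapter to turn into an UPDATE statement when asked to.
        products.Rows[0]["UnitPrice"] = 19m;
        return products.Rows[0].RowState;
    }

    public static void Main()
    {
        Console.WriteLine(EditFirstProduct()); // Modified
    }
}
```

The RowState transitions (Added, Modified, Deleted) are precisely what the data adapter inspects to decide which SQL statements to emit.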
To create a strongly typed DataSet, Visual Studio provides a designer that can interact with Server Explorer. This makes it incredibly easy to generate a mapping just by carrying out a few drag-and-drop operations. Figure 3.56 shows the result of creating such a mapping.
LINQ to SQL
After the relatively calm .NET 2.0 and 3.0 releases in the field of database mapping technologies, Language Integrated Query (LINQ) was introduced in .NET 3.5. As discussed in Chapter 2, “Introducing the C# Programming Language” (and detailed in Chapter 18, “Events,” and Chapter 19, “Language Integrated Query Essentials”), LINQ provides rich syntax extensions to both C# and VB, to simplify data querying regardless of its shape or origin. Besides LINQ providers used to query in-memory object graphs or XML data, a provider targeting SQL Server database queries shipped with .NET Framework 3.5.
In a similar way to the DataSet designer, LINQ to SQL comes with tooling support to map a database schema onto an object model definition. Figure 3.57 shows the result of such a mapping using the Northwind sample database. One core difference with DataSet lies in the SQL-specific mapping support, as opposed to a more generic approach. This means the LINQ to SQL provider has intimate knowledge of SQL’s capabilities required to generate SQL statements for querying and create/update/delete (CRUD) operations at runtime.
Similar to the DataSet designer, Server Explorer can be used to drag and drop tables (among other database items) onto the designer surface, triggering the generation of a mapping. Notice how relationships between tables are detected, as well, and turned into intuitive mappings in the object model.
Once this mapping is established, it’s possible to query the database using LINQ syntax against the database context object. This context object is responsible for connection maintenance and change tracking so that changes can be fed back to the database.
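The query shape is plain LINQ. Although the real thing runs against the generated data context and is translated into SQL at runtime, the same syntax can be illustrated with LINQ to Objects over an in-memory collection (the Product type below is a stand-in for the generated mapping class, with illustrative data):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Product
{
    public string ProductName { get; set; }
    public decimal UnitPrice { get; set; }
}

public static class LinqDemo
{
    // With LINQ to SQL, the same query expression against a data context
    // would be turned into a SQL SELECT statement at runtime.
    public static List<string> CheapProducts(IEnumerable<Product> products) =>
        (from p in products
         where p.UnitPrice < 20m
         orderby p.ProductName
         select p.ProductName).ToList();

    public static void Main()
    {
        var products = new[]
        {
            new Product { ProductName = "Chai", UnitPrice = 18m },
            new Product { ProductName = "Chang", UnitPrice = 19m },
            new Product { ProductName = "Aniseed Syrup", UnitPrice = 25m },
        };
        foreach (var name in CheapProducts(products))
            Console.WriteLine(name); // Chai, Chang
    }
}
```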
It’s interesting to understand how the designer generates code for the mapping object model. Most designers use some kind of markup language to represent the thing being designed. ASP.NET takes an HTML-centered approach, WPF uses XAML, and DataSet is based on XSD. For LINQ to SQL, an XML file is used containing a database mapping definition, hence the extension .dbml.
To turn this markup file into code, a so-called single file generator is hooked up in Visual Studio, producing a .cs or .vb file, depending on the project language. Figure 3.58 shows the code generation tool configured for .dbml files used by LINQ to SQL. The generated code lives in the file with .designer.cs extension. Other file formats, such as .diagram and .layout, are purely used for the look and feel of the mapping when displayed in the designer. Those do not affect the meaning of the mapping in any way.
Figure 3.58. How the DBML file turns into C# code.
Not surprisingly, the emitted code leverages the partial class feature from C# 2.0 once more. This allows for additional code to be added to the generated types in a separate file. But there’s more: A C# 3.0 feature is lurking around the corner, too. Notice the Extensibility Method Definitions collapsed region in Figure 3.59?
You’ll see such a region in the various generated types, containing partial method definitions. In the data context type in Figure 3.59, one such partial method is OnCreated:
public partial class NorthwindDataContext : System.Data.Linq.DataContext
{
    #region Extensibility Method Definitions
    partial void OnCreated();
    #endregion

    public NorthwindDataContext(string connection)
        : base(connection, mappingSource)
    {
        OnCreated();
    }
}
The idea of partial methods is to provide a means of extending the functionality of the autogenerated code efficiently. In this particular example, the code generator has emitted a call to an undefined OnCreated method. By doing so, an extensibility point has been created for developers to leverage. If it’s desirable to take some action when the data context is created, an implementation for OnCreated can be provided in the sister file for the partial class definition. This separates the generated code from the code written by the developer, which allows for risk-free regeneration of the generated code at all times.
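The mechanism is easy to see outside any code generator. In this sketch (type and member names are hypothetical), the first partial class plays the role of the generated file and the second the role of the developer’s sister file:

```csharp
using System;

// What the "generated" file would contain.
public partial class DataContextSketch
{
    // Extensibility point: if no implementation is provided in another
    // part of the partial class, the compiler erases both this
    // declaration and every call site to it.
    partial void OnCreated();

    public bool WasCustomized; // for illustration only

    public DataContextSketch()
    {
        OnCreated();
    }
}

// What the developer writes in the sister file of the partial class.
public partial class DataContextSketch
{
    partial void OnCreated()
    {
        WasCustomized = true;
    }
}

public static class PartialMethodDemo
{
    public static void Main() =>
        Console.WriteLine(new DataContextSketch().WasCustomized); // True
}
```

Deleting the second partial class costs nothing at runtime: the compiler strips the unimplemented partial method and its call entirely.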
ADO.NET Entity Framework
Finally, we’ve arrived at the latest of the database mapping technologies available in the .NET Framework: the Entity Framework. Introduced in .NET 3.5 SP1, the Entity Framework provides more flexibility than its predecessors. It does this by providing a few key concepts, effectively decoupling a conceptual model from the mapping onto the database storage. This makes it possible to have different pieces of an application evolve independently of each other, even when the database schema changes. The Entity Framework also benefits from rich integration with the WCF services stack, especially OData-based WCF Data Services.
Figure 3.60 presents an architectural overview.
On the right is the execution architecture, a topic we’ll save for later. The most important takeaway from it is the ability to use LINQ syntax to query a data source exposed through the Entity Framework. In return for such a query, familiar .NET objects come back. That’s what mapping is all about.
Under the covers, the data source has an Entity Client Data Provider that understands three things:
- The conceptual model captures the intent of the developer and how the data is exposed to the rest of the code. Here entities and relationships are defined that get mapped into an object model.
- The storage model is tied to database specifics and defines the underlying storage for the data, as well as aspects of the configuration. Things such as table definitions, indexes, and so on belong here.
- Mappings play the role of glue in this picture, connecting entities and relationships from the conceptual model with their database-level storage as specified in the storage model.
To define both models and the mapping between the two, Visual Studio 2012 has built-in designers and wizards for the ADO.NET Entity Framework, as shown in Figure 3.61.
Unit Testing

A proven technique to catch bugs and regressions early is to use unit tests that exercise various parts of the system by feeding in different combinations of input and checking the expected output. Various unit testing frameworks for .NET have been created over the years (NUnit being one of the most popular), and for the past few releases Visual Studio has had built-in support for unit testing.
To set the scene, consider a very simple Calculator class definition, as shown here:
public static class Calculator
{
    public static int Add(int a, int b) { return a + b; }
    public static int Subtract(int a, int b) { return a - b; }
    public static int Multiply(int a, int b) { return a * b; }
    public static int Divide(int a, int b) { return a / b; }
}
To verify the behavior of our Calculator class, we want to call the calculator’s various methods with different inputs, exercising regular operation as well as boundary conditions. This is a trivial example, but you get the idea.
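Before involving any test framework, the kind of checks we’re after can be sketched as a plain console driver; the Calculator class from above is repeated so the snippet stands alone:

```csharp
using System;

public static class Calculator
{
    public static int Add(int a, int b) { return a + b; }
    public static int Subtract(int a, int b) { return a - b; }
    public static int Multiply(int a, int b) { return a * b; }
    public static int Divide(int a, int b) { return a / b; }
}

public static class CalculatorChecks
{
    // A boundary condition: integer division by zero throws.
    public static bool DivideByZeroThrows()
    {
        try { Calculator.Divide(1, 0); return false; }
        catch (DivideByZeroException) { return true; }
    }

    public static void Main()
    {
        Console.WriteLine(Calculator.Add(28, 14));  // 42
        Console.WriteLine(DivideByZeroThrows());    // True
    }
}
```

A unit test framework automates exactly these kinds of checks, reporting each one's pass/fail status instead of relying on manual inspection of console output.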
Unit tests in Visual Studio are kept in a separate type of project that’s hooked up to a test execution harness, reporting results back to the user. This underlying test execution infrastructure can also be used outside Visual Studio (for example, to run tests centrally on some source control server). While different types of test projects exist, unit tests are by far the most common, allowing for automated testing of a bunch of application types. Manual tests describe a set of manual steps to be carried out to verify the behavior of a software component. Other types of test projects include website testing, performance testing, and so on.
To create a unit test project, right-click the solution in Solution Explorer and choose Add, New Project to add a test project (see Figure 3.62).
Next, right-click the newly created project node in Solution Explorer, and choose Add Reference. In the Reference Manager dialog, add a reference to the project containing the Calculator (see Figure 3.63).
The unit test project contains an empty test class with an empty test method, as shown here:
[TestClass]
public class UnitTest1
{
    [TestMethod]
    public void TestMethod1() { }
}
Our task is now to replace the code in the template with test methods that check the behavior of our Calculator. A much too simplistic example is shown here:
[TestMethod]
public void AddTest()
{
    int a = 28;
    int b = 14;
    int expected = 42;
    int actual = Calculator.Add(a, b);
    Assert.AreEqual(expected, actual);
}
To assert the expected behavior, we use helper methods on the Assert class. For example, the Assert.AreEqual test checks for equality of the supplied arguments.
Once unit tests are written, they’re ready to be compiled and executed in the test harness. This is something you’ll start to do regularly to catch regressions in code when making changes. Figure 3.64 shows a sample test run result, triggered through the Test, Run, All Tests menu item.
Figure 3.64. Test results.
It turns out I introduced an error in the Subtract method, as caught by the unit test. Or the test could be wrong. Regardless, a failed test case screams for immediate attention to track down the problem. Notice that you can also debug through test cases, just like regular program code.
Tightly integrated with unit testing is the ability to analyze code coverage. It’s always a worthy goal to keep code coverage numbers high (90% as a bare minimum is a good goal, preferably more) so that you can be confident about the thoroughness of your test cases. Visual Studio actually has built-in code highlighting to contrast the pieces of code that were hit during testing from those that weren’t.
To finish off our in-depth exploration of Visual Studio 2012 tooling support, we take a brief look at support for developing software in a team context. Today’s enterprise applications are rarely ever written by a single developer or even by a handful of developers. For example, the .NET Framework itself has hundreds of developers and testers working on it on a day-to-day basis.
Team System and Team Foundation Server
To deal with the complexities of such an organization, Visual Studio Team System (VSTS) provides development teams with a rich set of tools. Besides work item and bug tracking, project status reporting, and centralized document libraries, source control is likely the most visible aspect of team development.
The entry point for the use of Team Foundation Server (TFS) is the Team Explorer window integrated in Visual Studio 2012 (see Figure 3.65).
Figure 3.65. Team Explorer in Visual Studio 2012.
Here is a quick overview of the different parts of the Team Explorer:
- The drop-down at the top represents the TFS server we’re connected to. One of the nice things about TFS is its use of HTTP(S) web services (so there is no hassle with port configurations). Each server can host different team projects.
- Work Items is the collective name for bug descriptions and tasks assigned to members of the team. Queries can be defined to search on different fields in the database. Via the Work Items view, bugs can be opened, resolved, and so on.
- Documents displays all sorts of documentation—Word documents, Visio diagrams, plain old text files, and such—that accompany the project. Those are also available through a SharePoint web interface.
- Reports leverages the SQL Server Reporting Services technology to display information about various aspects of the project to monitor its state. Examples include bug counts, code statistics, and so on.
- Builds allows developers to set up build definitions that can be used for product builds, either locally or remotely. It’s a good practice for team development to have a healthy product build at all times. Automated build facilities allow configuration of daily builds and such.
- Source Control is where source code is managed through various operations to streamline the process of multiple developers working on the code simultaneously. This is further integrated with Solution Explorer.
Source control stores source code centrally on a server and provides services to manage simultaneous updates by developers. When a code file requires modification, it’s checked out to allow for local editing. After making (and testing) the changes, the opposite operation of checking in is used to send updates to the source database. If a conflicting edit is detected, tools assist in resolving that conflict by merging changes.
Figure 3.66 shows the presence of source control in Visual Studio 2012, including rich context menus in Solution Explorer and the Source Control Explorer window.
Figure 3.66. Source control integrated in Visual Studio 2012.
Other capabilities of source control include rich source code versioning (enabling going back in time), shelving edits for code review by peer developers, correlation of check-ins to resolved bugs, and the creation of branches in the source tree to give different feature crews their own playgrounds.