
Inversion of Control with the Managed Extensibility Framework (MEF)

Building modular and extensible applications in .NET is much easier now with the inclusion of the Managed Extensibility Framework (MEF) in .NET 4.0. Jeremy Likness explains why MEF is also the perfect solution for inversion of control, showing how to use it in .NET applications.

The Managed Extensibility Framework (MEF) is an exciting new technology that ships with the .NET 4.0 runtime. It is included in the full .NET Framework as well as the Silverlight CLR. MEF helps you build applications that are incredibly extensible and easy to maintain. While it has many advanced features, from tagging plug-ins with metadata to dynamic XAP loading in Silverlight, it also functions as a very robust Inversion of Control (IoC) container.
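To make the IoC claim concrete, here is a minimal sketch of attribute-based composition with MEF. The ILogger/ConsoleLogger names are illustrative, not from the article; the MEF types (Export, Import, AssemblyCatalog, CompositionContainer) live in the System.ComponentModel.Composition assembly that ships with .NET 4.0.

```csharp
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

// Hypothetical contract and implementation used only to illustrate composition.
public interface ILogger
{
    void Log(string message);
}

[Export(typeof(ILogger))]
public class ConsoleLogger : ILogger
{
    public void Log(string message) { Console.WriteLine(message); }
}

public class Application
{
    // MEF satisfies this import with the matching export at composition time.
    [Import]
    public ILogger Logger { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // Discover exports in the current assembly and compose the parts.
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        using (var container = new CompositionContainer(catalog))
        {
            var app = new Application();
            container.ComposeParts(app); // injects ConsoleLogger into Logger
            app.Logger.Log("Composed via MEF");
        }
    }
}
```

Note that Application never constructs ConsoleLogger itself; the container wires the two together, which is the inversion this article builds toward.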

The Problem: Decoupling

Many principles govern software development. One important concept that applies to most enterprise software is the idea of decoupling. Decoupling refers to the logical (and sometimes physical) separation of components in the software application. A direct dependency between two components creates coupling: one component cannot function without the other. A highly coupled system is fragile because a change to one component can affect many other components. Components can never be completely decoupled (otherwise they would never be able to interact), but you can apply two guiding principles to help reduce the number of dependencies within the system:

  • Single-responsibility principle
  • Open/closed principle

The following sections examine these principles in detail.

The Single-Responsibility Principle

The single-responsibility principle states that a component should be written to do only one thing. Consider a software package for a calculator. The calculator draws a fancy device on the screen with advanced graphics, accepts user input, and performs computations. Each of these tasks, while part of the whole system, is really a separate responsibility.

In a highly coupled system, one piece of code might accept the input, perform the calculations, and return the result. The problem with this approach is that changes to the input inevitably require changes to the rest of the code. Testing the code is nearly impossible without actually typing input into the calculator. By dividing the application into separate components, each of which has a single responsibility, you can minimize the overhead of refactoring and achieve unit testing more easily, as shown in Figure 1. One class simply focuses on performing the calculations, independently of how the inputs were received. Another class deals with the display separately. A third class handles accepting the user inputs.

Figure 1 Coupled system versus decoupled system.
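The decoupled side of Figure 1 can be sketched in code. The class and method names below are illustrative, not from the article; the point is that each class owns exactly one responsibility, so the calculation engine can be unit tested without any screen or keyboard involved.

```csharp
using System;
using System.Globalization;

// Responsibility 1: computation only -- no knowledge of input or display.
public class CalculationEngine
{
    public double Add(double left, double right)
    {
        return left + right;
    }
}

// Responsibility 2: turning raw user input into operands.
public class InputParser
{
    public double Parse(string raw)
    {
        return double.Parse(raw, CultureInfo.InvariantCulture);
    }
}

// Responsibility 3: formatting results for the display.
public class ResultFormatter
{
    public string Format(double result)
    {
        return string.Format(CultureInfo.InvariantCulture, "= {0}", result);
    }
}
```

A change to how input is captured now touches only InputParser, and a test can exercise CalculationEngine directly: `new CalculationEngine().Add(2, 3)` returns 5 with no UI in sight.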

The Open/Closed Principle

Another approach is the open/closed principle, which states that a component should be open for extension but closed for modification. This design builds on the single-responsibility principle by stating that a well-constructed component will hide the logic of how it handles its responsibility from other components that interface with it. The component might be extended with new functionality, but any modifications happen internally.

One example of this technique would be storing internal data using a local database rather than an XML document. If the component exposed data using a database-specific artifact, other components would take on a dependency on that database. On the other hand, a component constructed to hide those details and simply return strongly typed data can be refactored to use a different storage mechanism, and this change wouldn't affect other components.
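A sketch of that idea, with hypothetical Customer/CustomerStore names not taken from the article: the store's public surface exposes only strongly typed data, so the persistence mechanism behind it can change freely.

```csharp
using System.Collections.Generic;

// Illustrative strongly typed data returned to callers.
public class Customer
{
    public string Name { get; set; }
}

// The store hides its persistence details behind a stable public surface.
public class CustomerStore
{
    // Today this is an in-memory list; it could be replaced by an XML file
    // or a local database without changing the public members below.
    private readonly List<Customer> _storage = new List<Customer>();

    public void Save(Customer customer)
    {
        _storage.Add(customer);
    }

    public IEnumerable<Customer> GetAll()
    {
        return _storage;
    }
}
```

Callers depend only on Customer and the Save/GetAll signatures, so the component stays closed for modification from their point of view while remaining open to internal extension.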

Interfaces and Implementation

The challenge with decoupled systems is providing a way for components to communicate efficiently without creating related dependencies. This communication is typically handled by using interfaces. An interface is a simple contract that describes what's expected from a component. The interface (or signature) of a component exposes inputs and outputs for processes that the component supports, but it doesn't provide any implementation. Components can be written to interact with the interface, without having a dependency on the underlying component.

A travel agent is a real-world example of an interface. The contract accepts the input of the details of your vacation, along with the necessary funds for transportation and lodging. The output is the airline ticket, hotel confirmation, and itinerary. Behind the scenes, the agent might purchase your tickets online or by calling the airline directly. The tickets might be faxed, emailed, or sent through regular mail. The outcome is the same—your dependency is on the interface, not the "implementation," so you aren't concerned with the details of how your trip was booked. The interface might look like this:

public interface IBookTravel
{
    Itinerary BookVacation(VacationRequest request, double funds);
}

Programming using interfaces introduces another hurdle: How is the implementation created? It's not enough for your component to ask for an itinerary. Somehow the interface must be satisfied with an actual component, whether it's the agent on the phone or a booking web page. Imagine if you satisfied the interface like this:

public class BookingManager
{
    public void DoBooking()
    {
        IBookTravel booking = new BookTravelByPhone();
        ...
    }
}

You're using the interface for your communication to the booking component, but creating the implementation introduced a dependency. Now your program "knows" the details of how the vacation is booked, and it has a dependency: If the method changes, the program must change as well.

How do we solve this problem? The Inversion of Control pattern is exactly what we need.
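As a preview of that pattern, here is a minimal constructor-injection sketch of the same BookingManager, assuming placeholder Itinerary and VacationRequest types (the article doesn't define their members). The manager no longer constructs its collaborator; the implementation is handed in from outside, whether by hand or by a container such as MEF.

```csharp
// Placeholder types standing in for the article's domain objects.
public class VacationRequest { }
public class Itinerary { }

public interface IBookTravel
{
    Itinerary BookVacation(VacationRequest request, double funds);
}

public class BookingManager
{
    private readonly IBookTravel _booking;

    // The dependency is injected, not created; BookingManager no longer
    // "knows" whether the vacation is booked by phone or online.
    public BookingManager(IBookTravel booking)
    {
        _booking = booking;
    }

    public Itinerary DoBooking(VacationRequest request, double funds)
    {
        return _booking.BookVacation(request, funds);
    }
}
```

Swapping BookTravelByPhone for an online implementation now requires no change to BookingManager at all, and a fake IBookTravel makes the class trivially unit testable.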
