Archive for the ‘MEF’ Tag

Managed Extensibility Framework (MEF) and other extensibility options in .NET…


In recent years the patterns and frameworks used to facilitate extensibility in applications and frameworks have been getting more and more attention.  Extensibility becomes important when we are writing applications and frameworks that need to be flexible in order to have broad use.  These applications often need to accommodate situations that are completely unknown at the time they are written.  Allowing for extensibility can make applications and frameworks more resilient and somewhat future-proof.

There are several patterns and frameworks that are often used for extensibility.  I will cover a few of these, ending with a discussion of the Managed Extensibility Framework that will be built into .NET 4.0 (the next, as-yet-unreleased version of .NET).

Inversion of Control/Dependency Injection

One such extensibility pattern that has become popular over the last few years is the idea of “Inversion of Control” and “Dependency Injection.”  The “Inversion of Control” pattern inverts the more traditional software development practice where objects and their lifetimes are created and managed directly by the code that uses them.  Using the Inversion of Control pattern, object creation and lifetime are controlled and configured at the upper rather than the lower levels of the application, usually through what’s known as an Inversion of Control container (IOC container).  Once configured, the IOC container can be used to create objects with a specified interface.  When using IOC containers, it is important that object dependencies inside your application are expressed using interfaces rather than the actual classes.  IOC containers facilitate extensibility because the actual classes used in the lower levels of your application can be changed down the road without needing to edit those class libraries, making your code flexible and resilient.

There are several very popular IOC containers available for use in your code.  Many of these containers offer robust features that allow you to control the lifetime of the created objects as well as perform Dependency Injection (DI).  Using Dependency Injection, an IOC container can automatically set the properties and constructor parameters of an object as it is being built up by the container.  Very powerful stuff.

The following are some of the more popular IOC containers available today:

  • Unity : This is an IOC container developed by Microsoft’s Patterns and Practices group built on top of Enterprise Library’s Object Builder.
  • Castle Windsor : This was one of the original IOC container implementations and is very sophisticated in what it can do and how it can be configured.
  • Structure Map : This IOC container implementation was created by “Alt.net” blogger Jeremy Miller to help facilitate software development best practices including test driven development. This IOC implementation introduced an innovative “fluent” configuration interface that makes it easy to configure without using XML.
  • Ninject : This is another innovative IOC/Dependency Injection container written by Nate Kohari that also uses an XML-free fluent interface to configure.
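
As a concrete illustration of the ideas above, here is a minimal sketch using Unity (the first container listed above).  The ILogger, FileLogger, and OrderService types are made up for this example; the other containers offer analogous registration APIs.

using System;
using Microsoft.Practices.Unity;

// A made-up dependency expressed as an interface.
public interface ILogger
{
    void Log(string message);
}

public class FileLogger : ILogger
{
    public void Log(string message) { Console.WriteLine("LOG: " + message); }
}

// The consuming class depends only on the interface; the concrete
// implementation is supplied through constructor injection.
public class OrderService
{
    private readonly ILogger _logger;

    public OrderService(ILogger logger)
    {
        _logger = logger;
    }

    public void PlaceOrder() { _logger.Log("Order placed."); }
}

public class Program
{
    public static void Main()
    {
        // Configuration happens at the top level of the application.
        var container = new UnityContainer();
        container.RegisterType<ILogger, FileLogger>();

        // The container creates OrderService and injects its ILogger dependency.
        var service = container.Resolve<OrderService>();
        service.PlaceOrder();
    }
}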

 

Managed Add-in Framework (MAF)

Another extensibility option offered by Microsoft is the Managed Add-in Framework (MAF).  This was introduced in the System.AddIn namespace in .NET 3.5.  This framework is a great way to allow 3rd parties to write plugins for your application.

The interesting thing about this framework is that plugins can be configured to run inside their own app domain.  This is great because it prevents 3rd party plugins from crashing your application.  It can also allow a 3rd party add-in to be “sandboxed” so that it can be more tightly secured.

Another interesting feature of this framework is that it provides a way to allow plugins to be fully forward and backward compatible with different versions of a host application.  This is possible because there is an isolation layer that exists between the application and its plugins.  This isolation is achieved using the add-in pipeline (a.k.a. the communication pipeline).  It is this pipeline that provides the support for the versioning and isolation offered by this framework.
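
To give a rough feel for the host side of this, here is a hedged sketch of discovering and activating an add-in with the System.AddIn.Hosting types.  The GreeterHostView class and the pipeline path are placeholders; in a real pipeline the host view is one of the generated assemblies described below.

using System;
using System.AddIn.Hosting;

// Hypothetical "host view of the add-in" (normally this abstract class lives
// in its own assembly and is produced with the Pipeline Builder).
public abstract class GreeterHostView
{
    public abstract string Greet(string name);
}

public class AddInHost
{
    public static void Main()
    {
        // Hypothetical pipeline root folder that contains the contract,
        // view, and adapter assemblies plus the AddIns folder.
        string pipelineRoot = @"C:\MyApp\Pipeline";

        // Rebuild the add-in store so newly deployed add-ins are discovered.
        AddInStore.Update(pipelineRoot);

        // Find add-ins that match the host's view of the add-in.
        var tokens = AddInStore.FindAddIns(typeof(GreeterHostView), pipelineRoot);

        foreach (AddInToken token in tokens)
        {
            // Activating with a security level runs the add-in in its own,
            // sandboxed application domain, isolating the host from crashes.
            GreeterHostView addIn = token.Activate<GreeterHostView>(AddInSecurityLevel.Internet);
            Console.WriteLine(addIn.Greet("host"));
        }
    }
}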

[Diagram: the MAF add-in communication pipeline]

In simple terms, a contract is defined for the add-in and is implemented on both the host application side and the add-in side.  When developing an add-in, the host-side and add-in-side adapters and views are code generated into four different assemblies using a “Pipeline Builder,” a Visual Studio add-in that can be downloaded separately.  These generated parts do most of the heavy lifting needed to do things like crossing app domains and handling add-in activation.

The “Host View of the add-in” and the “Add-in View” are both generated abstract classes that contain separate views of the object model that the host and the add-in share.  The side adapter classes are used to adapt methods to and from the contract.  The contract, which is an interface derived from IContract, is the only type that is loaded into both the host and the add-in.
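
For illustration, a contract for a trivial add-in might look something like the sketch below.  The ICalculatorContract interface is made up, but IContract and the AddInContract attribute are the actual types from System.AddIn.Contract and System.AddIn.Pipeline.

using System.AddIn.Contract;
using System.AddIn.Pipeline;

// The contract is the only type loaded into both the host's and the add-in's
// application domains, so it should only expose types that can safely cross
// the isolation boundary.
[AddInContract]
public interface ICalculatorContract : IContract
{
    double Add(double a, double b);
    double Subtract(double a, double b);
}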

Addins created with this framework can be shared across different applications as long as they share the same contract.  This makes these add-ins very versatile.

As I mentioned above, another huge benefit of MAF is the ability to have forward and backward compatibility when using add-ins.  The diagram below shows a host application that has been updated from version 1 to version 2 and how it can consume add-ins built for the new version (V2) as well as the older version (V1).

[Diagram: a V2 host application consuming both V2 and V1 add-ins]

To make this possible, the V1 add-in just needs a new add-in-side adapter to adapt the V2 contract to the V1 add-in.

As you can see, MAF is a pretty powerful plug-in technology.  The main downside that I can see is the number of assemblies that need to be created for a plugin (although they are generated).


Managed Extensibility Framework (MEF)

I learned about a new extensibility framework at the PDC08 called the Managed Extensibility Framework (MEF).  Since I started following conversations and blog posts on MEF, I have seen lots of talk about how it compares to other extensibility options like the ones described in this blog post (i.e. IOC containers and MAF).  The MEF team is quick to point out that although MEF has a lot in common with these, it is not being designed as a replacement for IOC containers or MAF.

In the words of Krzysztof Cwalina, a program manager on the .NET Framework team:

MEF is a set of features referred in the academic community and in the industry as a Naming and Activation Service (returns an object given a “name”), Dependency Injection (DI) framework, and a Structural Type System (duck typing). These technologies (and other like System.AddIn) together are intended to enable the world of what we call Open and Dynamic Applications, i.e. make it easier and cheaper to build extensible applications and extensions.

MEF takes the idea of extensibility many steps forward in that it allows applications to be created from composable parts.  Unlike MAF, which connects a host application to an add-in that implements a specific interface, MEF doesn’t really distinguish between host and add-in.  Instead, it allows interfaces to be designated as either Imports or Exports and then uses MEF’s CompositionContainer (which is similar to an IOC container) to hook up the imports and exports at runtime as objects are requested from the container.  This takes the IOC concept further because MEF actually manages the dependencies between parts.

To break MEF down into its core building blocks, MEF consists of a “Catalog” that holds meta-data regarding “Parts” that can be used to compose an application.  The “Parts” are objects that can expose their behavior to other parts (which is referred to as an Export) or consume the behavior of other parts (which is referred to as an Import).  Parts can even export their behavior and at the same time import the behavior of another part.  These parts are composed at runtime by a CompositionContainer using all of the import and export meta-data.

So as you can see, MEF is really all about composition rather than a pure plugin architecture because the lines between host and extension are blurred.  In fact, the host application can expose its functionality to the extensions just as the extensions expose functionality that can be used by the host application.  The other interesting thing is that the extensions themselves can be composed from each other, where a given extension can depend on functionality offered by another extension in a very loosely coupled way.  Managing all of these dependencies is something that MEF does exceptionally well.

In Glenn Block’s session at the PDC08, "A Lap Around the Managed Extensibility Framework", he talked about this composition in terms of “Needs” and “Haves”.  For example, a part A may say “I have a toolbar” while another part B may say “I need a toolbar”.  In this simple case, part A would “Export” its toolbar interface while part B “Imports” a toolbar interface.  Even though part A and part B know nothing about each other, MEF’s CompositionContainer can hook up these two parts to compose the total functionality at runtime.
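
In code, that toolbar example might look roughly like the following sketch.  The IToolbar interface and the class names are made up for illustration; only the Export and Import attributes come from MEF.

using System.ComponentModel.Composition;

// A contract both parts agree on.
public interface IToolbar
{
    void AddButton(string caption);
}

// Part A: "I have a toolbar" -- it exports the IToolbar contract.
[Export(typeof(IToolbar))]
public class StandardToolbar : IToolbar
{
    public void AddButton(string caption) { /* render a button */ }
}

// Part B: "I need a toolbar" -- it imports the IToolbar contract.
public class DocumentEditor
{
    [Import(typeof(IToolbar))]
    public IToolbar Toolbar { get; set; }
}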

Above, we talked about the basic building blocks in MEF.  Below is another diagram that expands this view just a bit more:

[Diagram: parts with imports and exports, catalogs, and the CompositionContainer]

From this diagram you can see that a “Part” can have both “Imports” and “Exports”.  As previously stated, it is the responsibility of the CompositionContainer to use the knowledge of the “Imports” and “Exports” to compose the application.  It is also important to note that the parts themselves can be exposed to the CompositionContainer from a number of sources called Catalogs.  There are many different types of catalogs: catalogs that are hard coded with assemblies that contain parts, catalogs that bring in parts from assemblies stored in a specified directory, and even custom catalogs that can get parts from a WCF service (to name a few).

In the examples I’ve seen of catalogs that pull from a directory, there has also been a notification mechanism that allows new parts to be included and composed at runtime using a file watcher.  This allows very interesting scenarios where an application/framework can be extended even while it is running.
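
The exact catalog types have shifted between MEF previews, but a “watch a folder for new parts” setup might look roughly like the sketch below, using the DirectoryCatalog type found in later MEF drops; the plug-in folder path is just a placeholder.

using System.ComponentModel.Composition.Hosting;
using System.IO;

public class PluginHost
{
    private readonly DirectoryCatalog _catalog;
    private readonly CompositionContainer _container;
    private readonly FileSystemWatcher _watcher;

    public PluginHost(string pluginFolder)
    {
        // Catalog that exposes parts from every assembly in the folder.
        _catalog = new DirectoryCatalog(pluginFolder);
        _container = new CompositionContainer(_catalog);

        // When a new assembly is dropped into the folder, re-scan the catalog
        // so its parts become available for composition while the app runs.
        _watcher = new FileSystemWatcher(pluginFolder, "*.dll");
        _watcher.Created += (sender, e) => _catalog.Refresh();
        _watcher.EnableRaisingEvents = true;
    }
}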

So how is all of this composition accomplished in the MEF framework?  At first glance, it would appear that in order to properly determine what behavior a given class imports and/or exports, the CompositionContainer would need to load each class in order to determine its capabilities.  That sounded very slow.  But as I looked further, I found that a given Part is instead decorated with attributes that are used to determine what “contract” a given class imports and exports.  So it is this static meta-data that is interrogated rather than having to load the individual classes.  Building up this meta-data also makes it possible to statically verify the dependency graph for all of the parts in the container.

A contract in MEF is simply a string identifier rather than an actual .NET type.  So, even though a contract is meant to identify a set of specific functionality, it is really just a name that is assigned to that functionality so that it can be identified.  This allows the CompositionContainer to match Imports up with Exports when it is composing the application.  It is assumed that an Export with the same contract as an Import will be able to be cast to the type of the Import at runtime.  If the actual .NET type of the named Export does not match the .NET type of the associated Import property, a casting exception is thrown.

These Import and Export attributes have three main constructors: 

  • A default constructor that takes no parameters.  For this constructor, the contract name is defaulted to the fully qualified name of the type of the class, property or method it decorates.
  • A constructor that accepts a .NET Type.  For this constructor, the contract name is set to the fully qualified name of the passed in type. 
  • A constructor that accepts a string. For this constructor, the contract name is set to this passed in string.  Although this is an option, in the current MEF implementation, the other two constructors are recommended over this one because using this constructor can lead to naming collisions if you are not careful.

MEF allows the Import attribute to decorate properties, which can hold either an object or a delegate, while the Export attribute can decorate a class, a property, or a method.
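
The sketch below illustrates the three naming options on exports along with an object import and a delegate import.  The interfaces, classes, and contract-name strings are all made up for this example.

using System;
using System.ComponentModel.Composition;

public interface ISpellChecker
{
    bool Check(string word);
}

// Default constructor: the contract name defaults to the fully qualified
// type name of the decorated class.
[Export]
public class SimplePart { }

// Type constructor: the contract name is the fully qualified name of the type.
[Export(typeof(ISpellChecker))]
public class EnglishSpellChecker : ISpellChecker
{
    public bool Check(string word) { return true; }
}

public class StringNamedExports
{
    // String constructor: the contract name is the literal string
    // (use with care to avoid naming collisions).
    [Export("MyCompany.Greeter")]
    public string Greet(string name) { return "Hello " + name; }
}

public class Consumer
{
    // Property import of an object, matched by the type-based contract name.
    [Import]
    public ISpellChecker SpellChecker { get; set; }

    // Property import of a delegate, matched to the exported method above
    // by its string contract name.
    [Import("MyCompany.Greeter")]
    public Func<string, string> Greet { get; set; }
}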

Previously, MEF supported what is known as “Duck-Typing”.  This allowed MEF to compose an Import and an Export together as long as the “shape” of the import and export interfaces was the same.  The concept of duck-typing is more prevalent in dynamic languages, and it is basically the idea that if two interfaces look the same, that is, their properties and methods have the same names and use the same parameter types, then they are equivalent even if their static types are not the same.  In a nutshell, duck-typing basically says that if it looks like a duck and quacks like a duck, it must be a duck.

This duck-typing was previously achieved in MEF by generating IL on the fly.  In the current release of MEF, this functionality has been removed.  I’m told that removing the previous implementation of duck-typing was a difficult decision that centered around the fact that the implementation used would be difficult to maintain.  While duck-typing was fairly straightforward to implement in simple cases where simple types were used as method parameters, it could become somewhat unwieldy when dealing with complex types as parameters (especially if those complex types contained methods and properties that themselves used complex types, and so on).

From what I understand, the MEF team may bring back similar duck-typing functionality in a future version of MEF, either through the NOPIA (no-PIA) work coming in .NET 4.0 or using other new dynamic functionality that will be part of .NET 4.0, but they would not commit to when or if this would happen.  See my previous blog post, “The future of C#…”, for more information about the new dynamic features that are being added to the .NET 4.0 CLR.

OK…that’s interesting, but why would Duck-typing be important for future versions of MEF (in my opinion)?  What advantages would it bring?

Many plugin-extensibility solutions require a third interface assembly that is shared between two assemblies if they are to be composed at runtime.  This interface assembly must be available to the different teams that may be building these various plugin assemblies, and it can be a problem in cases where we wish to compose assemblies that were not necessarily built to be composed together.  In these cases, it is very powerful to be able to match interfaces in a looser way at runtime.  Not having to depend on actual static types makes these composable solutions much more resilient to change.  As you can see, this looser typing will be important and much anticipated in the future of MEF.

An example of MEF in code…

To allow a more concrete view of MEF, let’s look at some simple code to get a better understanding of how MEF is actually used:

using System.ComponentModel.Composition;   // the MEF preview bits

namespace MyNamespace
{
    public interface ISomethingUseful
    {
        void UsefulMethod();
        int UsefulProperty { get; set; }
    }

    // Exports the ISomethingUseful contract.
    [Export(typeof(ISomethingUseful))]
    public class SomethingUseful : ISomethingUseful
    {
        public void UsefulMethod() { }
        public int UsefulProperty { get; set; }
    }

    // Imports an ISomethingUseful part; the contract name defaults to the
    // fully qualified name of the property type.
    public class Bar
    {
        [Import]
        public ISomethingUseful SomethingUseful { get; set; }
    }

    public class Host
    {
        private CompositionContainer _container;

        public void Compose()
        {
            var catalog = new AttributedTypesPartCatalog(typeof(SomethingUseful), typeof(Bar));
            _container = new CompositionContainer(catalog.CreateResolver());
            _container.Compose();
        }
    }
}
 

The above code is very simple but shows how imports and exports are defined.  The “Import” attribute doesn’t specify a contract, so it uses the fully qualified name of the type that it decorates (in this case “MyNamespace.ISomethingUseful”).  The catalog type used in this example simply specifies the classes that are to be composed.  Then its “Resolver” is handed to the container; the resolver is what creates the actual instances of the classes.  Finally, the call to the container’s Compose() method is where everything comes together and all of the classes are created and hooked up using the specified “Import” and “Export” directives.

This is just a very simple case where there is a one-to-one cardinality between an “Import” and an “Export”.  It is also possible to have multiple parts that export the same contract, along with an import that accepts multiple parts with that contract.  An example of how these parts might be defined is as follows:

using System.Collections.Generic;
using System.ComponentModel.Composition;

namespace MyNamespace
{
    // A second part that exports the same ISomethingUseful contract.
    [Export(typeof(ISomethingUseful))]
    public class AnotherSomethingUseful : ISomethingUseful
    {
        public void UsefulMethod() { }
        public int UsefulProperty { get; set; }
    }

    // Imports every part that exports the ISomethingUseful contract.
    public class AnotherBar
    {
        [Import]
        public IEnumerable<ISomethingUseful> UsefulSomethings { get; set; }
    }
}
 

In the above code we see another class that exports “MyNamespace.ISomethingUseful” along with a class that imports one-to-many “MyNamespace.ISomethingUseful” parts by declaring its property as IEnumerable<ISomethingUseful>.

MEF is designed specifically to handle large applications and frameworks that have high extensibility requirements.  One of the first Microsoft applications to take a dependency on MEF is Visual Studio.  In Scott Guthrie’s keynote address at PDC08, he showed an example where Visual Studio 2010 used a MEF extension to show an HTML view of C# code comments inside the actual code while it was being edited.  Because the comment viewer plugged into a MEF extension point within Visual Studio, the editor itself had no idea that its comments were being presented differently.

I hope that I have been able to give you a feel for what MEF is, although I’ve only just touched on what it can do.  I think that this framework will open up many interesting opportunities to support very rich extensibility scenarios not possible with other extensibility patterns and frameworks.

If you are interested in learning more about MEF, I would suggest you watch the session on it that was given at the PDC:

Managed Extensibility Framework: Overview – presenter: Glenn Block


The following screencast is also very good because it shows some simple coding examples using MEF:

DNRTV Show #130: Glenn Block on MEF, the Managed Extensibility Framework

Here are some other links that are also worth reading:

MEF Community Site

Ayende Rahien’s blog post on how MEF differs from IOC

Sidar Ok Bloggings on MEF

PDC08, from my perspective…


Welcome to the PDC08

The PDC was a great conference showing much of Microsoft’s future vision for their products and platforms.

I would say that this conference could be broken up into several major areas of interest:

  • Oslo
  • Azure Cloud Operating System
  • Live Services
  • User Interface – Silverlight
  • Languages – C#, Dynamic Languages, F#
  • Windows 7

Overall, I was very impressed with Microsoft’s ability as a company to coordinate the efforts of many diverse groups and technologies throughout the company. It seems that many of the efforts and initiatives that Microsoft has undertaken are starting to come together into a common cohesive vision. That said, it was also evident to me that they still have a bit of work to do to bring all of this new technology together for prime time usage. Although the PDC is an event that usually occurs only every few years, I did hear a rumor that Microsoft has already announced a PDC09. This makes me believe that the timing of this PDC may have been a little aggressive. For many of the Azure and Live services sessions the presenter was in constant IM contact with the Microsoft datacenter. That tells me that there were lots of stability concerns regarding the products that had strong reliance on their cloud services.

My personal interests drove me to many of the sessions on their cloud-based services and Oslo. It was pretty well known that Microsoft would be announcing a cloud-based platform and the Live Mesh client-side platform, but many of the details were clouded in secrecy before the PDC. Oslo had also been talked about a little before the PDC and was shown in more detail at the conference.

Over the last several years Microsoft has been building datacenters at a record pace. During the 1st keynote address, Ray Ozzie talked about how Microsoft realized, in building up its own web-based internet properties, that many of these same activities were being undertaken by large and small companies around the world. While many large companies could afford to build large datacenters that provide reliability, redundancy, fault tolerance, etc., many not-so-big companies were struggling with this overwhelming task. Even very large companies were having trouble scaling out their services to handle geo-location and fault tolerance. It was becoming clear to Microsoft that cloud-based services were needed to complement on-premises software. In his keynote, Ozzie recognized similar efforts by both Google and Amazon in this area.

With Windows Azure, Microsoft differentiates itself from the cloud-based services offered by Google and Amazon by offering a service that is:

  • Abstracted out to be completely elastic, allowing computing power to be dynamically sized and scaled at runtime so that you can handle peak loads without paying for more than you need.
  • Geo-located, able to spread a cloud-based deployment across the globe.
  • Fully fault tolerant, with all data and software in the cloud stored in several places and always spread between servers at different locations.
  • An infrastructure that can easily be connected to on-premises software through an internet service bus.
  • And the list goes on…

With Windows Azure will come many interesting deployment tools that will make deploying and managing applications in the cloud easy.  Microsoft has created many new and interesting technologies to create what they call “Fabric”.  This “Fabric” is the abstraction that sits on top of the actual servers that are running inside their many datacenters.  You can think of this “Fabric” as an abstraction that is a few levels higher than virtual machines.

From all of the various sessions on Azure and cloud-based services, it was very clear that the new world of cloud-based software is going to require us to think differently about how we architect, design and implement our software so that it is better suited to a cloud-based paradigm, which, in my opinion, is where things will be headed in the next ten years.  Many of these practices involve the proper decoupling of software as well as other practices that are just part of good software design.

As part of this new cloud-based initiative, it is clear that WCF and WF (Windows Workflow) are going to be two very fundamental enabling technologies. Up to this point, we haven’t seen too much usage of Windows Workflow, but it is clear that it will be a large part of many cloud-based applications.  As part of this, Microsoft announced a new server product called “Dublin,” which is an application server built using WCF services that front Windows Workflows.  This server product has many advances in Windows Workflow hosting that make it an ideal choice for hosting workflow in the cloud.

There were also talks given on the new data technologies that expand SQL Server into the cloud, dubbed “SQL Services”.  One of the new services is SQL Server Data Services.  This service provides a new way to retrieve data over the internet using REST-based protocols.  These REST protocols provide ways to query and update data using standard HTTP verbs such as GET, POST, PUT and DELETE, and they are built on the idea that specific data resources are uniquely addressable using URIs.  REST-based protocols are built to scale like the internet itself.  It was clear that in addition to SOAP-based protocols, Microsoft is heavily investing in REST-based protocols as well.
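
As a rough illustration of that REST style (the service URI below is hypothetical, not an actual SQL Services endpoint), retrieving a resource is just an HTTP GET against its address:

using System;
using System.IO;
using System.Net;

public class RestClientExample
{
    public static void Main()
    {
        // Hypothetical, uniquely addressable data resource (not a real endpoint).
        var request = (HttpWebRequest)WebRequest.Create(
            "https://example-data-service.net/customers/42");

        // GET retrieves the resource; POST, PUT, and DELETE would
        // create, update, or remove it at the same address.
        request.Method = "GET";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}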

I also attended a session on storing scalable data in the cloud. From what I could gather, one will need to rethink how data is organized in order to take full advantage of the scalability that the cloud offers.  What was described was very close to what Amazon does with its data storage services.  Basically, there are three storage items in cloud-based storage services: blobs, tables and queues.  From what I could understand, the table storage was pretty basic.  Much of the storage seemed to center around entities, which play nicely into Microsoft’s Entity Framework (which was just released with .NET 3.5 Service Pack 1).  They alluded to providing more relational storage in the cloud in the future, but none of the cloud storage options had any relational capabilities.  I would imagine that much of this is because they want to provide a massively scalable and reliable data platform, and the relational aspects would make this task much more difficult (which is the same route that Amazon appears to have taken).  I’m not sure where relational cloud storage would fit in, but I would think that they will need to have a story here.

In addition to recognizing how difficult it is for companies to write highly scalable software and the need for cloud-based computing, Microsoft has put a considerable investment into “Oslo,” which is a software modeling technology.  In recent years, software has become more and more complex.  It is also recognized that there are many aspects of software development that need to be coordinated. These aspects include business analysis, software architecture, software design and implementation, software deployment, etc.

Over the past 20 years, there have been many attempts to model the software development process.  Many of these attempts, like Rational Rose, have had very limited success.  With “Oslo,” Microsoft has taken software modeling to the extreme.  This appears to be one of Microsoft’s most ambitious projects in recent years.  In “Oslo,” all of the modeling is stored in a database repository.  On top of this repository there is a highly customizable user interface that is used to explore it.  The user interface is very interesting and provides in-depth views into a given software application and how it connects to other applications and services.

In addition to providing a graphical view of software, “Oslo” also provides a new modeling language called “M” that can be used to model software.  “M” also makes it easy to create Domain Specific Languages (DSLs) that simplify building applications.

As I mentioned, all of the modeling is stored inside a Sql Server database repository.  Runtimes that are meant to drive WCF/WF etc. are then driven from the data stored in the Oslo repository.

While I was very impressed by what I saw in “Oslo” it appeared to be a long way off.  Done right, this could really change the way we write and think about software in the future although it will be many years, in my opinion, before “Oslo” will be able to make that type of impact.

Well…that’s the long drawn out overview of PDC.  I will add more posts that target the specific sessions I attended along with links to the online videos of those sessions.