Understanding Disposables In .NET Dependency Injection – Part 3

Following on from Parts 1 and 2, in this final part of the series, I move on to dealing with types that you do not have the source code for and therefore cannot change directly to hide the Dispose method using the techniques I have described in the previous posts.

Background

In the previous two parts of this series, I have made the assumption that you are able to amend the source code for types that implement IDisposable.

But what happens if you don’t have the source code? This is where some well-known design patterns come in useful.

Design Patterns

When trying to hide disposability from container consumers, the general principle (as shown in the previous parts of this series) is to use an interface that excludes the Dispose method from its definition, so that the consumer only receives the instance via the interface and not as the concrete class, thus hiding the Dispose method.

If we do not have the source code, we need to create an intermediary between the interface that we want to expose and the type that we want to use.

There are four classic design patterns, each a variation on the intermediary theme, that we can use together to achieve our goal.

I will not go into an in-depth description of the patterns here as there are plenty of resources that can do a much better job; however, here is a brief overview.


Adapter and Bridge Patterns

You may already have another interface, either one of your own or from a third party, that achieves the goal but which you do not have the source code for or cannot change as it would break other dependencies. In this case, the Adapter pattern is used to fill the gaps to make the two interfaces work with each other.

If the desired interface does not already exist, this becomes the Bridge pattern, which does the same thing, but the interface design is within your control (whereas the Adapter pattern works with an interface outside your control).

Façade Pattern

The core aim of our intermediary is to remove the Dispose method and in effect simplify the interface. If the third party has a number of members that you are not interested in or that need to be brought together into a single method, the Façade pattern can be used.

Proxy Pattern

The proxy pattern is used to prevent direct access to an object. It usually has an identical interface to the class that it represents.

Decorator Pattern

The decorator pattern is similar to the proxy pattern, but it may add additional functionality to enhance behaviour.


For our purposes, you are likely to use the Adapter pattern if you do not have control over either interface. More likely, though, you will be designing the interface to be used by consumers and will therefore write a custom class that brings together elements of the other three patterns, namely:

  • Bridge – we will be creating one between the desired interface to receive calls and the target interface
  • Façade – we will be simplifying the interface to remove any members not needed
  • Proxy – passing through calls to members of our class on to the instance that we are hiding
  • Decorator – we may be taking the opportunity to do some additional work such as logging calls

Putting It Together

In the example below we have a class SomeDisposable that we do not have the source code for. The class implements IDisposable and has multiple methods, but only one of interest, namely – DoSomething().

Rather than register the class directly, we want to wrap it with an intermediary (sketched after the list below) that will

  • Create an instance of SomeDisposable inside the constructor (as we do not want to register the SomeDisposable class with the container)
  • Implement a simplified interface (façade) of just the one DoSomething() method
  • Have a Dispose method (that is not exposed via the interface) that the container will call to proxy to the inner object’s Dispose() method
  • Decorate the inner DoSomething method with a call to the logger to log that the method has been called and when it has completed
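
The original intermediary listing is not reproduced here, but a minimal sketch of what it might look like is shown below (the interface and class names, and the use of ILogger for the decoration, are my own assumptions rather than the original code):

using System;
using Microsoft.Extensions.Logging;

public interface ISomethingDoer
{
    void DoSomething();
}

public sealed class SomethingDoerWrapper : ISomethingDoer, IDisposable
{
    // Created here rather than registered, so consumers never see the concrete type.
    private readonly SomeDisposable _inner = new SomeDisposable();
    private readonly ILogger<SomethingDoerWrapper> _logger;

    public SomethingDoerWrapper(ILogger<SomethingDoerWrapper> logger)
    {
        _logger = logger;
    }

    public void DoSomething()
    {
        // Decorator: log around the proxied call to the inner instance.
        _logger.LogInformation("DoSomething called");
        _inner.DoSomething();
        _logger.LogInformation("DoSomething completed");
    }

    // Not part of the façade interface, so consumers cannot call it,
    // but the container will still call it when the lifetime ends.
    public void Dispose() => _inner.Dispose();
}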

Once we have the intermediary class, we can register this with the container in the StartUp class with the façade interface as the service type.
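
A sketch of that registration (using the assumed names from above) might be:

public void ConfigureServices(IServiceCollection services)
{
    // Registered against the façade interface only; the container tracks
    // the wrapper and will dispose of it (and the inner instance) for us.
    services.AddSingleton<ISomethingDoer, SomethingDoerWrapper>();
}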

With this in place, consumers can no longer dispose of the SomeDisposable singleton instance directly, as it is hidden inside the intermediary class. The intermediary (and in turn the inner instance) is still disposable by the container, but because consumers only see the façade interface, they cannot call Dispose() directly.

Conclusion

That is the end of this series on preventing consumers from causing problems by disposing of objects that are under the control of the .NET DI container and may have a longer lifetime than the consumer.

I hope it has been of use.

Understanding Disposables In .NET Dependency Injection – Part 2

Following on from Part 1 where I provide an overview of hiding the Dispose method from consumers of the Dependency Injection container, in this part, I move on to dealing with objects that are created outside, but registered with the DI container.

Background

In part 1, I included the table below of extension methods that can be used to register services and implementations with the Microsoft DI container which indicates which of these will automatically dispose of objects that implement the IDisposable interface.

Method | Automatic Disposal
Add{LIFETIME}<{IMPLEMENTATION}>() | Yes
Add{LIFETIME}<{SERVICE}, {IMPLEMENTATION}>() | Yes
Add{LIFETIME}<{SERVICE}>(sp => new {IMPLEMENTATION}) | Yes
AddSingleton<{SERVICE}>(new {IMPLEMENTATION}) | No
AddSingleton(new {IMPLEMENTATION}) | No

In the last two of these methods, the instance is explicitly instantiated using the new keyword. Note, this style of registration is only available for Singletons and is not supported for Scoped or Transient lifetimes.

Wherever possible, if the class being registered implements IDisposable, I would encourage you not to use the last two methods, and instead use the third method where the instantiation takes place inside a lambda expression. This simple change of signature ensures that the object will be disposed by the container when its lifetime comes to an end.

As described in Part 1, I would also avoid registering the class as the service type as this will allow the consumer to call the Dispose method which can have unintended consequences, especially for singleton and scoped lifetime objects where the object may still be required by other consumers.

If for some reason it is not possible to use one of the other methods and a new instance must be instantiated outside of the container, there are ways of ensuring that the object is disposed of by the container.

Disposing Instantiated Singletons

If instances are instantiated as part of the container registration process (for example within the StartUp class’s ConfigureServices method), there is no natural place to dispose of these objects and therefore they will live for the duration of the application.

In most cases, this is not a major problem as, if written correctly, they will be disposed of when the application ends. However, we should try to explicitly clean up after ourselves when using unmanaged resources, but in this case, how?

When using ASP.NET Core, the Host takes care of registering a number of services for you that you may not be aware of.

One of the services that gets registered is IHostApplicationLifetime. This has three properties that return cancellation tokens, which can be used to register callback functions that will be triggered when the host application has started, is about to stop, and has finally stopped.

With this in place, we can register a callback in our StartUp class to dispose of our objects when the application is about to stop.

We can approach this in one of two ways, depending on where and how the object has been created during registration.

Startup Class Scoped Variable

If the instance has been created in the constructor of the StartUp class and assigned to a class level variable, this variable can be used to dispose of the object within the callback registered with the ApplicationStopping token returned from the IHostApplicationLifetime.

If you have several disposable singleton instances created in this manner, they can all be disposed of within the one method.
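
As an illustration (the service names here are hypothetical), the field created in the constructor gives the callback something to dispose of:

public class Startup
{
    // Created outside the container, so the container will not dispose of it automatically.
    private readonly SomeDisposableService _service = new SomeDisposableService();

    public void ConfigureServices(IServiceCollection services)
    {
        // Instance registration: no automatic disposal by the container.
        services.AddSingleton<ISomeService>(_service);
    }

    public void Configure(IApplicationBuilder app, IHostApplicationLifetime lifetime)
    {
        // Dispose of the instance(s) we created when the application is about to stop.
        lifetime.ApplicationStopping.Register(() => _service.Dispose());
    }
}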

Instance Created Inside the ConfigureServices Method

If the instance has been created ‘on-the-fly’ within the registration and not captured in a class variable, we will need to obtain that instance from the container in order to dispose of it.

In order to obtain the instance, we need to request that instance within the Configure method in the StartUp class which gets called after the container has been created.

If you have several disposable singleton instances created in this manner, they can all be disposed of within the one method. However, obtaining these instances becomes a bit messy as you need to request them all either in the Configure method’s parameter signature (which can become quite lengthy if more than a couple of types are required) or use the IApplicationBuilder’s ApplicationServices property to get access to the container’s IServiceProvider instance and use the GetService method to obtain the container-registered instances.
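
A sketch of the second of those options (again with hypothetical service names) might be:

public void Configure(IApplicationBuilder app, IHostApplicationLifetime lifetime)
{
    // GetRequiredService<T>() is the extension method from Microsoft.Extensions.DependencyInjection.
    // Ask the container for the instance that was new-ed up during registration...
    var service = app.ApplicationServices.GetRequiredService<ISomeService>();

    // ...and dispose of it when the application is about to stop.
    lifetime.ApplicationStopping.Register(() => (service as IDisposable)?.Dispose());
}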

Next Time

In Part 3 of this series, I will discuss hiding the Dispose method using intermediate classes based on common design patterns.

C# 9 Record Factories

With the release of .NET 5.0 and C# 9 coming up in the next month, I have been updating my Dependency Injection talk to incorporate the latest features of the framework and language.

One of the topics of the talk is using the “Gang of Four” Factory pattern to help with the creation of class instances where some of the data is coming from the DI container and some from the caller.

In this post, I walk through using the Factory Pattern to apply the same principles to creating instances of C# 9 records.

So How Do Records Differ from Classes?

Before getting down into the weeds of using the Factory Pattern, here is a quick overview of how C# 9 record types differ from classes.

There are plenty of articles, blog posts and videos available on the Internet that will tell you this, so I am not going to go into much depth here. However, here are a couple of resources I found useful in understanding records vs classes.

Firstly, I really like the video from Microsoft’s Channel 9 ‘On.NET’ show where Jared Parsons explains the benefits of records over classes.

Secondly, a shout out to Anthony Giretti for his blog post https://anthonygiretti.com/2020/06/17/introducing-c-9-records/ that goes into detail about the syntax of using record types.

From watching/reading these (and other bits and bobs around the Internet), my take on record types vs. classes is as follows:

  • Record types are ideal for read-only ‘value types’ such as Data Transfer Objects (DTOs) as they are immutable by default (especially if using the positional declaration syntax)
  • Record types automatically provide (struct-like) value equality, so there is no need to write your own boilerplate for overriding the default (object reference) equality behaviour
  • Record types automatically provide a deconstruction implementation, so you don’t need to write your own boilerplate for that either.
  • There is no need to explicitly write constructors: the positional parameter syntax generates one for you, and with the nominal (property) syntax object initialisers can be used instead

The thing that really impressed me when watching the video is the way that the record syntax is able to replace a 40 line class definition, with all the aforementioned boilerplate code, with a single declaration.
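
For example (my own illustration rather than the one from the video), a single positional declaration gives you a constructor, value equality, a ToString override and deconstruction:

public record Person(string FirstName, string LastName);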

Why Would I Need A Factory?

In most cases where you may choose to use a record type over using a class type, you won’t need a factory.

Typically you will have some framework doing the work for you, be it a JSON deserialiser, Entity Framework or some code generation tool to create the boilerplate code for you (such as NSwag for creating client code from Swagger/OpenAPI definitions or possibly the new C#9 Source Generators).

Similarly, if all the properties of your record type can be derived from dependency injection, you can register your record type with the container with an appropriate lifetime (transient or singleton).

However, in some cases you may have a need to create a new record instance as part of a user/service interaction by requiring data from a combination of

  • user/service input;
  • other objects you currently have in context;
  • objects provided by dependency injection from the container.

You could instantiate a new instance in your code, but to use Steve ‘Ardalis’ Smith‘s phrase, “new is glue” and things become complicated if you need to make changes to the record type (such as adding additional mandatory properties).

In these cases, try to keep in line with the Don’t Repeat Yourself (DRY) principle and Single Responsibility Principle (SRP) from SOLID. This is where a factory class comes into its own as it becomes a single place to take all these inputs from multiple sources and use them together to create the new instance, providing a simpler interaction for your code to work with.

Creating the Record

When looking at the properties required for your record type, consider

  • Which parameters can be derived by dependency injection from the container and therefore do not need to be passed in from the caller
  • Which parameters are only known by the caller and cannot be derived from the container
  • Which parameters need to be created by performing some function or computation using these inputs that are not directly consumed by the record.

For example, a Product type may have the following properties

  • A product name – from some user input
  • A SKU value  – generated by some SKU generation function
  • A created date/time value – generated at the time the record is created
  • The name of the person who has created the value – taken from some identity source.

The record is declared as follows
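
The original declaration is not reproduced here, but based on the list above it would be something along these lines (the exact property names are my assumption):

public record Product(string ProductName, string Sku, DateTime CreatedUtc, string CreatedBy);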

You may have several places in your application where a product can be created, so adhering to the DRY principle, we want to encapsulate the creation process in a single place.

In addition, the only property coming from actual user input is the product name, so we don’t want to drag all these other dependencies around the application.

This is where the factory can be a major help.

Building a Record Factory

I have created an example at https://github.com/stevetalkscode/RecordFactory that you may want to download and follow along with for the rest of the post.

In my example, I am making the assumption that

  • the SKU generation is provided by a custom delegate function registered with the container (as it is a single function and therefore does not necessarily require a class with a method to represent it – if you are unfamiliar with this approach within dependency injection, have a look at my previous blog about using delegates with DI)
  • the current date and time are generated by a class instance (registered as a singleton in the container) that may have several methods to choose from as to which is most appropriate (though again, for a single method, this could be represented by a delegate)
  • the user’s identity is provided by an ‘accessor’ type that is a singleton registered with the container that can retrieve user information from the current AsyncLocal context (E.g. in an ASP.NET application, via IHttpContextAccessor’s HttpContext.User).
  • All of the above can be injected from the container, leaving the Create method only needing the single parameter of the product name (one possible shape for these abstractions is sketched after this list)
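
Those assumptions might translate into shapes like the following (the names are mine, and the date/time provider is shown as a delegate for brevity; the accompanying repository may differ):

// Registered as a delegate rather than an interface with a single method.
public delegate string SkuGenerator();

// The implementation is supplied directly in the registration code (see below).
public delegate DateTime GetCurrentTimeUtc();

// Wraps access to the current user's identity (e.g. via IHttpContextAccessor).
public interface IUserIdentityAccessor
{
    string GetCurrentUserName();
}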

(You may notice that there is not an implementation of GetCurrentTimeUtc. This is provided directly in the service registration process in the StartUp class below).

In order to make this all work in a unit test, the date/time generation and user identity generation are provided by mock implementations that are registered with the service collection. The service registrations in the StartUp class will only be applied if these registrations have not already taken place, by making use of the TryAddXxx extension methods.
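
The registrations in the StartUp class might then look something like this (a sketch using the assumed types from above; HttpContextUserAccessor is a hypothetical implementation; the TryAdd extensions live in Microsoft.Extensions.DependencyInjection.Extensions):

public void ConfigureServices(IServiceCollection services)
{
    SkuGenerator skuGenerator = () => Guid.NewGuid().ToString("N"); // stand-in SKU generation
    GetCurrentTimeUtc clock = () => DateTime.UtcNow;                // provided directly in registration

    // TryAdd means any mock registrations made by a unit test take precedence.
    services.TryAddSingleton(skuGenerator);
    services.TryAddSingleton(clock);
    services.TryAddSingleton<IUserIdentityAccessor, HttpContextUserAccessor>();
    services.TryAddSingleton<IProductFactory, ProductFactory>();
}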

Keeping the Record Simple

As the record itself is a value type, it does not need to know how to obtain the information from these participants as it is effectively just a read-only Data Transfer Object (DTO).

Therefore, the factory will need to provide the following in order to create a record instance:

  • consume the SKU generator function referenced in its constructor and make a call to it when creating a new Product instance to get a new SKU
  • consume the DateTime abstraction in the constructor and call a method to get the current UTC date and time when creating a new Product instance
  • consume the user identity abstraction to get the user’s details via a delegate.

The factory will have one method Create() that will take a single parameter of productName (as all the other inputs are provided from within the factory class)
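
A sketch of the factory, using the assumed abstractions from earlier, might be:

public interface IProductFactory
{
    Product Create(string productName);
}

public class ProductFactory : IProductFactory
{
    private readonly SkuGenerator _generateSku;
    private readonly GetCurrentTimeUtc _getUtcNow;
    private readonly IUserIdentityAccessor _userIdentity;

    public ProductFactory(SkuGenerator generateSku, GetCurrentTimeUtc getUtcNow, IUserIdentityAccessor userIdentity)
    {
        _generateSku = generateSku;
        _getUtcNow = getUtcNow;
        _userIdentity = userIdentity;
    }

    public Product Create(string productName) =>
        // The delegates are invoked here, at creation time, not when the factory was constructed.
        new Product(productName, _generateSku(), _getUtcNow(), _userIdentity.GetCurrentUserName());
}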

Hiding the Service Locator Pattern

Over the years, the Service Locator pattern has come to be recognised as an anti-pattern, especially when it requires that the caller has some knowledge of the container.

For example, it would be easy for the ProductFactory class to take a single parameter of IServiceProvider and make use of the GetRequiredService<T> extension method to obtain an instance from the container. If the factory is a public class, this would tie the implementation to the container technology (in this case, the Microsoft .NET Dependency Injection ‘conforming’ container).

In some cases, there may be no way of getting around this due to some problem in resolving a dependency. In particular, with factory classes, you may encounter difficulties where the factory is registered as a singleton but one or more dependencies are scoped or transient.

In these scenarios, there is a danger of the shorter-lived dependencies (transient and scoped) being resolved when the singleton is created and becoming ‘captured dependencies’ that are trapped for the lifetime of the singleton and not using the correctly scoped value when the Create method is called.

You may need to make these dependencies parameters of the Create method (see warning about scoped dependencies below), but in some cases, this then (mis)places a responsibility onto the caller of the method to obtain those dependencies (and thus creating an unnecessary dependency chain through the application).

There are two approaches that can be used in the scenario where a singleton has dependencies on lesser scoped instances.

Approach 1- Making the Service Locator Private

The first approach is to adopt the service locator pattern (by requiring the IServiceProvider as described above), but making the factory a private class within the StartUp class.

This hiding of the factory within the StartUp class also hides the service locator pattern from the outside world as the only way of instantiating the factory is through the container. The outside world will only be aware of the abstracted interface through which it has been registered and can be consumed in the normal manner from the container.
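
As an illustration of this first approach (a sketch using the same assumed types), the factory is a nested private class, so its IServiceProvider dependency never leaks outside the StartUp class:

public partial class Startup
{
    private sealed class LocatorProductFactory : IProductFactory
    {
        private readonly IServiceProvider _provider;

        public LocatorProductFactory(IServiceProvider provider) => _provider = provider;

        public Product Create(string productName)
        {
            // Dependencies are resolved at call time, so nothing short-lived is captured.
            var generateSku = _provider.GetRequiredService<SkuGenerator>();
            var getUtcNow = _provider.GetRequiredService<GetCurrentTimeUtc>();
            var user = _provider.GetRequiredService<IUserIdentityAccessor>();

            return new Product(productName, generateSku(), getUtcNow(), user.GetCurrentUserName());
        }
    }
}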

Approach 2 – Redirect to a Service Locator Embedded Within the Service Registration

The second way of getting around this is to add a level of indirection by using a custom delegate.

The extension methods for IServiceCollection allow for a service locator to be embedded within a service registration by using an expression that has access to the IServiceProvider.

To avoid the problem of a ‘captured dependency’ when injecting transient dependencies, we can register a delegate signature that wraps the service locator thus moving the service locator up into the registration process itself (as seen here) and leaving our factory unaware of the container technology.

At this point the factory may be made public (or left as private) as the custom delegate takes care of obtaining the values from the container when the Create method is called and not when the (singleton) factory is created, thus avoiding capturing the dependency.
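
For example (a sketch; ISkuService and its NextSku method are hypothetical stand-ins for a transient dependency), the delegate registered below closes over the provider and only resolves the transient when it is invoked:

services.AddSingleton<SkuGenerator>(sp =>
    // Resolved each time the delegate is called, not when the singleton factory is built.
    () => sp.GetRequiredService<ISkuService>().NextSku());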


Special warning about scoped dependencies

Wrapping access to transient registered dependencies with a singleton delegate works as both singleton and transient instances are resolved from the root provider. However, this does not work for dependencies registered as scoped lifetimes which are resolved from a scoped provider which the singleton does not have access to.

If you use the root provider, you will end up with a captured instance of the scoped type that is created when the singleton is created (as the container will not throw an exception unless scope checking is turned on).

Unfortunately, for these dependencies, you will still need to pass these to the Create method in most cases.

If you are using ASP.NET Core, there is a workaround (already partially illustrated above with accessing the User’s identity).

IHttpContextAccessor.HttpContext.RequestServices exposes the scoped container but is accessible from singleton services as IHttpContextAccessor is registered as a singleton service. Therefore, you could write a delegate that uses this to access the scoped dependencies. My advice is to approach this with caution as you may find it hard to debug unexpected behaviour.


Go Create a Product

In the example, the factory class is registered as a singleton to avoid the cost of recreating it every time a new Product is needed.

The three dependencies are injected into the constructor, but as already mentioned the transient values are not captured at this point – we are only capturing the function pointers.

It is within the Create method that we call the delegate functions to obtain the real-time transient values that are then used with the caller provided productName parameter to then instantiate a Product instance.
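
From the consumer's point of view (an illustrative controller of my own), only the product name has to be supplied:

public class ProductsController : Controller
{
    private readonly IProductFactory _factory;

    public ProductsController(IProductFactory factory) => _factory = factory;

    [HttpPost]
    public IActionResult Create(string productName) => Ok(_factory.Create(productName));
}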


Conclusion

I’ll leave things there, as the best way to understand the above is to step through the code at https://github.com/stevetalkscode/RecordFactory that accompanies this post.

Whilst a factory is not required for the majority of scenarios where you use a record type, it can be of help when you need to encapsulate the logic for gathering all the dependencies that are used to create an instance without polluting the record declaration itself.

Simplifying Dependency Injection with Functions over Interfaces

In my last post, I showed how using a function delegate can be used to create named or keyed dependency injection resolutions.

When I was writing the demo code, it struck me that the object orientated code I was writing seemed to be bloated for what I was trying to achieve.

Now don’t get me wrong, I am a big believer in the SOLID principles, but when the interface has only a single function, having to create multiple class implementations seems overkill.

This was the ‘aha’ moment you sometimes get as a developer where reading and going to user groups or conferences plants seeds in your mind that make you think about problems differently. In other words, if the only tool you have in your tool-belt is a hammer, then every problem looks like a nail; but if you have access to other tools, then you can choose the right tool for the job.

Over the past few months, I have been looking more at functional programming. This was initially triggered by the excellent talk ‘Functional C#’ given by Simon Painter (a YouTube video from the Dot Net Sheffield user group is here, along with my own talk – plug, plug, here).

In the talk, Simon advocates using functional programming techniques in C#. At the end of the talk, he recommends the Functional Programming in C# book by Enrico Buonanno. It is in Chapter 7 of this book that the seed of this blog was planted.

In short, if your interface has only a single method that is a pure function, register a function delegate instead of an interface.

This has several benefits

  • There is less code to write
  • The intent of what is being provided is not masked by having an interface getting in the way – it is just the function exposed via a delegate
  • There is no object creation of interface implementations, so there are fewer memory allocations and initial execution may be faster
  • Mocking is easier – you are just providing an implementation of a function signature without the cruft of having to hand craft an interface implementation or use a mocking framework to mock the interface for you.

So with this in mind, I revisited the demo from the last post and performed the following refactoring:

  • Replaced the interface-returning delegate with a delegate signature that performs the temperature conversion directly
  • Moved the methods from the class implementations to static functions within the startup class (but these could easily live in a new static class)
  • Changed the DI registration to return the result from the appropriate static function instead of using the DI container to find the correct implementation and forcing the caller to execute the implementation’s method.

As can be seen from the two code listings, Listing 1 (Functional) is a lot cleaner than Listing 2 (Object Orientated).

Listing 1
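
The listing itself lives in the accompanying repository; a rough sketch of the functional shape (names assumed) is:

public delegate double KelvinConverter(string scale, double value);

public class Startup
{
    // Pure conversion functions replace the mapper classes.
    private static double FromCelsius(double c) => c + 273.15;
    private static double FromFahrenheit(double f) => (f + 459.67) * 5 / 9;
    private static double FromRankine(double r) => r * 5 / 9;

    public void ConfigureServices(IServiceCollection services)
    {
        // The registered delegate performs the conversion directly; no classes, no interface.
        services.AddSingleton<KelvinConverter>((scale, value) => char.ToUpperInvariant(scale[0]) switch
        {
            'C' => FromCelsius(value),
            'F' => FromFahrenheit(value),
            'R' => FromRankine(value),
            _ => value // already Kelvin
        });
    }
}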

Listing 2
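
Again, only a sketch of the shape rather than the original listing: each conversion is a class implementing an interface, which then has to be registered, resolved and invoked by the caller:

public interface IKelvinMapper
{
    double ToKelvin(double value);
}

public class CelsiusToKelvinMapper : IKelvinMapper
{
    public double ToKelvin(double value) => value + 273.15;
}

// ...plus FahrenheitToKelvinMapper, RankineToKelvinMapper and KelvinToKelvinMapper,
// each registered with the container and selected via a factory delegate,
// as described in the previous two posts.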

Having done this, it also got me thinking about how I approach other problems. An example of this is unit-testable timestamps.

Previously, when I needed to use a timestamp to indicate the current date and time, I would create an interface of ICurrentDateTime that would have a property for the current datetime. The implementation for production use would be a wrapper over the DateTime.Now property, but for unit testing purposes would be a mock with a fixed date and time to fulfill a test criteria.

Whilst not a pure function, the same approach used above can be applied to this requirement, by creating a delegate to return the current date and time and then registering the delegate to return the system’s DateTime.Now.

This achieves the same goal of decoupling code from the system implementation via an abstraction, but negates the need to create an unnecessary object and interface to simply bridge to the underlying system property.
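
A minimal sketch of that idea (the delegate name is my own):

public delegate DateTime CurrentDateTime();

// Production registration: a thin wrapper over the system clock.
CurrentDateTime systemClock = () => DateTime.Now;
services.AddSingleton(systemClock);

// In a unit test, no interface or mocking framework is needed;
// a lambda returning a fixed value will do:
// CurrentDateTime frozenClock = () => new DateTime(2020, 11, 5, 12, 0, 0);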

If you are interested in looking at getting into functional programming while staying within the comfort zone of C#, I highly recommend Enrico’s book.

The demo of both the OO and Functional approaches can be found in the GitHub project at https://github.com/configureappio/NamedDiDemo.


Getting Dependencies by Name or Key using the .NET Core Container (Part 2)

If you have come here from a search engine, I suggest reading Part 1 first to get some background as to the problem that this post solves.

In Part 1, I gave the background as to why you may have a requirement to have a way of resolving instances from a DI container using a string name or some other key.

In this part, I propose a solution that leaves your code ignorant of any DI container and that is easily mockable for unit tests.

Making Use of Strongly Typed Delegates

A key thing to understand about DI containers is that the service type that is registered is exactly that – a type, not just interfaces and classes. Therefore, you can register delegates as well.

The idea of using delegates for resolution is not new.  There is an example in the Autofac documentation and this article on c-sharpcorner.

However, for .NET Core, when you search for something like ‘.NET Core named dependency injection’, you tend to only find the answer buried in StackOverflow answers.

That is why I have written this post, so that I can find the answer later myself, and also hopefully help others find it too.

Func<T> Delegates Vs. Strongly Typed Delegates

Before getting into a detail of using delegates as service types, we need to talk about generic Func delegates vs strongly typed delegates.

The generic Func type was introduced in C# 3.0 and is a generic delegate included in the System namespace that has multiple overloads that accept between zero and sixteen input parameters (denoted T1 to T16) and a single output parameter (TResult). These are convenience types so that as a developer you do not need to repeatedly create your own custom delegates.

For the purposes of retrieving an instance of a type from the DI container, the various Func delegates can be used as the type for service registration and as a parameter to the constructors of classes. E.g. Func<string, IServiceType>.

However, I have two reasons for not using this approach.

  1. If you change the signature of the service registration to a different overload of Func, you have no tools in the IDE to reflect that change throughout all the references (as the dependency resolution does not take place until runtime). Instead you will need to perform a text ‘search and replace’ task across your whole code base (and hope that there are no other uses of the generic signature for other purposes). By using a strongly typed delegate, this is a unique type and therefore, when changing the signature of the delegate, the compiler will indicate when references and instances are broken (or a tool like ReSharper will do the work for you)
  2. Though this is probably a very rare occurrence, you may have a need to have two different factories with the same signature. If the generic Func<T, IServiceType> is used, it will not be possible to differentiate the two factories. With strongly typed delegates, it is the delegate type that is registered and referenced in the constructor, so the same signature can be used with two different delegate types and the DI container will resolve them correctly.

How Does Using Typed Delegates Help?

By using typed delegates, the code that requires the named/keyed dependency is ignorant of the DI container technology that is resolving the instance for it.

This differs from other approaches where either the consuming class itself or an intercepting factory class takes the IServiceProvider as a dependency in its constructor and has logic to resolve the instance itself which defeats the purpose of dependency injection as it is creating a glue between the classes.

By registering the delegate within the DI container registration code, it keeps the codebase (outside of DI registration) ignorant of how the name/key resolution is working.

The Demo

For the rest of this post, I am going to use a test project (which can be found on GitHub at https://github.com/configureappio/NamedDiDemo) that converts temperatures to the Kelvin scale from four different scales – Centigrade, Fahrenheit, Rankine and Kelvin.

To do this, there are four formulas that are needed

Calculation | Formula
Celsius(°C) to Kelvin(K) | T(K) = T(°C) + 273.15
Fahrenheit(°F) to Kelvin(K) | T(K) = (T(°F) + 459.67) × 5/9
Rankine(°R) to Kelvin(K) | T(K) = T(°R) × 5/9
Kelvin(K) to Kelvin(K) | T(K) = T(K)

Each formula is implemented as a mapper class with an implementation of the IKelvinMapper interface as defined below.
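
The interface and mappers are not reproduced from the repository, but a sketch consistent with the formulas above would be:

public interface IKelvinMapper
{
    double ToKelvin(double temperature);
}

public class CelsiusToKelvinMapper : IKelvinMapper
{
    public double ToKelvin(double temperature) => temperature + 273.15;
}

public class FahrenheitToKelvinMapper : IKelvinMapper
{
    public double ToKelvin(double temperature) => (temperature + 459.67) * 5 / 9;
}

public class RankineToKelvinMapper : IKelvinMapper
{
    public double ToKelvin(double temperature) => temperature * 5 / 9;
}

public class KelvinToKelvinMapper : IKelvinMapper
{
    public double ToKelvin(double temperature) => temperature;
}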

Each of the implementations is registered with the DI container

For the caller to retrieve the correct conversion, we need a delegate that uses the input temperature scale in order to retrieve the correct mapper and return the mapper as the IKelvinMapper interface
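
The strongly typed delegate for this might be declared as (the name is my own):

public delegate IKelvinMapper KelvinMapperResolver(string temperatureScale);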

Lastly, a lookup function is registered with the DI container against the delegate type as the service type to perform the lookup of the correct mapper. To keep things simple, the lookup is based on the first (capitalised) letter of the input temperature scale (normally a more robust lookup would be used, but I’ve tried to keep it simple so as not to distract from the core purpose of what I am demonstrating).
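
A sketch of those registrations (the repository code may differ in detail) is:

// Each mapper is registered so the container controls its lifetime.
services.AddSingleton<CelsiusToKelvinMapper>();
services.AddSingleton<FahrenheitToKelvinMapper>();
services.AddSingleton<RankineToKelvinMapper>();
services.AddSingleton<KelvinToKelvinMapper>();

// The lookup function is registered against the delegate type as the service type.
services.AddSingleton<KelvinMapperResolver>(sp => scale =>
{
    switch (char.ToUpperInvariant(scale[0]))
    {
        case 'C': return sp.GetRequiredService<CelsiusToKelvinMapper>();
        case 'F': return sp.GetRequiredService<FahrenheitToKelvinMapper>();
        case 'R': return sp.GetRequiredService<RankineToKelvinMapper>();
        default: return sp.GetRequiredService<KelvinToKelvinMapper>();
    }
});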

With the returned interface implementation, a conversion from the input temperature scale and value can be made to a consistent value in the Kelvin scale.

So How Does It Work?

The demo works on a very simple basis of leveraging the DI container to do all the hard work of determining which mapping converter is required. The caller just needs to know that it uses the delegate to take in the input temperature scale and value and return the value in Kelvins.

Object Instantiation

Depending on the complexity of the type to be instantiated, there are three ways to instantiate the object:

  1. Perform the instantiation with a ‘new’ keyword. This is fine if the constructor of the type has no dependencies and a new instance is required on each occasion. However, I would discourage this approach as it is not an efficient way of managing the lifetimes of the objects and could lead to problems with garbage collection due to long lived objects
  2. Rely on the DI container to instantiate the object. This is the approach taken in the demo. This has two benefits. The first is that if the object can be a singleton, then the DI container takes care of that for you provided the type has been registered as a singleton lifetime when registered. The second is that if the type has dependencies in the constructor, the DI container will resolve these for you
  3. Lastly, if one or more parameters to the type constructor need to be generated from the delegate input (and therefore are not directly resolvable from prior DI registrations), but other parameters can be resolved, then the ActivatorUtilities CreateInstance method can be used to create an instance using a hybrid of provided parameters and using the DI container to resolve those parameters not provided.

This is where you take advantage of the ability to register a service resolution using a lambda expression as it provides access to the IServiceProvider container instance to retrieve instances from the service provider for (2) and (3) above.
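
For case (3), a sketch might look like the following, where ConfigurableKelvinMapper is a hypothetical IKelvinMapper implementation that takes the scale as a constructor argument alongside other injected dependencies:

services.AddSingleton<KelvinMapperResolver>(sp => scale =>
    // The scale comes from the delegate input; any remaining constructor
    // parameters are resolved from the container by ActivatorUtilities.
    ActivatorUtilities.CreateInstance<ConfigurableKelvinMapper>(sp, scale));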

Deciding Which Type to Instantiate

For simple keys, you can use a case statement inside a lambda expression to derive the type required and let the DI container perform the object instantiation for you. (This is the approach I have used in the demo project).

However, if you have many different items or complex keys, you may want to consider other ways of deriving the service type required.

For example, by registering a singleton Dictionary<string, serviceType> to map keys to types, if there are many keys, you may gain a performance boost from the hash lookup algorithm.

Instead of using the out-of-the-box generic dictionary though, I recommend creating a dedicated class that implements IDictionary<string, Type> for the same reasons that I recommend using custom delegates over the generic Func<T> – it removes any ambiguity if there is (deliberately or inadvertently) more than one registration of the generic dictionary.

Care with Scoped Lifetimes

Careful thought needs to be given to the lifetime of both the factory delegate and the object being retrieved from the DI container.

Typically, as factory classes have no state,  they can usually be registered with a singleton lifetime. The same applies to the delegate being used as the factory or to access the factory class.

Where things get complicated is when the service type that the factory is instantiating for you has been registered in the DI container as having a scoped lifetime.

In this case, the resolution within the factory fails as the DI container cannot resolve scoped dependencies for constructor injection inside a singleton or transient lifetime object (in our case, the singleton factory delegate).

There are two ways to get around this problem.

(i) Register the factory delegate as scoped as well

(ii) Change the singleton factory delegate to take an IServiceProvider as part of the function signature. It is then the caller’s responsibility to pass the correctly scoped IServiceProvider to the delegate. However, this effectively takes us back to a service locator pattern again.

Accessing the Delegate for Calling

Now that we have abstracted the mapper instance generation into a delegate function, the client just needs to be able to resolve the instance.

For classes that want to use the mapper (and are also registered with the DI container), this is simply a case of making the delegate a constructor parameter.
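
For example (an illustrative consumer of my own):

public class TemperatureConverter
{
    private readonly KelvinMapperResolver _resolveMapper;

    public TemperatureConverter(KelvinMapperResolver resolveMapper) => _resolveMapper = resolveMapper;

    public double ToKelvin(string scale, double value) => _resolveMapper(scale).ToKelvin(value);
}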

Conclusion

Hopefully, these two posts have provided some insight into how the limitations of the Microsoft DI container to support named or keyed resolutions can be worked around without having to resort to using a third-party container or custom service locator pattern implementation.

Whilst not part of this two-part blog post, I have written another post about simplifying the demo to work in a more functional way by replacing the interface implementations with direct functions to (a) reduce the amount of code and (b) reduce the number of memory allocations required to create the objects.

Getting Dependencies by Name or Key using the .NET Core Container (Part 1)

If you want to go straight to the solution of how to resolve instances by a name or key without reading the background, go to Part 2 of this post.

Background

Many dependency injection container solutions support the concept of labeling a registered dependency with a name or a key that can be used by the container to resolve the dependency required when presented by a dependent class.

For instance, Autofac supports both named and keyed labeling using a syntax like this to register the dependency:

builder.RegisterType<OnlineState>().Named<IDeviceState>("online");

It provides both an explicit method to resolve the dependency, and an attributed way on the constructor parameters of a class.

var r = container.ResolveNamed<IDeviceState>("online");

public class DeviceInfo
{
    public DeviceInfo([KeyFilter("online")] IDeviceState deviceState)
    {
    }
}

However, the container that is provided out of the box with .NET Core does not support named or keyed registrations.

Mix And Match Containers

The Microsoft DI Container is a ‘Conforming Container‘ that provides a common abstraction that can be wired up to other containers such as Autofac, Ninject, StructureMap et al.

As a result, it only supports the lowest common denominator across multiple DI solutions, and more accurately, only what Microsoft deemed as necessary for its purposes.

This means that if you want to use features such as named/keyed resolution, you end up having to mix and match both the Microsoft DI Container and your other container of choice.

Now, most of the big name containers support this and provide methods that register the custom container so it can be retrieved in constructor injection and the custom features accessed using that container’s methods.

However, the downside to this is that

  1. you are losing the common abstraction as you are including code that is specific to a container technology in your code base
  2. if you inject the specific container into a constructor, you are inadvertently being led towards the Concrete Service Locator Anti-Pattern where the specific container  is effectively acting as the service locator for you
  3. you are binding your solution to a particular container technology which needs changing across the codebase if you decide to use a different container.

“I could write an adapter …”

The first and last of these issues can be mitigated by creating an interface for the functionality you require to abstract your code away from a particular container technology and then writing an adapter.

However, you still have a service locator being injected into constructors in order to get to the abstracted functionality (in this case, obtaining an instance based on a name or key).

Why Would I Need to Get An Instance By Name or Key? – Surely it is an Anti-Pattern?

To an extent, I agree that using a name string or key to resolve an instance is effectively a service locator as it is forcing a lookup within the container to retrieve an instance of the required type.

Under normal circumstances, I would be looking at ways to avoid using this way of resolving an instance such as creating alias interfaces.

For example, say I have three different classes FooA, FooB & FooC that all implement an interface IFoo.

I have a class that needs all three implementations to perform some functionality. I have two choices in how I can inject the classes.

I can (a) either explicitly define the three class types as parameters in the constructor of my class, or (b) I can register all three classes as IFoo and define a single parameter of IEnumerable<IFoo>.

The downside to (a) is that I am binding to the concrete implementations which may limit my unit testing if I am unable to access or mock the concrete instances in my unit test project. It also means that if I want to replace one of the classes with a different implementation I would need to change the constructor signature of my class.

The downside to (b) is that the class receives the implementation instances in the order that the container decides to present them (usually the order of registration) and the class code needs to know how to tell the implementations apart. This effectively is moving the service locator inside my class.

To get around this scenario, I could create three interfaces IFooA, IFooB and IFooC that each directly inherits from IFoo. The three class implementations would then be implementations of the appropriate interface and can be registered with the DI container as such.

The receiving class can then explicitly define IFooA, IFooB and IFooC as parameters to the constructor. This is effectively the same as we had before, but makes the class more testable as the interfaces can be easily mocked in unit testing instead of being bound to the concrete classes.
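
In code, that looks something like this (FooA, FooB and FooC being the implementations mentioned above):

public interface IFooA : IFoo { }
public interface IFooB : IFoo { }
public interface IFooC : IFoo { }

public class FooConsumer
{
    public FooConsumer(IFooA fooA, IFooB fooB, IFooC fooC)
    {
        // Each dependency arrives strongly typed; no lookup or casting required.
    }
}

// Registration
services.AddTransient<IFooA, FooA>();
services.AddTransient<IFooB, FooB>();
services.AddTransient<IFooC, FooC>();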

Alternatively, we can still leave the classes registered as implementations of IFoo and have a constructor that takes IEnumerable<IFoo> as its parameter. However, this will mean that our class has to iterate over the enumeration to identify which classes are implementations of IFooA, IFooB & IFooC and then cast as appropriate. It then also has to know how to handle multiple registrations of the three derived interfaces. E.g. what happens if there are three IFooA implementations? Which should be used? The first? The last? Chain all of them?

With all this said, I recently had a situation where I needed to resolve instances from the container using a string as the key, which is what got me thinking about this in the first place.

Data Driven Instance Resolution

Say you have a data source that has child records in multiple different formats. The association is defined by an identifier and a string that indicates the type of data.

In order to correctly retrieve the data and then transform it into a common format suitable for export to another system, you are reliant upon that string to identify the correct mapper to use. In this situation, we need some way of being able to identify a mapper class instance from the identifying string in order to perform the correct mapping work required.

You could create a class mapping factory that maintains a dictionary of the acceptable strings that will be in the data and the type required for that string, then has a method that provides an instance of the type for a particular string. E.g.

public interface IFooFactory
{
    IFoo GetFooByKey(string key);
}

The problem here is that your implementation of IFooFactory becomes a service locator as it will need the IServiceProvider passed to its constructor in order to be able to create the instance required.

Ideally, you want your code to be ignorant of IServiceProvider so that you do not have to mock it in your unit tests.

This is the point where you want your DI container to be able to inject by name or key.

This is where I come to the whole point of this post.

Resolving an Instance by Name from the Microsoft DI Container using a Strongly Typed Delegate.

We want our classes to be completely ignorant of the DI container being used and just be able to dynamically obtain the instance.

This is where our old friend, the delegate keyword can help us, which I describe in more detail in Part 2.

Clean Architecture – Should I Move the Startup Class to Another Assembly?

I was recently listening to an episode of the brilliant .Net Rocks where Carl and Richard were talking to Steve Smith (a.k.a @ardalis) in which he talks about clean architecture in ASP.Net Core.

One of the things discussed was the separation of concerns, where Steve discusses creating an architecture in which you try to break up your application in such a way that hides implementation detail in one project from another consuming project. Instead, the consuming project is only aware of interfaces or abstract classes from shared libraries, from which instances are created by the dependency injection framework in use.

The aim is to try and guide a developer of the consuming project away from ‘new-ing’ up instances of a class from outside the project. To use Steve’s catch phrase, “new is glue”.

I was listening to the podcast on my commute to work and it got me thinking about the project I had just started working on. So much so, that I had to put the podcast on pause to give myself some thinking time for the second half of the commute.

What was causing the sparks to go off in my head was about how dependencies are registered in the Startup class in ASP.Net Core.

By default, when you create a new ASP.Net Core project, the Startup class is created as part of that project, and it is here that you register your dependencies. If your dependencies are in another project/assembly/Nuget package, it means that the references to wherever the dependency is has to be added to the consuming project.

Of course, if you do this, that means that the developer of the consuming project is free to ‘new up’ an instance of a dependency rather than rely on the DI container. The gist of Steve Smith’s comment in the podcast was do what you can to help try to prevent this.

When I got to work, I had a look at the code and pondered about whether the Startup class could be moved out to another project. That way the main ASP.Net project would only have a reference to the new project (we’ll call it the infrastructure project for simplicity) and not the myriad of other projects/Nugets. Simple huh? Yeah right!

So the first problem I hit was all the ASP/MVC plumbing that would be needed in the new project. When I copied the Startup class to the new project, Visual Studio started moaning about all the missing references.

Now when you create a new MVC/Web.API project with .Net Core, the VS template uses the Microsoft.AspNetCore.All meta NuGet package. For those not familiar with meta packages, these are NuGet packages that bundle up a number of other NuGet packages – and Microsoft.AspNetCore.All is massive. When I opened the nuspec file from the cache on my machine, there were 136 dependencies on other packages. For my infrastructure project, I was not going to need all of these. I was only interested in the ones required to support the interfaces, classes and extension methods I would need in the Startup class.

Oh boy, that was a big mistake. It was a case of adding all the dependencies I would actually need one by one to ensure I was not bringing any unnecessary packages along for the ride. Painful, but I did it.

So I made all the updates to the main MVC project required to use the Startup class from my new project and remove the references I previously had to other projects (domain, repository etc) as this was the point of the exercise.

It all compiled! Great. Pressed F5 to run and … hang on what?

404 when MVC controller not found

After a bit of head scratching, I realised the problem was that MVC could not find the controller. But why?

At this point, I parked my so-called ‘best practice’ changes as I did not want to waste valuable project time on a wild goose chase.

This was really bugging me, so outside of work, I started to do some more digging.

After reading some blogs and looking at the source code in GitHub, the penny dropped. ASP.Net MVC makes the assumption that the controllers are in the same assembly as the Startup class.

When the Startup class is registered with the host builder, it sets the ApplicationName property in HostingEnvironment instance to the name of the assembly where the Startup class is.

The ApplicationName property of the IHostingEnvironment instance is used by the AddMvc extension to register the assembly where controllers can be found.

Eventually, I found the workaround from David Fowler in an answer to an issue on GitHub. In short, you need to use the UseSetting extension method on the IWebHostBuilder instance to change the assembly used in the ApplicationName property to point to where the controllers are. In my case this was as follows:

UseSetting(WebHostDefaults.ApplicationKey, typeof(Program).GetTypeInfo().Assembly.FullName)

Therefore, without this line redirecting the application name to the correct assembly, if the controllers are not in the same assembly as the Startup class, that’s when things go wrong – as I found.

With this problem fixed, everything fell into place and started working correctly.

However, with this up and running, something did not feel right about it.

The solution I had created was fine if all the dependencies are accessible from the new Infra project, either directly within the project or by referencing other projects from the Infra project. But what if I have some dependencies in my MVC project I want to add to the DI container?

This is where my thought experiment broke down. As it stood, it would create a circular reference of the Infra project needing to know about classes in the main MVC project which in turn referenced the Infra project. I needed to go back to the drawing board and think about what I was trying to achieve.

I broke the goal into the following thoughts:

  1. The main MVC project should not have direct references to projects that provide services other than the Infra project. This is to try to prevent developers from creating instances of classes directly
  2. Without direct access to those projects, it is not possible for the DI container to register those classes either if the Startup is in the main MVC project
  3. Moving the Startup and DI container registration to the Infra project will potentially create circular references if classes in the MVC project need to be registered
  4. Moving the Startup class out of the main MVC project creates a need to change the ApplicationName in the IHostingEnvironment for the controllers to be found
  5. Moving the Startup class into the Infra project means that the Infra project has to have knowledge of MVC features such as routing etc. which it should not really need to know as MVC is the consumer.

By breaking down the goal, it hit me what is required!

To achieve the goal set out above, a hybrid of the two approaches is needed whereby the Startup and DI container registration remain in the main MVC project, but registration of classes that I don’t want to be directly accessed in the MVC project get registered in the Infra project so access in the MVC project is only through interfaces, serviced by the DI container.

To achieve this, all I needed to do was make the Infra project aware of DI registration through the IServiceCollection interface and extension methods, but create a method that has the IServiceCollection injected into it from the MVC project that is calling it.

Startup Separation

The first part of the process was to refactor the work I had done in the Startup class in the Infra project and create a public static method to do that work, taking the dependencies from outside.

The new ConfigureServices method takes an IServiceCollection instance for registering services from within the infrastructure project, and also an IMvcBuilder as well, so that any MVC related infrastructure tasks that I want to hide from the main MVC project (and not dependent on code in the MVC project) can also be registered.
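
A sketch of that static method (the filter, repository and class names are my own placeholders rather than the code from the repo) might be:

using FluentValidation.AspNetCore;
using Microsoft.Extensions.DependencyInjection;

public static class InfrastructureRegistration
{
    public static void ConfigureServices(IServiceCollection services, IMvcBuilder mvcBuilder)
    {
        // Implementations the MVC project should only ever see via their interfaces.
        services.AddScoped<IOrderRepository, OrderRepository>(); // hypothetical service

        // MVC-related infrastructure hidden away from the main project:
        // a global validation filter and the FluentValidation framework.
        mvcBuilder.AddMvcOptions(options => options.Filters.Add<ValidateModelStateFilter>());
        mvcBuilder.AddFluentValidation();
    }
}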

In the example above, I add a custom validation filter (to ensure all post-backs check that the ModelState is valid rather than this being done in each Action in the MVC controllers) and add the FluentValidation framework for domain validation.

To make things a bit more interesting, I also added an extension method to use Autofac as the service provider instead of the out of the box Microsoft one.

With this in place, I took the Startup class out of the Infra project and put it back into the MVC project and then refactored it so that it would do the following in the ConfigureServices method:

  • Perform any local registrations, such as AddMvc and any classes that are in the MVC project
  • Call the static methods created in the Infra project to register classes that are hidden away from the MVC project and use Autofac as the service provider.

I ended up with a Startup class that looked like this:
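
The original listing is not reproduced here, but its shape was roughly as follows (a sketch with placeholder names; the repo uses an Autofac extension method rather than the inline container code shown here):

using Autofac;
using Autofac.Extensions.DependencyInjection;

public class Startup
{
    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        // Local registrations: MVC itself and classes that live in the MVC project.
        IMvcBuilder mvcBuilder = services.AddMvc();
        services.AddScoped<IViewModelBuilder, ViewModelBuilder>(); // hypothetical MVC-project class

        // Everything the MVC project should not reference directly is registered by the Infra project.
        InfrastructureRegistration.ConfigureServices(services, mvcBuilder);

        // Use Autofac as the service provider instead of the built-in container.
        var builder = new ContainerBuilder();
        builder.Populate(services);
        return new AutofacServiceProvider(builder.Build());
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvc();
    }
}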

The Full Example

My description of the problem and my solution above only really scratches the surface, but hopefully it is of use to someone. It is probably better to let the code speak for itself, and for this I have created a Git repo with three versions of an example project which show the three different approaches to the problem.

First is the out-of-the-box do everything in the main project

Then there is the refactoring to do all the registration in the Infra project

Lastly, there is the hybrid where the Startup is in the main project, but delegates registration of non-MVC classes to the Infra project.

The example projects cover other things I plan on blogging about, so are a bit bigger than just dealing with separating the Startup class.

The repo can be found at https://github.com/configureappio/SeparateStartup

For details of the projects, look at the Readme.md file in the root of the repo.

Conclusion

In answer to the question posed in the title of this post, my personal view is that the answer is – “No” … but I do think that extracting out a lot of plumbing away from the Startup into another assembly does make things cleaner and achieves the goal of steering developers away from creating instances of classes and instead, relying on using the DI container to do the work it is intended for. This then helps promote SOLID principles.

Hopefully, the discussion of the trials and tribulations I had in trying to completely move the Startup.cs class show how painful this can be and how a hybrid approach is more suitable.

The underlying principle of using a clean code approach is sound when approached the correct way, by thinking through the actual goal rather than concentrating on trying to fix or workaround the framework you are using.

The lessons I am taking away from my experiences above are:

  • I am a big fan of clean architecture, but sometimes it is hard to implement when the frameworks you are working with are trying to make life easy for everyone and make assumptions about your code-base.
  • It is very easy to tie yourself up in knots when you don’t know what the framework is doing under the bonnet.
  • If in doubt, go look at the source code of the framework, either through Git repos or by using the Source Stepping feature of Visual Studio.
  • Look at ‘what’ you are trying to achieve rather than starting with the ‘how’ – in the case above, the actual goal I was trying to achieve was to abstract the dependency registration out of Startup rather than jumping straight in with ‘move whole of the Startup.cs’.

Hiding Secrets in appsettings.json – Using a Bridge in your ASP.Net Core Configuration (Part 4)

This is part 4 of a series where I have been looking at moving to a SOLID approach of implementing configuration binding in ASP.Net Core using a bridging class to remove the need for consumers of the configuration object to use IOptions<T> or IOptionsSnapshot<T>. If you have arrived at this page from a search engine, I recommend looking at the previous posts Part 1, Part 2 and Part 3 before moving onto this one.

In this post I move onto looking at injecting some functionality into the bridge class to decrypt settings and validate the settings read. Lastly I show registering the bridge class via multiple fine grained interfaces.

To follow along with this post, I suggest you download the full solution source code from the Github repo at https://github.com/configureappio/ConfiguarationBridgeCrypto as there is far too much code to display in this post.

Adding More DI Services

I will start with the changes to the Startup.cs ConfigureServices method which gives a structure to the changes we will be making.

The main highlights are:

  • A factory class is registered via its interface to decrypt values read from the settings
  • A class via its interface is registered to validate the settings
  • The bridge class is registered via an aggregate interface
  • A resolution lambda is registered for each of the component interfaces that make up the aggregate interface.

The classes and interfaces are now looked at in more detail below.

Encrypted Settings

For the purpose of this demonstration, I am assuming that, for one reason or another, the standard secure configuration providers such as Azure Key Vault cannot be used, so we are having to deal with encrypting the settings ourselves.

Health Warning!!!

In this demo, the encrypted settings are in the main appsettings.json. DO NOT DO THIS IN THE REAL WORLD!  Stick to the mantra that you should not put any secrets in your code source control. Always think, “Would I be OK with the source code repo going open source?”

In the source code, I have included code to read from an external file outside of the web code location so that the secrets are kept out of source control, but the settings could equally come from environment variables or the command line. If you are copying the source code that accompanies this post, I suggest copying the appsettings.json to the location shown and keeping it outside of source control. It is up to you where to store it.

To keep things clean, all encrypted values are held in a dictionary within the MyAppSettings class, in a property called Secrets. It will be the responsibility of the bridge class to decrypt the secrets and surface them through the properties exposed via the interfaces (more on this later).

The appsettings.json and matching MyAppSettings class will therefore look like this:
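
As a rough sketch (the non-secret property and the example keys are illustrative; in the real file the hashed keys and encrypted values are opaque Base64 strings):

    // appsettings.json (illustrative shape)
    // {
    //   "MyAppSettings": {
    //     "ApplicationName": "My Application",
    //     "Secrets": {
    //       "<Base64 hash of 'SqlConnectionString'>": "<Base64 AES-encrypted value>",
    //       "<Base64 hash of 'OracleConnectionString'>": "<Base64 AES-encrypted value>"
    //     }
    //   }
    // }

    using System.Collections.Generic;

    public class MyAppSettings
    {
        // Plain, non-secret values stay as ordinary properties
        public string ApplicationName { get; set; }

        // All encrypted values live in a single dictionary keyed by hashed name
        public Dictionary<string, string> Secrets { get; set; }
    }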

At this point, the DI container has been configured to return IOptionsSnapshot<MyAppSettings> by the services.Configure<MyAppSettings> line in the startup code.

The secrets dictionary key/value pairs have been encrypted using a hash for the key and AesManaged for the value. Both have then been Base64-encoded so that they can be represented cleanly in the JSON.

Decrypting the Settings

In order for the bridge class to decrypt the dictionary, we will need a class that will get injected into the bridge class via an ICryptoAlgorithm interface. To keep things flexible, we will use a factory pattern to create the decryptor instance.

In the example code, I register the result of calling the factory as a singleton. However, you may want to register the factory itself and inject that into classes if you need to use multiple cryptographic algorithms or salt/password combinations.
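
A sketch of what the algorithm side might look like. ICryptoAlgorithm is the interface mentioned above, but the member names (Hash and Decrypt) and the AES-based implementation are my illustrative assumptions rather than the exact code in the repo:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    public interface ICryptoAlgorithm
    {
        string Hash(string plainText);
        string Decrypt(string base64CipherText);
    }

    // One possible implementation: AES with key material derived from a salt/password pair
    public class AesCryptoAlgorithm : ICryptoAlgorithm
    {
        private readonly byte[] _key;
        private readonly byte[] _iv;

        public AesCryptoAlgorithm(string password, string salt)
        {
            // Derive a 256-bit key and a 128-bit IV (the salt must be at least 8 bytes long)
            using (var derive = new Rfc2898DeriveBytes(password, Encoding.UTF8.GetBytes(salt), 10000))
            {
                _key = derive.GetBytes(32);
                _iv = derive.GetBytes(16);
            }
        }

        public string Hash(string plainText)
        {
            // Same hashing used when the dictionary keys were written to appsettings.json
            using (var sha = SHA256.Create())
                return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(plainText)));
        }

        public string Decrypt(string base64CipherText)
        {
            var cipherBytes = Convert.FromBase64String(base64CipherText);
            using (var aes = Aes.Create())
            using (var decryptor = aes.CreateDecryptor(_key, _iv))
            {
                var plainBytes = decryptor.TransformFinalBlock(cipherBytes, 0, cipherBytes.Length);
                return Encoding.UTF8.GetString(plainBytes);
            }
        }
    }

The factory's job is then simply to construct one of these from a salt/password pair sourced from somewhere secure, and the result of that call is what gets registered as the singleton.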

Once we have the decryptor registered, we need to apply it to the settings. For this, we will have a SettingsDecryptor class that implements an ISettingsDecrypt interface.

This is registered with the DI container services, ready to be used by the bridge class. The Decrypt method in the example takes a plain-text version of the dictionary key and then (a sketch follows the list):

  • hashes it with the method exposed by the injected decryptor,
  • looks up the hashed key (that is in the appsettings.json)
  • then decrypts the value for that key.
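
A minimal sketch, assuming the method signature takes the plain-text key name and the Secrets dictionary (the exact signature in the repo may differ):

    using System.Collections.Generic;

    public interface ISettingsDecrypt
    {
        string Decrypt(string keyName, IDictionary<string, string> secrets);
    }

    public class SettingsDecryptor : ISettingsDecrypt
    {
        private readonly ICryptoAlgorithm _crypto;

        public SettingsDecryptor(ICryptoAlgorithm crypto) => _crypto = crypto;

        public string Decrypt(string keyName, IDictionary<string, string> secrets)
        {
            // Hash the plain-text key name to match the hashed key stored in appsettings.json
            var hashedKey = _crypto.Hash(keyName);

            // Look up the hashed key and decrypt the value held against it
            return secrets.TryGetValue(hashedKey, out var encryptedValue)
                ? _crypto.Decrypt(encryptedValue)
                : throw new KeyNotFoundException($"No secret found for '{keyName}'");
        }
    }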

So with these pieces in place, we have the components for decrypting the settings in the bridge class.

Before looking at the bridge, we will look at injecting functionality to validate the settings.

Validating Settings

By injecting a settings validator into the bridge class, we have the ability to catch any problems before the rest of our code tries to use the values (encrypted or not).

The class implements an interface that validates the settings and returns a boolean to indicate success or failure. If validation has failed, an AggregateException instance is available that holds one or more validation exceptions.

There are more elegant ways in which this can be approached, but I used this approach for simplicity to illustrate the principle of injecting a validator into the bridge.
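
As a sketch (the interface name ISettingsValidator, its members and the checks themselves are illustrative assumptions; the repo may differ):

    using System;
    using System.Collections.Generic;

    public interface ISettingsValidator
    {
        bool Validate(MyAppSettings settings);
        AggregateException ValidationExceptions { get; }
    }

    public class AppSettingsValidator : ISettingsValidator
    {
        private readonly List<Exception> _errors = new List<Exception>();

        public AggregateException ValidationExceptions => new AggregateException(_errors);

        public bool Validate(MyAppSettings settings)
        {
            _errors.Clear();

            if (string.IsNullOrWhiteSpace(settings.ApplicationName))
                _errors.Add(new ArgumentException("ApplicationName has not been set"));

            if (settings.Secrets == null || settings.Secrets.Count == 0)
                _errors.Add(new ArgumentException("No encrypted secrets have been configured"));

            return _errors.Count == 0;
        }
    }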

Once the validator is registered as a DI service, we are now ready to register the bridge class that takes both the decryptor and validator.

Take It To The Bridge
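
Below is a sketch of how the bridge class might look. The class and property names (AppSettingsResolved, ApplicationName, SqlConnectionString, OracleConnectionString) are illustrative assumptions; the repo code may differ in detail.

    using Microsoft.Extensions.Options;

    public class AppSettingsResolved : IAppSettingsResolved
    {
        private readonly MyAppSettings _settings;
        private readonly ISettingsDecrypt _decryptor;

        public AppSettingsResolved(
            IOptionsSnapshot<MyAppSettings> options,
            ISettingsDecrypt decryptor,
            ISettingsValidator validator)
        {
            _settings = options.Value;
            _decryptor = decryptor;

            // Fail fast: throw the aggregated validation errors before any consumer sees bad values
            if (!validator.Validate(_settings))
                throw validator.ValidationExceptions;
        }

        // Non-encrypted value passed straight through
        public string ApplicationName => _settings.ApplicationName;

        // Encrypted values are decrypted as they are read
        public string SqlConnectionString =>
            _decryptor.Decrypt("SqlConnectionString", _settings.Secrets);

        public string OracleConnectionString =>
            _decryptor.Decrypt("OracleConnectionString", _settings.Secrets);
    }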

The example above is fairly self-explanatory at a high level. The constructor takes the IOptionsSnapshot<MyAppSettings> to get the settings class that has been constructed by the DI service using the Options pattern. This gives us access to the bound object.

We then have a decryptor which is stored as an instance field for use by the property getters to decrypt values read from the settings object.

We then have a validator instance which we call immediately to validate the settings and throw an exception if there is a problem.

The three properties exposed are proxies to the underlying settings class, with decryption taking place where necessary.

Interface Segregation

The bridge class implements the IAppSettingsResolved interface which is an aggregate of three other interfaces.
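
The three component interfaces are the ones listed at the end of this post (IAppSettings, ISqlConnectionString and IOracleConnectionString); the aggregate looks something like this (the property names are my illustrative assumptions):

    public interface IAppSettings
    {
        string ApplicationName { get; }          // non-encrypted values
    }

    public interface ISqlConnectionString
    {
        string SqlConnectionString { get; }      // decrypted SQL Server connection string
    }

    public interface IOracleConnectionString
    {
        string OracleConnectionString { get; }   // decrypted Oracle connection string
    }

    // Aggregate interface implemented by the bridge class
    public interface IAppSettingsResolved : IAppSettings, ISqlConnectionString, IOracleConnectionString
    {
    }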

This has been done to illustrate that the settings can be registered as multiple interfaces to allow for interface segregation as part of the SOLID approach. E.g. if a controller is only interested in the SQL Server connection string, it can just ask for that rather than the full IAppSettingsResolved. This makes it easier to implement just the required functionality in any mocks when unit testing, or in any other implementation you may want to register.
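
In ConfigureServices, the bridge is registered once via the aggregate interface and each fine-grained interface is then resolved from it with a lambda, roughly as follows (a sketch; the repo may differ in detail):

    services.AddTransient<IAppSettingsResolved, AppSettingsResolved>();

    // Each fine-grained interface resolves to the registered bridge
    services.AddTransient<IAppSettings>(sp => sp.GetRequiredService<IAppSettingsResolved>());
    services.AddTransient<ISqlConnectionString>(sp => sp.GetRequiredService<IAppSettingsResolved>());
    services.AddTransient<IOracleConnectionString>(sp => sp.GetRequiredService<IAppSettingsResolved>());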

The registration above uses the service for IAppSettingsResolved to resolve the three other interfaces.

Conclusion

Having described the working parts above, we can come back to the ConfigureServices method to tie it all together.

Here we have done the following:

  • Registered an IOptionsSnapshot<MyAppSettings> using the Configure method to bind the “MyAppSettings” configuration section to an object instance
  • Registered a decryption algorithm
  • Registered a class instance to decrypt the key/value pairs in the Secrets dictionary using the algorithm
  • Registered a class instance to validate the settings
  • Registered a class instance to act as the bridge/proxy to the IOptionsSnapshot<MyAppSettings> and decrypt the values
  • Registered the resolved bridge class using its multiple exposed interfaces for finer grained use

With the last of these in place, our controllers and any other dependent classes can choose whether to get the settings as a whole via IAppSettingsResolved or via one of the finer-grained interfaces:

  • IAppSettings – the non-encrypted values
  • ISqlConnectionString – the decrypted SQL connection string
  • IOracleConnectionString – the decrypted Oracle connection string

Taking Things Further

That wraps up this series of posts for now, but you may want to take things further now that the settings class can have functionality injected into it. Possibilities include:

  • Using connection string builders to create connection strings using multiple properties (some encrypted, some not) from the bound object
  • Using the ICryptoFactory instead of ICryptoAlgorithm to use multiple algorithms for different properties

Don’t forget, the full source code, including a WPF app to encrypt the settings dictionary, can be downloaded from the Github repo at https://github.com/configureappio/ConfiguarationBridgeCrypto

Thanks for reading.

Creating a Bridge to your ASP.Net Core Configuration from your Controller or Razor Page (Part 3)

This is the third in my series of posts looking at how to remove the need for controllers and razor pages to have knowledge of the options pattern in ASP.Net Core.

If you have arrived here from a search engine or link, I would recommend reading the previous two posts which set the background before coming back to this post.

Creating a Bridge to your ASP.Net Core Configuration from your Controller or Razor Page (Part 1)

Creating a Bridge to your ASP.Net Core Configuration from your Controller or Razor Page (Part 2)

In Part 2, I looked at how a lambda expression could be used to act as a bridge between IOptionsSnapshot<T> and T by creating an additional DI service:
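
As a recap, that additional service was essentially a resolution lambda along these lines (a sketch, assuming the settings class is MyAppSettings and the configuration section has the same name):

    // Requires Microsoft.Extensions.DependencyInjection and Microsoft.Extensions.Options
    services.Configure<MyAppSettings>(Configuration.GetSection("MyAppSettings"));

    // Bridge: consumers take MyAppSettings directly and never see IOptionsSnapshot<T>
    services.AddTransient(provider =>
        provider.GetRequiredService<IOptionsSnapshot<MyAppSettings>>().Value);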

If all you are concerned with is ensuring that your controller or razor page does not need to refer to the options pattern, then this is a suitable solution.

However, you may need to do something more exotic with the configuration settings, such as performing a transformation like decrypting some data, or validating the settings before invalid values get injected into a class. This is achievable using the lambda, but it becomes somewhat messy.

Instead, the approach I will describe in this post will be to split the MyAppSettings class from the previous post out into three parts:

  • An interface that defines the properties as read-only values, IMyAppSettings
  • A settings reader class, MyAppSettingsReader, that implements the interface but also has setters so that the values can be mapped into an instance by the Configure extension method
  • A bridge class, MyAppSettingsBridge, that takes the IOptionsSnapshot in its constructor and then presents itself as the interface by using the Value property to get the values that have been read from configuration.

The last part of the jigsaw is then to register the bridge as a transient service with the DI container. From then on, the controllers, razor pages and any other classes that need the settings will just take a parameter of type IMyAppSettings.

Splitting the Class Up

In the previous post, the MyAppSettings class looked like this:
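
For reference, it was a simple settings class along these lines (the property names here are illustrative placeholders rather than the exact ones used in the repo):

    public class MyAppSettings
    {
        public string ApplicationName { get; set; }
        public string SqlConnectionString { get; set; }
    }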

We will now divide it up, starting with the interface:
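
Assuming the same illustrative property names, the interface exposes getters only:

    public interface IMyAppSettings
    {
        string ApplicationName { get; }
        string SqlConnectionString { get; }
    }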

Note that the properties have been defined as read-only. Given that the configuration source(s) are read-only as far as the code is concerned (JSON files, XML files, environment variables etc.), it is unlikely you will have code that needs to change the values.

Then come the two implementations of the interface:
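
A minimal sketch of the pair, using the same illustrative properties as above:

    using Microsoft.Extensions.Options;

    // Simple DTO that the Configure extension method binds the configuration section to,
    // so it needs setters as well as getters
    public class MyAppSettingsReader : IMyAppSettings
    {
        public string ApplicationName { get; set; }
        public string SqlConnectionString { get; set; }
    }

    // The bridge: takes the options wrapper in the constructor and presents itself as IMyAppSettings
    public class MyAppSettingsBridge : IMyAppSettings
    {
        private readonly MyAppSettingsReader _settings;

        public MyAppSettingsBridge(IOptionsSnapshot<MyAppSettingsReader> options)
        {
            _settings = options.Value;
        }

        public string ApplicationName => _settings.ApplicationName;
        public string SqlConnectionString => _settings.SqlConnectionString;
    }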

The MyAppSettingsReader is a simple DTO that the Configure extension method can map the configuration settings to – and therefore does need setters as well as the getters.

This class does not strictly need to implement the interface, as it is there simply to map settings from the configuration into an object. You could include extra properties that act as components to be combined into a single value exposed by a property on the interface, for example the individual parts of a database connection string.

The second class is the bridge itself, MyAppSettingsBridge, which must implement the interface as it will be registered with the DI container. Note that the constructor takes the IOptionsSnapshot as the parameter, and the class then acts as the go-between for the properties of the interface.

In the example class above, the class is effectively doing the same as the lambda expression from the previous post by calling the Value property on the options object.

However, by using a class, you can add more functionality. Taking the database connection example again, you could have individual properties for the server name, the database name and so on in the reader class, which can then be passed as parameters into a connection string builder inside the bridge class, with the result exposed as a ConnectionString property on the interface.

In a future post, I will show how values can be encrypted in the configuration settings source and then decrypted by the bridge class.

Wiring It All Up

Now we have the interface, reader and bridge, it is time to wire it all up.

Firstly, we will set up the DI container to bind the configuration into the reader class. This will automatically create a DI service for IOptionsSnapshot<MyAppSettingsReader>. Next, we register the bridge as a transient implementation of the IMyAppSettings interface.
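
A sketch of the registrations, assuming the configuration section is named "MyAppSettings" (adjust to match your appsettings.json):

    // Inside Startup.cs
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // Binding the section also registers IOptionsSnapshot<MyAppSettingsReader>
        services.Configure<MyAppSettingsReader>(Configuration.GetSection("MyAppSettings"));

        // The bridge: consumers only ever ask for IMyAppSettings
        services.AddTransient<IMyAppSettings, MyAppSettingsBridge>();
    }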

Why register as Transient? It is up to you really. If registered as transient, every resolution will get the latest version of the configuration injected into it from IOptionsSnapshot.

If registered as a singleton, the configuration captured from IOptionsSnapshot will not be reloaded on each request.

If you are not worried about being able to reload the configuration without restarting the application, use the singleton registration and also change IOptionsSnapshot to IOptions so that the configuration change monitoring is not required.

Now we are all set with the DI container, so we can change our controller to take IMyAppSettings as a constructor parameter.
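
Something along these lines (HomeController and the ApplicationName property are illustrative; substitute your own controller and settings members):

    using Microsoft.AspNetCore.Mvc;

    public class HomeController : Controller
    {
        private readonly IMyAppSettings _settings;

        // No IOptions/IOptionsSnapshot here, just the settings interface
        public HomeController(IMyAppSettings settings)
        {
            _settings = settings;
        }

        public IActionResult Index()
        {
            ViewData["ApplicationName"] = _settings.ApplicationName;
            return View();
        }
    }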

The code for this post and the previous post is available on GitHub at https://github.com/configureappio/configurebridgedemo1

In the next post, I will look at injecting more functionality into the bridge to decrypt some settings before they get injected into the controller.