Using OpenApiReference To Generate Open API Client Code

This is not one of my usual blogs, but an aide-mémoire for myself that may be of use to other people who are using the OpenApiReference tooling in their C# projects to generate C# client code for HTTP APIs from Swagger/OpenApi definitions.

Background

I have been using Rico Suter’s brilliant NSwag for some time now to generate client code for working with HTTP API endpoints. Initially I was using the NSwag Studio application to create the C# code and placing the output into my project, but I later found I could add the code generation into the build process using the NSwag MSBuild task.

Recently though, I watched the ASP.NET Community Standup with Jon Galloway and Brady Gaster. In the last half hour or so, they discuss the Connected Services functionality in Visual Studio 2019 that sets up code generation of HTTP API endpoint clients for you.

This feature had passed me by and watching the video got my curiosity going as to whether the build chain that I have been using for the last couple of years could be simplified. Especially given that, behind the scenes, it is using NSwag to do the code generation.

Using Connected Services

Having watched the video above, I recommend reading Jon Galloway’s post Generating HTTP API clients using Visual Studio Connected Services. As that post covers the introduction to using Connected Services, I won’t repeat the basics again here.

It is also worth reading the other blog posts in the series written by Brady Gaster:

One of the new things I learnt from the video and blog posts is to make sure that your OpenApi definitions in the source API include an OperationId (which you can set via overloads of the HttpGet, HttpPost (etc.) attributes on your Action method) to help the code generator assign a ‘sensible’ name to the calling method in the generated client code.

Purpose of This Post

Having started by using the Visual Studio dialogs to set up the Connected Service, I found that the default options may not necessarily match how you want the generated code to work in your particular project.

Having had a few years’ experience of using NSwag to generate code, I started to dig deeper into how to get the full customisation I have been used to from using the “full” NSwag experience but within the more friendly build experience of using the OpenApiReference project element.

One Gotcha To Be Aware Of!

If you use the Connected Services dialog in Visual Studio to create the connected service, you will hit a problem if you have used a Directory.Packages.props file to manage your NuGet packages centrally across your solution. The Connected Services wizard (as at time of writing) tries to add specific versions of NuGet packages to the project file.

This is part of a wider problem in Visual Studio (as at time of writing) where the NuGet Package Manager interaction clashes with the restrictions applied in Directory.Packages.props. However, this may be addressed in future versions of the NuGet tooling and Visual Studio, per this Wiki post.

If you are not familiar with using Directory.Packages.props, have a look at this blog post from Stuart Lang.

Manually Updating the OpenApiReference Entry in your Project

There isn’t much documentation around on how to make adjustments to the OpenApiReference element that gets added to your csproj file, so hopefully this post will fill in some of the gaps until documentation is added to the Microsoft Docs site.

I have based this part of the post on looking through the source code at https://github.com/dotnet/aspnetcore/tree/main/src/Tools/Extensions.ApiDescription.Client and therefore some of my conclusions may be wrong, so proceed with caution if making changes to your project.

The main source of information is the Microsoft.Extensions.ApiDescription.Client.props file which defines the XML schema and includes comments that I have used here.

The OpenApiReference and OpenApiProjectReference Elements

These two elements can be added one or multiple times within an ItemGroup in your csproj file.

The main focus of this section is the OpenApiReference element that adds code generation to the current project for a specific OpenApi JSON definition.

The OpenApiProjectReference allows external project references to be added as well. More on this below.

The following attributes and sub-elements are the main areas of interest within the OpenApiReference element.

The props file makes references to other properties that live outside of the element that you can override within your csproj file.

As I haven’t used the TypeScript generator, I have focussed my commentary on the NSwagCSharp code generator.

Include Attribute (Required)

The contents of the Include attribute will depend on which element you are in.

For OpenApiReference this will be the path to the OpenApi/Swagger json file that will be the source the code generator will use.

For OpenApiProjectReference this will be the path to another project that is being referenced.

ClassName Element (Optional)

This is the name to give the class that will be generated. If not specified, the class will default to the name given in the OutputPath parameter (see below).

CodeGenerator Element (Required)

The default value is ‘NSwagCSharp’. This points to the NSwag C# client generator, more details of which below.

At time of writing, only C# and TypeScript are supported, and the value here must end with either “CSharp” or “TypeScript”. Builds will invoke an MSBuild target named “Generate(CodeGenerator)” to do the actual code generation. More on this below.

Namespace Element (Optional)

This is the namespace to assign the generated class to. If not specified, the RootNamespace entry from your project will be used to put the class within your project’s namespace. You may choose to be more specific with the NSwag specific commands below.

Options Element (Optional)

These are the customisation instructions that will be passed to the code generator as command line options. See Customising The Generated Code with NSwag Commands below for details about usage with the NSwagCSharp generator.

One of the problems I have been having with this element is that the contents are passed to the NSwagCSharp command line as-is, and therefore you cannot include line breaks to make it more readable.

It would be nice if there was a new element that allows each command option to be listed as an XML sub-element in its own right that the MSBuild target concatenates and parses into the single command line to make editing the csproj file a bit easier.

Possible Options Declaration
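As an illustration, a declaration using a few of the NSwag commands covered later in this post might look something like this (the file path, class name and namespace are my own examples rather than anything from the tooling’s documentation):

```xml
<ItemGroup>
  <!-- Illustrative example only: adjust the path, class name and namespace to suit your project. -->
  <OpenApiReference Include="OpenAPIs\weather.json">
    <ClassName>WeatherClient</ClassName>
    <CodeGenerator>NSwagCSharp</CodeGenerator>
    <Namespace>MyCompany.Clients</Namespace>
    <Options>/ClientBaseClass:ClientBase /GenerateOptionalParameters:true /ClassStyle:Poco</Options>
  </OpenApiReference>
</ItemGroup>
```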

OutputPath (Optional)

This is the path to place generated code into. It is up to the code generator as to whether to interpret the path as a filename or as a directory.

The default filename or folder name is %(Filename)Client.[cs|ts].

Filenames and relative paths (if explicitly set) are combined with $(OpenApiCodeDirectory). The final value is likely to be a path relative to the client project.

GlobalPropertiesToRemove (OpenApiProjectReference Only – Optional)

This is a semicolon-separated list of global properties to remove in a ProjectReference item created for the OpenApiProjectReference. These properties, along with Configuration, Platform, RuntimeIdentifier and TargetFrameworks, are also removed when invoking the ‘OpenApiGetDocuments’ target in the referenced project.

Other Property Elements

In the section above, there are references to other properties that get set within the props file.

The properties can be overridden within your csproj file, so for completeness, I have added some commentary here.

OpenApiGenerateCodeOptions

If the Options element above is not populated, it defaults to the contents of this element, which is itself empty by default.

As per my comment above for Options, this suffers the same problem of all command values needing to be on the same single line.

OpenApiGenerateCodeOnBuild

If this is set to ‘true’ (the default), code is generated for the OpenApiReference element and any OpenApiProjectReference items before the BeforeCompile target is invoked.

However, it may be that you do not want the generation to run on every single build, as you may have set up a CI pipeline where the target is explicitly invoked (via a command line or as a build target) as a build step before the main build. In that case, the value can be set to ‘false’.
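For example, the override is just a property in the csproj (driving generation from a separate CI step is an assumption for illustration):

```xml
<PropertyGroup>
  <!-- Skip code generation on normal builds; an explicit build step invokes the target instead. -->
  <OpenApiGenerateCodeOnBuild>false</OpenApiGenerateCodeOnBuild>
</PropertyGroup>
```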

OpenApiGenerateCodeAtDesignTime

Similar to OpenApiGenerateCodeOnBuild above, but this time determines whether to generate code at design time as well as being part of a full build. This is set to true by default.

OpenApiBuildReferencedProjects

If set to ‘true’ (the default), any projects referenced in an ItemGroup containing one or many OpenApiProjectReference elements will get built before retrieving that project’s OpenAPI documents list (or generating code).

If set to ‘false’, you need to ensure the referenced projects are built before the current project within the solution or through other means (such as a build pipeline) but IDEs such as Visual Studio and Rider may get confused about the project dependency graph in this case.

OpenApiCodeDirectory

This is the default folder to place the generated code into. The value is interpreted relative to the project folder, unless already an absolute path. This forms part of the default OutputPath within the OpenApiReference above and the OpenApiProjectReference items.

The default value for this is BaseIntermediateOutputPath which is set elsewhere in your csproj file or is implicitly set by the SDK.

Customising The Generated Code with NSwag Commands

Here we get to the main reason I have written this post.

There is a huge amount of customisation that you can do to craft the generated code into a shape that suits you.

The easiest way to get an understanding of the levels of customisation is to use NSwag Studio to play around with the various customisation options and see how they affect the generated code.

Previously when I have been using the NSwag MSBuild task, I have pointed the task to an NSwag configuration json file saved from the NSwag Studio and let the build process get on with the job of doing the code generation as a pre-build task.

However, the OpenApiReference task adds a layer of abstraction that means that just using the NSwag configuration file is not an option. Instead, you need to pass the configuration as command line parameters via the <Options> element.

This can get a bit hairy for a couple of reasons.

  • Firstly, each command has to be added to one single line which can make your csproj file a bit unwieldy to view if you have a whole load of customisations that you want to make (scroll right, scroll right, scroll right again!)
  • Secondly, you need to know all the NSwag commands and the associated syntax to pass these to the Options element.

Options Syntax

Each option that you want to pass takes the form of a command line parameter which

  • starts with a forward slash
  • followed by the command
  • then a colon and then
  • the value to pass to the command

So, something like this: /ClientBaseClass:ClientBase

The format of the value depends on the value type of the command, of which there are three common ones (combined into a single Options line in the sketch after this list):

  • boolean values are set with true or false. E.g. /GenerateOptionalParameters:true
  • string values are set with the string value as-is. E.g. /ClassStyle:Poco
  • string arrays are comma delimited lists of string values. E.g.
    /AdditionalNamespaceUsages:MyNamespace1,MyNamespace2,MyNamespace3
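Putting those three value types together, a single Options line (shown here as a sketch using the commands above) might look like this:

```xml
<Options>/GenerateOptionalParameters:true /ClassStyle:Poco /AdditionalNamespaceUsages:MyNamespace1,MyNamespace2,MyNamespace3</Options>
```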

The following table is a GitHub gist copy from the GitHub repository I have set up for this and which I plan to update over time as I get a better understanding of each command and its effect on the generated code.

At time of writing, many of the descriptions have been lifted from the XML comments in the code from the NSwag repository on GitHub.

(Apologies that the format of the imported markdown here is not great. I hope to improve this later when I can find the time. You may want to go directly to the gist.)

Conclusion

The new tooling makes the code generation build process itself a lot simpler, but there are a few hoops to jump through to customise the code generated.

I’ve been very impressed with the tooling and I look forward to seeing how it progresses in the future.

I hope that this blog is of help to anyone else who wants to understand more about the customisation of the OpenApiReference tooling and plugs a gap in the lack of documentation currently available.

Styles of Writing ASP.NET Core Middleware

In this post, I discuss the differences between convention and factory styles of writing middleware in ASP.NET Core along with the differences in how the instances are created and interact with dependency injection.

Background

If you have been using ASP.NET Core for a while, you are probably familiar with the concept of middleware. If not, I would recommend reading the Microsoft Docs page that provides an overview of how middleware conceptually works.

In this post, I will be delving deeper into how middleware is added into the request-response pipeline with references to the code in UseMiddlewareExtensions.cs.


The link I have used here is to the excellent https://source.dot.net/ web site where you can easily search for .NET Core/5 code by type or member name instead of trawling through the ASP.NET Core GitHub repos.


Before delving into the mechanics of how the pipeline is built and works, let’s start with how the middleware gets registered with your application.

Registering Your Own Custom Middleware

When you need to add middleware to your ASP.NET Core application, it is usually done within your Startup class in the Configure method.

There are three main ways of registering middleware in the Configure method, namely by using the generic and non-generic UseMiddleware extension methods and lastly the Use method on IApplicationBuilder.

Let’s look at each of these in a bit more detail in order of ease of use (which also happens to be a top down order of execution).

UseMiddleware<TMiddleware>

In most cases, you will be encapsulating your middleware into a class which adheres to either a convention or an interface (more on this in a bit). This allows you to reuse your middleware code if it is in its own class library project.

The simplest way to register your middleware class within the Configure method is to use the UseMiddleware<TMiddleware> extension method on the IApplicationBuilder instance that is passed into the Startup’s Configure method.

To call this method, you need to supply a generic parameter <TMiddleware> that is set to the type of your middleware class as shown here.
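A minimal sketch of the registration (the middleware class name is just an example) looks like this:

```csharp
public void Configure(IApplicationBuilder app)
{
    // Adds MyCustomMiddleware (an illustrative name) into the pipeline at this point.
    app.UseMiddleware<MyCustomMiddleware>();
}
```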

The method has an optional parameter (args) for passing in an array of objects that represent parameters that can be passed into the constructor of your class. This can be of use when not all of the constructor parameters can be resolved by dependency injection.

Behind the scenes, this method makes a call to the non-generic UseMiddleware method. To do this, the generic method gets a Type instance from the generic type parameter and passes it (with the optional args array if present) to the non-generic method which does the hard work.

UseMiddleware (Non-Generic Version)

Most of the time, you will be using the generic version, but it is worth knowing there is a non-generic version if you need to derive the type at runtime.

Let’s look at the start of the method to see what is going on.

In this method, the code checks to see if the middleware type passed in as a parameter implements the IMiddleware interface. If it does, the method branches off to use a middleware factory that will use dependency injection to create the instance of the middleware class.

Otherwise, it makes a call to the IApplicationBuilder’s Use method by passing in an anonymous function that uses reflection to create the required request delegate (not shown here as it is a lot of code – if interested, look here).

The IApplicationBuilder Use Method

The IApplicationBuilder.Use method is the ‘raw’ way of registering middleware. The method takes a single parameter of type

Func<RequestDelegate, RequestDelegate>

This delegate may be a method or an anonymous function created by a lambda expression, but must adhere to the delegate signature of

Task RequestDelegate(HttpContext context)
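The Microsoft Docs overview includes an example along these lines (this is my paraphrased sketch rather than the exact code from the docs):

```csharp
app.Use(async (context, next) =>
{
    // Do work that doesn't write to the Response here.
    await next.Invoke();
    // Do logging or other work that doesn't write to the Response here.
});
```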

In this example from the Microsoft Docs web site, we are not actually doing anything, but it gives an example of an implementation where you may want to intercept the pipeline and do something before and/or after the next middleware in the pipeline is executed (by the next.Invoke() in this example).

What is interesting is seeing how the RequestDelegate is expressed as an anonymous function.

The context parameter gives us access to an HttpContext instance which in turn gives us access to the request and response objects. However, you must not make any changes to the response if another middleware is to be invoked, as once the response has started (by a later middleware in the pipeline), any changes to it will cause an exception.

If you do want to return a response, your middleware becomes a terminating or a short-circuiting middleware and does not invoke any further middlewares in the pipeline.


In this post I want to keep the focus on understanding how middleware works with dependency injection lifetimes rather than the mechanics of how the middleware pipeline itself is built and executes each pipeline middleware in turn.

For a deeper dive into how request delegates interact with each other to form a pipeline, I highly recommend Steve Gordon’s Deep Dive – How Is The Asp Net Core Middleware Pipeline Built? which goes deep into the chaining of request delegates.


Middleware Styles : In-Line vs Factory vs Convention

Ultimately, all middleware eventually boils down to becoming a request delegate function that gets passed into the Use method on the IApplicationBuilder implementation instance.

In-Line Middleware

You can write your middleware directly inside an anonymous function (or method) that gets passed to the Use method via a delegate parameter. The call to the Use method is written inside the Configure method of the Startup class (or via the Configure method on the HostBuilder).

This is the approach you usually take if you want to do something simple in your middleware that has no dependencies that would need to be resolved by the dependency injection container, as the delegate does not interact with the container.

That does not mean that you cannot get the container to resolve instances for you. It just means that you have to explicitly ask the container to resolve a service via the RequestServices property on the HttpContext instance (that is passed as a parameter into the delegate).

This moves you into the realms of the service locator anti-pattern, but given that you are usually creating the delegate within the confines of the application startup, this is not so much of a concern as doing it elsewhere in your application landscape.
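As a hedged sketch of that approach (the user-agent logging is my example; note the logger is resolved from the container via RequestServices rather than injected):

```csharp
app.Use(async (context, next) =>
{
    // Service-locator style: ask the container for a logger via the HttpContext.
    // (Requires Microsoft.Extensions.DependencyInjection and Microsoft.Extensions.Logging.)
    var logger = context.RequestServices.GetRequiredService<ILogger<Startup>>();

    logger.LogInformation("Incoming User-Agent: {UserAgent}",
        context.Request.Headers["User-Agent"].ToString());

    await next.Invoke();
});
```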

As the code is all written in-line, it can become a bit of a pain to read and debug if it is doing many things or you have multiple Use entries, as the Configure method becomes a bit unwieldy.

To avoid this, you could extract the contents of the Use anonymous function to their own methods within the Startup class. However, this still limits your middleware to the confines of your project.

In most cases, you will want to make the middleware code self-contained and reusable and take advantage of getting the dependency injection container to do the work of resolving instances without having to explicitly call the container (as we’ve had to in the above example).

This is where the other two styles of writing middleware come into their own by writing classes that encapsulate the functionality that will be called by the pipeline.

Factory Style Middleware

Now I have a confession. I have been using .NET Core for quite a while now and, until recently, this had passed me by.

This may be because I learnt about middleware with ASP.NET Core 1.1 and factory style middleware was not a ‘thing’ until ASP.NET Core 2.0, when it was introduced without any fanfare (it was not included in the What’s New in ASP.NET Core for either version 2.0 or 2.1).

It was not until I recently read Khalid Abuhakmeh’s blog post, which then led me to read this Microsoft Docs page, that I even became aware of factory style middleware.


If you are interested in how the factory style was introduced into ASP.NET Core, have a look at this Git issue which shows how it evolved with a lot of commentary from David Fowler.


The introduction of factory style middleware brings the following benefits over convention style middleware:

  • Activation per request, so allows injection of scoped services into the constructor
  • The explicitly defined InvokeAsync method on the interface moves away from the unstructured Invoke/InvokeAsync methods in convention style middleware that allowed additional parameters to be added
  • It is more aligned with the way you write other classes in your application as it is based on the premise of constructor injection over method parameter injection
  • Whilst there is a default implementation of IMiddlewareFactory that gets registered by default by the default WebHostBuilder, you can write your own implementation of IMiddlewareFactory that may be of use if you are using another container such as AutoFac where you may want to resolve instances using functionality that is not present in the out-of-the-box Microsoft implementation of IServiceProvider

So how do we implement the factory style of middleware?

The key to this is by writing your middleware class as an implementation of the IMiddleware interface which has a single method, InvokeAsync.

The method has two incoming parameters

  • an instance of HttpContext – this holds all the request/response information
  • a RequestDelegate instance that provides the link to the next middleware in the pipeline to call

Inside the method is where you can do whatever it is that you need to do in your custom middleware. This may be either

  • intercepting the request-response chain to do something before and after the next middleware in the chain, such as logging or manipulating the request (or response), or
  • acting as terminating middleware that sends a response back (such as the static file middleware) and therefore does not proceed to the next middleware in the pipeline (unless it cannot handle the request).

Lastly, if you are not terminating the pipeline in your middleware, you need to ensure that the request delegate is invoked, passing the HttpContext instance to it as a parameter.
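Here is a minimal sketch of such a middleware; the class name and the user-agent logging mirror the in-line example above and are my illustration rather than a definitive implementation:

```csharp
public class LogUserAgentMiddleware : IMiddleware
{
    private readonly ILogger<LogUserAgentMiddleware> _logger;

    // Scoped/transient dependencies can be injected here because the factory
    // creates an instance of this class for every request.
    public LogUserAgentMiddleware(ILogger<LogUserAgentMiddleware> logger)
    {
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context, RequestDelegate next)
    {
        _logger.LogInformation("Incoming User-Agent: {UserAgent}",
            context.Request.Headers["User-Agent"].ToString());

        // Not terminating the pipeline, so pass the context on to the next delegate.
        await next(context);
    }
}
```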


In the above example, I have taken the in-line style we used earlier and refactored it into its own factory-style middleware by implementing the IMiddleware interface. By doing this, I could put it into its own project and share between multiple other projects, avoiding the repetition of the in-line style.

We also benefit from not having to get the instance of ILogger<T> from the context parameter (so avoiding the service locator anti-pattern) and also have a proper type to use with the logger (as the Startup type felt wrong in the in-line style).

To use the factory style middleware in your application, there are two things that need to be done.

The first, as with all middleware (regardless of style), is to register the middleware in the Configure method of your Startup class as discussed above (alternatively, you may want to use the Configure method directly on the host builder).

The next step is to go back up in to the ConfigureServices method (either in the StartUp class or directly on the HostBuilder) and register the middleware class with the dependency injection container.

To do this, you need to adhere to two simple rules

  1. You need to register the middleware with the concrete type as the service, not the IMiddleware interface (or any other interface it may implement)
  2. Think very carefully about the lifetime of the registration – for now, we will assume that the lifetime will be scoped (as opposed to transient or singleton), as the purpose of factory middleware is that it is invoked per request.

The second point is important as you could be in danger of creating a captured dependency if you register your middleware as a singleton, but the service types of the parameters in the constructor have shorter lifetimes.
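Putting the two steps together (assuming the LogUserAgentMiddleware sketch from above):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Register the concrete type, not IMiddleware, with a scoped lifetime.
    services.AddScoped<LogUserAgentMiddleware>();
}

public void Configure(IApplicationBuilder app)
{
    // The default IMiddlewareFactory resolves the instance from the container per request.
    app.UseMiddleware<LogUserAgentMiddleware>();
}
```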

Convention Style Middleware

Convention style middleware is the way you will find most examples are written and indeed how most Microsoft written middleware works (and as per the comments in the Git issue, remains for backward compatibility).

As it is well documented in many places on the internet (and of course, in the Microsoft Docs – see Writing Custom ASP.NET Core Middleware), I will concentrate on the key differences with factory style middleware here.

The first obvious difference is that the class does not have to implement a specific interface (e.g. IMiddleware). Instead, you are expected to adhere to a convention of implementing one of two methods: Invoke or InvokeAsync.

  • Both methods have the same signature, the naming choice is up to you, though given that both return a Task, it is usual to append Async to asynchronous method names
  • You cannot have both Invoke and InvokeAsync methods – this will cause an exception to be thrown
  • The first parameter must be of type HttpContext – if this is not present, an exception will be thrown
  • The return type must be a Task

So given that the RequestDelegate for the next delegate is no longer passed into the InvokeAsync as a parameter, we need to obtain it from somewhere else. As we are adhering to a convention, we should stick with constructor injection and have the next delegate injected here.

We are still missing our ILogger<LogUserAgentConventionalMiddleware> instance, but this is where things get a bit more complex.

When it comes to interactions with dependency injection, the key thing to be aware of is that the instance of our convention-style middleware class is not created by the dependency injection container, even if you register the class in ConfigureServices. Instead, the UseMiddleware extension method uses a combination of reflection and the ActivatorUtilities class to create the instance – once, and only once! – so it is effectively a singleton.

The reason for this is that the code that is of interest to the middleware pipeline is the Invoke/InvokeAsync method, as it is a call to this that will be wrapped inside the anonymous function that gets passed to the Use method on the IApplicationBuilder instance. In other words, creating the class instance is a stepping stone to creating the delegate, and once created, the class constructor is never interacted with again.

Why does this matter? It comes back to understanding how we obtain dependencies in our custom middleware.

If we specify dependencies in the constructor that have been registered with a shorter lifetime than singleton (transient or scoped), we end up with captured dependencies that are locked until the singleton is released (which in the case of middleware is when the web application shuts down).

If you require transient or scoped dependencies, these should be added as parameters to the Invoke/InvokeAsync method.
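A hedged sketch of the convention-style equivalent (again using the user-agent logging example) therefore looks like this:

```csharp
public class LogUserAgentConventionalMiddleware
{
    private readonly RequestDelegate _next;

    // The constructor runs once when the pipeline is built, so only the next delegate
    // (and genuine singletons) should be injected here.
    public LogUserAgentConventionalMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    // Scoped/transient services (the logger here is just for illustration) are resolved
    // per request by being parameters of InvokeAsync.
    public async Task InvokeAsync(HttpContext context,
        ILogger<LogUserAgentConventionalMiddleware> logger)
    {
        logger.LogInformation("Incoming User-Agent: {UserAgent}",
            context.Request.Headers["User-Agent"].ToString());

        await _next(context);
    }
}
```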

When the UseMiddleware scans the class through reflection, it does several things such as validating the class adheres to the expected conventions, but the main thing we are interested in is the mapping of our middleware’s Invoke or InvokeAsync method to the RequestDelegate signature required by the ApplicationBuilder’s Use method.

The mapping is decided by checking whether the signature of the Invoke/InvokeAsync method exactly matches the RequestDelegate signature (i.e. it only requires a single parameter of type HttpContext). If it does, the method is used directly as the RequestDelegate function.

Otherwise, it will create a wrapping function that matches the RequestDelegate signature, but that then uses the ActivatorUtilities class to resolve the method parameters from the dependency injection container accessed via the ApplicationBuilder’s ApplicationServices property.

You can view this in the source code here.

At this point, once the UseMiddleware has mapped the RequestDelegate that represents our class’s middleware invocation method into the ApplicationBuilder, our middleware is baked into the pipeline.

Comparison

Whether you use convention or factory based middleware is a design decision to be made based on what your middleware is trying to achieve and what dependencies it needs resolved from the dependency injection container.

  • In-line style is fine for ‘quick and dirty’ middleware that you do not plan to reuse and is either terminating or does very little when intercepting the pipeline, with few dependencies
  • If there are many scoped or transient dependencies, you may want to consider the factory-style approach for coding simplicity, as it aligns with the constructor injection pattern that you are probably more familiar with for getting dependencies from the container and, if registered with the correct scope, avoids captured dependencies
  • If there are no dependencies, or you know the only dependencies are guaranteed to have a singleton lifetime, you may lean towards convention-style middleware, as these can be injected into the constructor when the pipeline is first built and, as there are no additional parameters to the InvokeAsync method, the method can be used as a direct match to the RequestDelegate function that gets used in the pipeline
  • If you are already familiar with using convention-style middleware and specifying transient and scoped dependencies in the Invoke/InvokeAsync parameter list, there is no pressing need to change to the factory-style approach.

Conclusion

I hope this post has been of use. If so, please spread the word by linking to it on Twitter and mentioning me @stevetalkscode.

I have created a demo solution at https://github.com/stevetalkscode/middlewarestyles  which you can download and have a play with the different styles of writing middleware and see the effects of different dependency injection lifetime registrations for the factory style vs the singleton captured in conventional style middleware.

I plan to revisit this topic in a future post to dig deeper into how the different styles can affect start-up and request performance and also the memory usage (which in turn affects garbage collection), which may sway the decision between using the factory or convention style of writing middleware.

Introducing Strongly Typed HTTP Request Headers for ASP.NET Core

In this first part of a series of occasional posts, I discuss the thinking behind taking string based HTTP Headers and presenting them to your .NET code via dependency injection as strongly typed objects.

Background

If you have read my previous blog posts or seen my talks, you will be aware that I am a big fan of the configuration binding functionality in .NET Core (now .NET 5) that takes string key/value pairs stored in various ways (such as JSON or environmental variables) and binds them to a typed object that is exposed via the .NET Dependency Injection container.

Recently, I have been giving some thought to other areas in ASP.NET Core that would benefit from this ability to bind string data into object instances, in particular looking at HTTP request headers.

The IHeaderDictionary Interface

In ASP.NET Core, HTTP request headers are presented as an instance of the IHeaderDictionary interface which is primarily a (dictionary) collection of key/value pairs made up of a string value for the key (the header key) and the header values within a StringValues struct (which itself is a special collection of strings that “represents zero/null, one, or many strings in an efficient way“).

The purpose of the interface is to present a collection of HTTP headers in a consistent manner. HTTP headers can have multiple declarations within the incoming request (see the RFC specification) which need to be parsed and grouped together by key before being presented in the collection.

Whilst the interface provides a consistent manner to access the headers, if you want to interrogate the incoming request headers, you have a few hoops to jump through, namely:
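For example, in a controller action the sort of code you end up writing looks something like this (a hedged sketch; the header name is purely illustrative):

```csharp
[HttpGet]
public IActionResult Get()
{
    // Hoop 1: get to the headers collection via the HttpContext/Request.
    // Hoop 2: check whether the header key is present at all.
    if (HttpContext.Request.Headers.TryGetValue("x-correlation-id", out var values))
    {
        // Hoop 3: deal with the StringValues (zero, one or many strings).
        string correlationId = values.ToString();
        // ... use the value ...
    }

    return Ok();
}
```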

In other words you need code to get to HttpContext.Request.Headers.

If you have several headers that you want to access, you have to repeat this process for each of the headers you require.

Parsing and Validating Header Values

Once you have your value(s) for a header, these are still strings that you may want to convert into some other type such as integer or GUID, which then means having to parse the value(s) which then raises a number of other questions:

  • What should be the default if the header is not present?
  • What to do if the header is present, but the value is not in the correct format to parse to the required type?
  • What to do if only expecting a single value, but multiple values are presented – first or last in wins?
  • What to do if expecting multiple values and a single value is presented?
  • What to do if the value(s) can be parsed, but fail any domain validation that needs to be applied (E.g. an integer value must be within a domain range)?
  • If any of the above cannot be worked around, how can an error be safely marshalled back to the caller without raising an exception?

In each of these scenarios, there is the potential for an exception to be thrown if assumptions are made about the incoming data and guards are not put in place to handle non-expected scenarios (E.g. using TryParse instead of Parse when converting strings to other primitive types to avoid exceptions when parsing fails).
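As a hedged sketch of the sort of guarding required (the header name, default and range are illustrative assumptions):

```csharp
int pageSize = 25; // fall back to a default if the header is missing or invalid

if (HttpContext.Request.Headers.TryGetValue("x-page-size", out var values)
    && int.TryParse(values.LastOrDefault(), out var parsed)   // TryParse avoids exceptions; last value wins
    && parsed > 0 && parsed <= 100)                            // domain validation
{
    pageSize = parsed;
}
```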

Assuming all of the above is working correctly and you have managed to get the value(s) you require out of the headers, there is the question of passing the data from the controller action (or middleware) to some domain model or function.

Ideally, your domain logic and entities should avoid any bindings to web related types such as IHeaderDictionary & HttpContext. Avoiding this coupling to web concepts means they can be used in other applications (console, desktop or mobile applications) where the values that are received via the headers in a web application/API service may (or may not) be obtained in other ways (user input, configuration or hard coded).

Primitive Obsession

I recently read Andrew Lock’s blog about primitive obsession where he discusses the danger of passing values around as standard data types (string, integer, GUID et al) as these are interchangeable values that do not have context.

The post goes on to put forward an implementation of wrapping these values into ‘strong types’ that give context to a value. E.g. a customer id value expressed as a GUID is interchangeable with a product id that is also a GUID, but an instance of a CustomerIdentity type is not interchangeable with a ProductIdentity.

After having read Andrew’s series that shows how to create strong types that are struct based, I then went on to read Thomas Levesque’s series that is based on Andrew’s series, but this time implemented using C# 9 record types in .NET 5.

I highly recommend reading both of these.

The principle I want to carry through is that each HTTP header of interest to the application should be mapped into a strongly typed instance to give the value(s) some meaning and context beyond just being an incoming value. In addition, these types should abstract away the need for callers to have any knowledge of HTTP related classes and interfaces.

Replacing Commonly Written Code with a Library

With all of the above in mind, I have started a library to remove the need to keep writing a lot of the commonly required code. The requirements I have are

  • To remove the need for any of my controller or domain code to need to have knowledge of HTTP headers to retrieve the values that have been supplied in a request. Instead I want strongly typed values to be available from the Dependency Injection container that encapsulate one or more pieces of data that have some meaning to the application
  • To have a standard generic factory process to create the strong types using a standard signature that provides a collection of zero, one or many strings for a particular header name to be provided by the factory
  • For the factory process to have a simple fluent syntax that can be used within the container registration process
  • For the factory to be able to consume an expression that can take that collection of string values and decide how to map the values (or lack of values) into an instance of a type that can be consumed elsewhere
  • For the constructed types to encapsulate not just the incoming value(s), but also be able to express a lack of value(s) or an invalid state due to invalid value formats (as raising exceptions during the factory process will be difficult to catch)
  • For the constructed types to be automatically registered as scoped lifetime services in the container for use by controllers
  • For the constructed types to be available via a singleton that is aware of the async context so that it can be injected into standard middleware constructors (by encapsulating the HttpContextAccessor).

The Code

The library is still an initial work in progress at time of writing, but if you are interested in how I have approached the above, the code is available in a repository at https://github.com/stevetalkscode/TypedHttpHeaders

Coming Up

As I progress with the library, I plan to have more posts in this series where I will be walking through how the code works in order to achieve the goals above and eventually hope to release a NuGet package.

Understanding Disposables In .NET Dependency Injection – Part 3

Following on from Parts 1 and 2, in this final part of the series, I move on to dealing with types that you do not have source control for and therefore cannot change directly to hide the Dispose method using the techniques I have described in the previous posts.

Background

In the previous two parts of this series, I have made the assumption that you are able to amend the source code for types that implement IDisposable.

But what happens, if you don’t have the source code? This is where some well known design patterns come in useful.

Design Patterns

When trying to hide disposability from container consumers, the general principle (as shown in the previous parts of this series), is to use an interface that excludes the dispose method from its definition so that the consumer only receives the instance via the interface and not as the concrete class, thus hiding the Dispose method.

If we do not have the source code, we need to create an intermediary between the interface that we want to expose and the type that we want to use.

There are four classic design patterns, each a variation of the intermediary theme, that we can use together to achieve our goal.

I will not go into an in depth description of the patterns here as there are plenty of resources that do a much better job; however, here is a brief overview.


Adapter and Bridge Patterns

You may already have another interface, either your own or some other third party that achieves the goal, but for which you do not have the source code or cannot change as it would break other dependencies. In this case, the adapter pattern is used to fill the gaps to make the two interfaces work with each other.

If the desired interface does not already exist, this becomes the bridge pattern, which does the same thing, but the interface design is within your control (whereas the adapter uses an interface outside your control).

Façade Pattern

The core aim of our intermediary is to remove the Dispose method and in effect simplify the interface. If the third party has a number of members that you are not interested in or that need to be brought together into a single method, the Façade pattern can be used.

Proxy Pattern

The proxy pattern is used to prevent direct access to an object. It usually has an identical interface to the class that it represents.

Decorator Pattern

The decorator pattern is similar to the proxy pattern, but it may add additional functionality to enhance behaviour.


For our purposes, you are likely to use an adapter pattern if you do not have control over either interface, but more likely, you will be designing the interface to be used by consumers and therefore will write a custom class that brings together elements of the other three patterns, namely

  • Bridge – we will be creating one between the desired interface to receive calls and the target interface
  • Façade – we will be simplifying the interface to remove any members not needed
  • Proxy – passing through calls to members of our class on to the instance that we are hiding
  • Decorator – we may be taking the opportunity to do some additional work such as logging calls

Putting It Together

In the example below we have a class SomeDisposable that we do not have the source code for. The class implements IDisposable and has multiple methods, but only one of interest, namely DoSomething().

Rather than register the class directly, we want to wrap it with an intermediary that will

  • Create an instance of SomeDisposable inside the constructor (as we do not want to register the SomeDisposable class with the container)
  • Implement a simplified interface (façade) of just the one DoSomething() method
  • Have a Dispose method (that is not exposed via the interface) that the container will call to proxy to the inner object’s Dispose() method
  • Decorate the inner DoSomething method with a call to the logger to log that the method has been called and when it has completed
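A minimal sketch of such an intermediary might look like this (IDoSomething and DoSomethingWrapper are illustrative names of my own, not taken from any library):

```csharp
// The façade interface only exposes the one member of interest - no Dispose.
public interface IDoSomething
{
    void DoSomething();
}

public sealed class DoSomethingWrapper : IDoSomething, IDisposable
{
    private readonly SomeDisposable _inner = new SomeDisposable(); // created here, not registered
    private readonly ILogger<DoSomethingWrapper> _logger;

    public DoSomethingWrapper(ILogger<DoSomethingWrapper> logger)
    {
        _logger = logger;
    }

    public void DoSomething()
    {
        _logger.LogInformation("DoSomething called");    // decorator: log before
        _inner.DoSomething();                            // proxy: pass the call through
        _logger.LogInformation("DoSomething completed"); // decorator: log after
    }

    // Not part of the façade interface, but still called by the container on shutdown.
    public void Dispose() => _inner.Dispose();
}
```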

Once we have the intermediary class, we can register this with the container in the StartUp class with the façade interface as the service type.
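The registration then looks something like this (using the names from the sketch above):

```csharp
// Consumers only ever see IDoSomething, so cannot call Dispose, but the container
// will still dispose of the wrapper (and in turn the inner instance) on shutdown.
services.AddSingleton<IDoSomething, DoSomethingWrapper>();
```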

With this in place, we have managed to avoid consumers being able to dispose of the SomeDisposable singleton instance directly, as it is hidden inside the intermediary class. The intermediary class (and in turn the inner instance) is still disposable by the container, but with the façade in place, the consumer cannot call Dispose() directly.

Conclusion

That is the end of this series on preventing consumers causing problems by disposing of objects that may have a longer lifetime than the consumer when under the control of the .NET DI container.

I hope it has been of use.

Understanding Disposables In .NET Dependency Injection – Part 2

Following on from Part 1 where I provide an overview of hiding the Dispose method from consumers of the Dependency Injection container, in this part, I move on to dealing with objects that are created outside, but registered with the DI container.

Background

In part 1, I included the table below of extension methods that can be used to register services and implementations with the Microsoft DI container which indicates which of these will automatically dispose of objects that implement the IDisposable interface.

Method | Automatic Disposal
------ | ------------------
Add{LIFETIME}<{IMPLEMENTATION}>() | Yes
Add{LIFETIME}<{SERVICE}, {IMPLEMENTATION}>() | Yes
Add{LIFETIME}<{SERVICE}>(sp => new {IMPLEMENTATION}) | Yes
AddSingleton<{SERVICE}>(new {IMPLEMENTATION}) | No
AddSingleton(new {IMPLEMENTATION}) | No

In the last two of these methods, the instance is explicitly instantiated using the new keyword. Note, this style of registration is only available for Singletons and is not supported for Scoped or Transient lifetimes.

Wherever possible, if the class being registered implements IDisposable, I would encourage you not to use the last two methods, and instead use the third method where the instantiation takes place inside a lambda expression. This simple change of signature ensures that the object will be disposed by the container when its lifetime comes to an end.
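In other words (the service and implementation names here are placeholders):

```csharp
// The container creates the instance inside the lambda, so it will also dispose of it.
services.AddSingleton<IMyService>(sp => new MyServiceImplementation());

// Avoid: the container will NOT dispose of an instance it did not create itself.
// services.AddSingleton<IMyService>(new MyServiceImplementation());
```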

As described in Part 1, I would also avoid registering the class as the service type as this will allow the consumer to call the Dispose method which can have unintended consequences, especially for singleton and scoped lifetime objects where the object may still be required by other consumers.

If for some reason it is not possible to use one of the other methods and a new instance must be instantiated outside of the container, there are ways of ensuring that the object is disposed of by the container.

Disposing Instantiated Singletons

If instances are instantiated as part of the container registration process (for example within the StartUp class’s ConfigureServices method), there is no natural place to dispose of these objects and therefore they will live for the duration of the application.

In most cases, this is not a major problem as, if written correctly, they will be disposed of when the application ends. However, we should try to explicitly clean up after ourselves when using unmanaged resources, but in this case, how?

When using ASP.NET Core, the Host takes care of registering a number of services for you that you may not be aware of.

One of the services that gets registered is IHostApplicationLifetime. This has three properties which return cancellation tokens which can be used to register callback functions that will be triggered when the host application starts, is about to stop and finally stops.

With this in place, we can register a callback to our StartUp class to dispose of our objects when the application is about to stop.

In order to do this, we can approach this in one of two ways, depending on where and how the object has been created during registration.

Startup Class Scoped Variable

If the instance has been created in the constructor of the StartUp class and assigned to a class level variable, this variable can be used to dispose of the object within the callback registered with the ApplicationStopping token returned from the IHostApplicationLifetime.

If you have several disposable singleton instances created in this manner, they can all be disposed of within the one method.
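A hedged sketch of this approach (assuming a _myDisposable field created in the StartUp constructor and registered with AddSingleton(_myDisposable)):

```csharp
public void Configure(IApplicationBuilder app, IHostApplicationLifetime lifetime)
{
    // Dispose of the class-level singleton(s) when the application is stopping.
    lifetime.ApplicationStopping.Register(() =>
    {
        _myDisposable?.Dispose();
        // ... dispose of any other class-level disposable singletons here ...
    });

    // ... the rest of the pipeline configuration ...
}
```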

Instance Created Inside the ConfigureServices Method

If the instance has been created ‘on-the-fly’ within the registration and not captured in a class variable, we will need to obtain that instance from the container in order to dispose of it.

In order to obtain the instance, we need to request that instance within the Configure method in the StartUp class which gets called after the container has been created.

If you have several disposable singleton instances created in this manner, they can all be disposed of within the one method. However, obtaining these instances becomes a bit messy, as you need to request them all either in the Configure method’s parameter signature (which can become quite lengthy if more than a couple of types are required) or use the IApplicationBuilder’s ApplicationServices property to get access to the container’s IServiceProvider instance and use the GetService method to obtain the container-registered instances.
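A hedged sketch of the second of those options (SomeCreatedService stands in for a disposable singleton that was new-ed up inside ConfigureServices):

```csharp
public void Configure(IApplicationBuilder app, IHostApplicationLifetime lifetime)
{
    // Ask the container for the instance that was created during registration...
    var created = app.ApplicationServices.GetService<SomeCreatedService>();

    // ...and dispose of it when the application is stopping.
    lifetime.ApplicationStopping.Register(() => created?.Dispose());

    // ... the rest of the pipeline configuration ...
}
```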

Next Time

In Part 3 of this series, I will discuss hiding the Dispose method using intermediate classes based on common design patterns.

Understanding Disposables In .NET Dependency Injection – Part 1

In this post I will be discussing the traps that can catch you out by potentially creating memory leaks when registering types that implement the IDisposable interface as services with the out-of-the-box .NET Dependency Injection container.

Background

Typically, a type will implement the IDisposable interface when it holds unmanaged resources that need to be released or to free up memory.

More information about cleaning up resources can be found on Microsoft Docs.

To keep things simple for the rest of this post, I will be referring to instances of types that implement IDisposable as “Disposable Objects”.

Managing Disposable Objects without Dependency Injection

Outside of dependency injection, if you create an instance of such a type, it is your responsibility to call the Dispose method on the class to initiate the release of unmanaged resources.

This can be done either by wrapping the usage in a using statement/declaration, or by explicitly calling Dispose (typically from a try/finally block).

The two approaches are documented on the Microsoft Docs site.

Disposable Objects Created by the Dependency Injection Container

As a general rule, if the Dependency Injection container creates an instance of the disposable object, it will clean up when the instance lifetime (transient, scoped or singleton) expires (E.g. for scoped instances in ASP.NET Core, this will be at the end of the request/response lifetime but for singletons, it is when the container itself is disposed).

The following table (based on the table in the Microsoft Docs page) shows which registration methods will trigger the container to automatically dispose of the object.

Method | Automatic Disposal
------ | ------------------
Add{LIFETIME}<{IMPLEMENTATION}>() | Yes
Add{LIFETIME}<{SERVICE}, {IMPLEMENTATION}>() | Yes
Add{LIFETIME}<{SERVICE}>(sp => new {IMPLEMENTATION}) | Yes
AddSingleton<{SERVICE}>(new {IMPLEMENTATION}) | No
AddSingleton(new {IMPLEMENTATION}) | No

As you can see from the table above, the three most common methods for adding services, where the container itself is responsible for creating the instance, will automatically dispose of the object at the appropriate time.

However, the last two methods do not dispose of the object. Why? It’s because in these methods, the objects have been directly instantiated with the new keyword and therefore the container has not been responsible for creating the object.

Whilst they look similar to the third method, the difference is that the instance in that method is created within the context of a lambda expression that is executed by the container, and therefore the object’s creation is within the container’s control.

In the last two methods, the object could be created at the time of registration (by using the new statement), but then again it may have been created outside these methods (either within the scope of the ConfigureServices method in the StartUp class, or at a class level). The container cannot possibly know where the object has been created, the scope of its reference, and where else it may be used. Without this understanding, it cannot safely dispose of the object, as doing so may cause an ObjectDisposedException if the object is referenced elsewhere in code after the container has disposed of it.

I will come on to dealing with ensuring these objects referenced in these last two methods can be disposed of correctly in Part 2.

Hiding Disposability from Container Consumers

The first method in the table above is the simplest way to register a type. Consumers will request an instance of the object and make use of it.

However, if the type implements IDisposable, this means that the Dispose method is available to the consumer to call. This has repercussions depending on the lifetime that the dependency has been registered as.

For transients that have been created specifically to be injected into the consuming class, it is not the end of the world. If dispose is called on a transient, the only place that will suffer is the consuming class (and anything it passes the reference to) as any subsequent references to the object (or to be more specific, members in the type that check the disposed status) are likely to result in an ObjectDisposedException (this will depend on the implementation of the injected class).

For scoped and singleton lifetimes, things become more complicated as the object has a lifetime beyond the consumer class. If the consuming class calls Dispose and another consumer then also makes use of a member on the disposed class, that other consumer is likely to receive an ObjectDisposedException.

Therefore, we want to ensure that the Dispose method on the registered class is somehow hidden from the consumer.

There are several ways of hiding the Dispose method, which are considered below.

Explicit Implementation of IDisposable

The quick (and dirty) way of hiding the Dispose method that exists on a class is to change the Dispose method’s declaration from a public method to an explicit interface declaration (as shown below) so that it can only be called by casting the object to IDisposable.
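A sketch of the change (the class and interface names are placeholders):

```csharp
public class MyService : IMyService, IDisposable
{
    // ... other public members of the class ...

    // Explicit implementation: no longer callable as myService.Dispose();
    // the consumer has to cast the instance to IDisposable first.
    void IDisposable.Dispose()
    {
        // release unmanaged resources here
    }
}
```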

It should, however, be recognised that this is just obfuscating the availability of the Dispose method. It does not truly hide it as the consumer may be aware that the type implements IDisposable and explicitly cast the object and call Dispose.

This is where extracting out other interfaces comes to our rescue when it comes to dependency injection.

Register the Implementation Type With a More Restrictive Interface

If we define an interface that has all the public members of our class except for the Dispose method, and only make the object available by registering it in the DI container with the limited interface as the service, this will make it harder (but not completely impossible) for the consumer of the object to dispose of it, as the concrete type is only known to the container registration (unless the consumer uses GetType() of course, but that is splitting hairs and in many ways negates the whole point of using the container).
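A hedged sketch of this approach (placeholder names again):

```csharp
// The service interface deliberately omits Dispose.
public interface IMyService
{
    void DoWork();
}

public class MyService : IMyService, IDisposable
{
    public void DoWork()
    {
        // ...
    }

    public void Dispose()
    {
        // release resources here
    }
}

// Registering against the interface hides the concrete type (and its Dispose method)
// from consumers, while the container still disposes of MyService for us.
services.AddScoped<IMyService, MyService>();
```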

Of course, following the Interface Segregation Principle from SOLID, this interface may be broken down into smaller interfaces which the class is registered against.

Next Time …

In Part 2 of this series on IDisposable in Dependency Injection, I will move on to dealing with those objects that the container will not dispose of for you.

Simplifying Dependency Injection with Functions over Interfaces

In my last post, I showed how a function delegate can be used to create named or keyed dependency injection resolutions.

When I was writing the demo code, it struck me that the object orientated code I was writing seemed to be bloated for what I was trying to achieve.

Now don’t get me wrong, I am a big believer in the SOLID principles, but when the interface has only a single function, having to create multiple class implementations seems overkill.

This was the ‘aha’ moment you sometimes get as a developer where reading and going to user groups or conferences plants seeds in your mind that make you think about problems differently. In other words, if the only tool you have in your tool-belt is a hammer, then every problem looks like a nail; but if you have access to other tools, then you can choose the right tool for the job.

Over the past few months, I have been looking more at functional programming. This was initially triggered by the excellent talk ‘Functional C#’ given by Simon Painter (a YouTube video from the Dot Net Sheffield user group is here, along with my own talk – plug, plug, here).

In the talk, Simon advocates using functional programming techniques in C#. At the end of the talk, he recommends the Functional Programming in C# book by Enrico Buonanno. It is in Chapter 7 of this book that the seed of this blog was planted.

In short, if your interface has only a single method that is a pure function, register a function delegate instead of an interface.

This has several benefits

  • There is less code to write
  • The intent of what is being provided is not masked by having an interface getting in the way – it is just the function exposed via a delegate
  • There is no object creation of interface implementations, so there are fewer memory allocations and the initial execution may be faster
  • Mocking is easier – you are just providing an implementation of a function signature without the cruft of having to hand craft an interface implementation or use a mocking framework to mock the interface for you.

So with this in mind, I revisited the demo from the last post and performed the following refactoring:

  • Replaced the delegate signature with one that performs the temperature conversion, instead of returning an interface implementation that has a method
  • Moved the methods in the class implementations to static functions within the startup class (but could easily be a new static class)
  • Changed the DI registration to return the result from the appropriate static function instead of using the DI container to find the correct implementation and forcing the caller to execute the implementation’s method.

As can be seen from the two code listings, Listing 1 (Functional) is a lot cleaner than Listing 2 (Object Orientated).

Listing 1
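A sketch along these lines (the names are illustrative rather than the exact demo code):

```csharp
// Functional approach: a delegate is registered instead of an interface.
public delegate double ConvertTemperature(double celsius);

public static class TemperatureConversions
{
    public static double ToFahrenheit(double celsius) => (celsius * 9 / 5) + 32;
    public static double ToKelvin(double celsius) => celsius + 273.15;
}

// In ConfigureServices: choose which function to expose based on configuration.
services.AddSingleton<ConvertTemperature>(sp =>
{
    var configuration = sp.GetRequiredService<IConfiguration>();

    if (configuration["TemperatureScale"] == "Kelvin")
    {
        return TemperatureConversions.ToKelvin;
    }

    return TemperatureConversions.ToFahrenheit;
});
```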

Listing 2
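The object orientated equivalent (again a sketch with illustrative names):

```csharp
// Object orientated approach: an interface plus a class per conversion.
public interface ITemperatureConverter
{
    double Convert(double celsius);
}

public class FahrenheitConverter : ITemperatureConverter
{
    public double Convert(double celsius) => (celsius * 9 / 5) + 32;
}

public class KelvinConverter : ITemperatureConverter
{
    public double Convert(double celsius) => celsius + 273.15;
}

// In ConfigureServices: register the implementations and pick one via a factory delegate.
services.AddSingleton<FahrenheitConverter>();
services.AddSingleton<KelvinConverter>();
services.AddSingleton<ITemperatureConverter>(sp =>
{
    var configuration = sp.GetRequiredService<IConfiguration>();

    if (configuration["TemperatureScale"] == "Kelvin")
    {
        return sp.GetRequiredService<KelvinConverter>();
    }

    return sp.GetRequiredService<FahrenheitConverter>();
});
```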

Having done this, it also got me thinking about how I approach other problems. An example of this is unit-testable timestamps.

Previously, when I needed to use a timestamp to indicate the current date and time, I would create an interface of ICurrentDateTime that would have a property for the current datetime. The implementation for production use would be a wrapper over the DateTime.Now property, but for unit testing purposes would be a mock with a fixed date and time to fulfill a test criteria.

Whilst not a pure function, the same approach can be applied to this requirement: create a delegate that returns the current date and time, then register it to return the system’s DateTime.Now.

This achieves the same goal of decoupling the code from the system implementation via an abstraction, but removes the need to create an unnecessary object and interface simply to bridge to the underlying system property.
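A minimal sketch of that idea (the delegate name is mine, not taken from the demo code):

    using System;
    using Microsoft.Extensions.DependencyInjection;

    // Hypothetical delegate - the abstraction is just 'give me the current date and time'.
    public delegate DateTime GetCurrentDateTime();

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // Production registration: the delegate simply wraps the system clock.
            services.AddSingleton<GetCurrentDateTime>(() => DateTime.Now);
        }
    }

    // In a unit test there is nothing to mock - just supply a fixed value:
    // GetCurrentDateTime frozenClock = () => new DateTime(2020, 1, 1, 9, 0, 0);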

If you are interested in looking at getting into functional programming while staying within the comfort zone of C#, I highly recommend Enrico’s book.

The demo of both the OO and Functional approaches can be found in the GitHub project at https://github.com/configureappio/NamedDiDemo.

Clean Architecture – Should I Move the Startup Class to Another Assembly?

I was recently listening to an episode of the brilliant .Net Rocks where Carl and Richard talk to Steve Smith (a.k.a. @ardalis) about clean architecture in ASP.Net Core.

One of the things discussed was the separation of concerns, where Steve discusses creating an architecture that breaks up your application in such a way that implementation detail in one project is hidden from a consuming project. Instead, the consuming project is only aware of interfaces or abstract classes from shared libraries, from which instances are created by the dependency injection framework in use.

The aim is to try and guide a developer of the consuming project away from ‘new-ing’ up instances of a class from outside the project. To use Steve’s catch phrase, “new is glue”.

I was listening to the podcast on my commute to work and it got me thinking about the project I had just started working on. So much so, that I had to put the podcast on pause to give myself some thinking time for the second half of the commute.

What was causing the sparks to go off in my head was about how dependencies are registered in the Startup class in ASP.Net Core.

By default, when you create a new ASP.Net Core project, the Startup class is created as part of that project, and it is here that you register your dependencies. If your dependencies are in another project/assembly/NuGet package, references to wherever those dependencies live have to be added to the consuming project.

Of course, if you do this, that means that the developer of the consuming project is free to ‘new up’ an instance of a dependency rather than rely on the DI container. The gist of Steve Smith’s comment in the podcast was do what you can to help try to prevent this.

When I got to work, I had a look at the code and pondered about whether the Startup class could be moved out to another project. That way the main ASP.Net project would only have a reference to the new project (we’ll call it the infrastructure project for simplicity) and not the myriad of other projects/Nugets. Simple huh? Yeah right!

So the first problem I hit was all the ASP/MVC plumbing that would be needed in the new project. When I copied the Startup class to the new project, Visual Studio started moaning about all the missing references.

Now when you create a new MVC/Web.API project with .Net Core, the VS template uses the Microsoft.AspNetCore.All meta NuGet package. For those not familiar with meta packages, these are NuGet packages that bundle up a number of other NuGet packages – and Microsoft.AspNetCore.All is massive. When I opened the nuspec file from the cache on my machine, there were 136 dependencies on other packages. For my infrastructure project, I was not going to need all of these. I was only interested in the ones required to support the interfaces, classes and extension methods I would need in the Startup class.

Oh boy, that was a big mistake. It was a case of adding all the dependencies I would actually need one by one to ensure I was not bringing any unnecessary packages along for the ride. Painful, but I did it.

So I made all the updates to the main MVC project required to use the Startup class from my new project and removed the references I previously had to other projects (domain, repository etc.), as this was the point of the exercise.

It all compiled! Great. Pressed F5 to run and … hang on what?

[Screenshot: 404 when MVC controller not found]

After a bit of head scratching, I realised the problem was that MVC could not find the controller. But why?

At this point, I parked my so-called ‘best practice’ changes as I did not want to waste valuable project time on a wild goose chase.

This was really bugging me, so outside of work, I started to do some more digging.

After reading some blogs and looking at the source code in GitHub, the penny dropped. ASP.Net MVC makes the assumption that the controllers are in the same assembly as the Startup class.

When the Startup class is registered with the host builder, it sets the ApplicationName property on the HostingEnvironment instance to the name of the assembly containing the Startup class.

The ApplicationName property of the IHostingEnvironment instance is used by the AddMvc extension to register the assembly where controllers can be found.

Eventually, I found the workaround from David Fowler in an answer to an issue on GitHub. In short, you need to use the UseSetting extension method on the IWebHostBuilder instance to change the assembly used in the ApplicationName property to point to where the controllers are. In my case this was as follows:

UseSetting(WebHostDefaults.ApplicationKey, typeof(Program).GetTypeInfo().Assembly.FullName)

In other words, if the controllers are not in the same assembly as the Startup class, then without this line redirecting the application name to the correct assembly, MVC cannot find them – as I discovered.
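For context, the setting sits on the web host builder. This is a sketch assuming an ASP.NET Core 2.x style Program class, where Startup lives in the Infra assembly and the controllers live alongside Program:

    using System.Reflection;
    using Microsoft.AspNetCore;
    using Microsoft.AspNetCore.Hosting;

    public class Program
    {
        public static void Main(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                // Startup comes from the Infra assembly...
                .UseStartup<Startup>()
                // ...so point the application name back at the assembly containing the controllers.
                .UseSetting(WebHostDefaults.ApplicationKey,
                    typeof(Program).GetTypeInfo().Assembly.FullName)
                .Build()
                .Run();
    }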

With this problem fixed, everything fell into place and started working correctly.

However, with this up and running, something did not feel right about it.

The solution I had created was fine if all the dependencies are accessible from the new Infra project, either directly within the project or by referencing other projects from the Infra project. But what if I have some dependencies in my MVC project I want to add to the DI container?

This is where my thought experiment broke down. As it stood, it would create a circular reference: the Infra project would need to know about classes in the main MVC project, which in turn referenced the Infra project. I needed to go back to the drawing board and think about what I was trying to achieve.

I broke the goal into the following thoughts:

  1. The main MVC project should not have direct references to projects that provide services other than the Infra project. This is to try to prevent developers from creating instances of classes directly
  2. Without direct access to those projects, it is not possible for the DI container to register those classes either if the Startup is in the main MVC project
  3. Moving the Startup and DI container registration to the Infra project will potentially create circular references if classes in the MVC project need to be registered
  4. Moving the Startup class out of the main MVC project creates a need to change the ApplicationName in the IHostingEnvironment for the controllers to be found
  5. Moving the Startup class into the Infra project means that the Infra project has to have knowledge of MVC features such as routing etc., which it should not really need to know about, as MVC is the consumer.

Breaking the goal down like this made it clear what was required!

To achieve the goal set out above, a hybrid of the two approaches is needed: the Startup class and DI container registration remain in the main MVC project, but the classes I don’t want directly accessible from the MVC project are registered by the Infra project, so the MVC project only ever uses them through interfaces serviced by the DI container.

To achieve this, all I needed to do was make the Infra project aware of DI registration through the IServiceCollection interface and extension methods, and expose a method into which the calling MVC project passes its IServiceCollection.

Startup Separation

The first part of the process was to refactor the work I had done in the Startup class in the Infra project into a public static method that takes its dependencies from outside.

The new ConfigureServices method takes an IServiceCollection instance for registering services from within the infrastructure project, and also an IMvcBuilder, so that any MVC related infrastructure tasks that I want to hide from the main MVC project (and which do not depend on code in the MVC project) can also be registered.
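The actual code is in the repo linked below, but the shape of it is roughly this (the filter, validator and repository names are illustrative):

    using FluentValidation.AspNetCore;
    using Microsoft.Extensions.DependencyInjection;

    public static class InfrastructureStartup
    {
        // Called from the MVC project's Startup, which passes in the service collection
        // and the IMvcBuilder it has already created via services.AddMvc().
        public static IServiceCollection ConfigureServices(
            IServiceCollection services, IMvcBuilder mvcBuilder)
        {
            // MVC-related plumbing the MVC project does not need to know about:
            // a global filter that rejects invalid ModelState, plus FluentValidation.
            mvcBuilder.AddMvcOptions(options => options.Filters.Add<ValidateModelStateFilter>());
            mvcBuilder.AddFluentValidation(fv =>
                fv.RegisterValidatorsFromAssemblyContaining<CustomerValidator>());

            // Services the MVC project only ever sees through their interfaces.
            services.AddScoped<ICustomerRepository, CustomerRepository>();

            return services;
        }
    }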

In the example above, I add a custom validation filter (to ensure every post-back checks whether the ModelState is valid, rather than doing this in each Action in the MVC controllers) and add the FluentValidation framework for domain validation.

To make things a bit more interesting, I also added an extension method to use Autofac as the service provider instead of the out of the box Microsoft one.

With this in place, I took the Startup class out of the Infra project, put it back into the MVC project, and then refactored it to do the following in its ConfigureServices method:

  • Perform any local registrations, such as AddMvc and any classes that are in the MVC project
  • Call the static methods created in the Infra project to register classes that are hidden away from the MVC project and use Autofac as the service provider.

I ended up with a Startup class that looked like this:
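The real class is in the example repo; roughly, it looks something like this (again with illustrative names, and BuildAutofacServiceProvider standing in for the Autofac extension method mentioned above):

    using System;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        // Returning IServiceProvider lets the Autofac container replace the default one.
        public IServiceProvider ConfigureServices(IServiceCollection services)
        {
            // Local registrations: MVC itself plus anything defined in the MVC project.
            var mvcBuilder = services.AddMvc();

            // Everything else is delegated to the Infra project.
            InfrastructureStartup.ConfigureServices(services, mvcBuilder);

            // Hypothetical extension method that populates an Autofac ContainerBuilder
            // from the service collection and returns an AutofacServiceProvider.
            return services.BuildAutofacServiceProvider();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            app.UseMvcWithDefaultRoute();
        }
    }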

The Full Example

My description of the problem and my solution above only really scratches the surface, but hopefully it is of use to someone. It is probably better to let the code speak for itself, and for this I have created a Git repo with three versions of an example project which show the three different approaches to the problem.

First is the out-of-the-box approach, doing everything in the main project

Then there is the refactoring to do all the registration in the Infra project

Lastly, there is the hybrid where the Startup is in the main project, but delegates registration of non-MVC classes to the Infra project.

The example projects cover other things I plan on blogging about, so they are a bit bigger than just dealing with separating the Startup class.

The repo can be found at https://github.com/configureappio/SeparateStartup

For details of the projects, look at the Readme.md file in the root of the repo.

Conclusion

In answer to the question posed in the title of this post, my personal view is “No” … but I do think that extracting a lot of the plumbing out of Startup into another assembly does make things cleaner and achieves the goal of steering developers away from creating instances of classes and towards relying on the DI container to do the work it is intended for. This in turn helps promote the SOLID principles.

Hopefully, the discussion of the trials and tribulations I had in trying to move the Startup class completely out of the main project shows how painful this can be, and why a hybrid approach is more suitable.

The underlying principle of using a clean code approach is sound when approached the correct way, by thinking through the actual goal rather than concentrating on trying to fix or workaround the framework you are using.

The lessons I am taking away from my experiences above are:

  • I am a big fan of clean architecture, but sometimes it is hard to implement when the frameworks you are working with are trying to make life easy for everyone and make assumptions about your code-base.
  • It is very easy to tie yourself up in knots when you don’t know what the framework is doing under the bonnet.
  • If in doubt, go look at the source code of the framework, either through Git repos or by using the Source Stepping feature of Visual Studio.
  • Look at ‘what’ you are trying to achieve rather than starting with the ‘how’ – in the case above, the actual goal was to abstract the dependency registration out of Startup, rather than jumping straight in with ‘move the whole of Startup.cs’.