C# 9 Record Factories

With the release of .NET 5.0 and C# 9 coming up in the next month, I have been updating my Dependency Injection talk to incorporate the latest features of the framework and language.

One of the topics of the talk is using the “Gang of Four” Factory pattern to help with the creation of class instances where some of the data is coming from the DI container and some from the caller.

In this post, I walk through using the Factory Pattern to apply the same principles to creating instances of C# 9 records.

So How Do Records Differ from Classes?

Before getting down into the weeds of using the Factory Pattern, here is a quick overview of how C# 9 record types differ from classes.

There are plenty of articles, blog posts and videos available on the Internet that will tell you this, so I am not going to go into much depth here. However, here are a couple of resources I found useful in understanding records vs classes.

Firstly, I really like the video from Microsoft’s Channel 9 ‘On.NET’ show where Jared Parsons explains the benefits of records over classes.

Secondly, a shout out to Anthony Giretti for his blog post https://anthonygiretti.com/2020/06/17/introducing-c-9-records/ that goes into detail about the syntax of using record types.

From watching/reading these (and other bits and bobs around the Internet), my take on record types vs. classes is as follows:

  • Record types are ideal for read-only ‘value types’ such as Data Transfer Objects (DTOs), as they are immutable by default (especially if using the positional declaration syntax)
  • Record types automatically provide a (struct-like) value equality implementation, so there is no need to write your own boilerplate for overriding the default (object reference) equality behaviour
  • Record types automatically provide a deconstruction implementation, so you don’t need to write your own boilerplate for that either
  • There is no need to explicitly write constructors when the positional parameter syntax is used, as the compiler generates one (and init-only properties allow object initialisers to be used instead)
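These behaviours can be seen with a minimal positional record (the type and names here are invented purely for illustration):

```csharp
using System;

// A positional record: init-only properties, value equality and a
// Deconstruct method are all generated by the compiler.
public record Person(string FirstName, string LastName);

public static class RecordDemo
{
    public static void Main()
    {
        var a = new Person("Ada", "Lovelace");
        var b = new Person("Ada", "Lovelace");

        // Value equality: two instances holding the same data are equal,
        // unlike classes, which default to reference equality.
        Console.WriteLine(a == b); // True

        // Compiler-generated deconstruction.
        var (first, last) = a;
        Console.WriteLine($"{first} {last}"); // Ada Lovelace
    }
}
```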

The thing that really impressed me when watching the video is the way that the record syntax is able to replace a 40-line class definition, with all the aforementioned boilerplate code, with a single declaration.

Why Would I Need A Factory?

In most cases where you may choose to use a record type over using a class type, you won’t need a factory.

Typically you will have some framework doing the work for you, be it a JSON deserialiser, Entity Framework or some code generation tool to create the boilerplate code for you (such as NSwag for creating client code from Swagger/OpenAPI definitions or possibly the new C#9 Source Generators).

Similarly, if all the properties of your record type can be derived from dependency injection, you can register your record type with the container with an appropriate lifetime (transient or singleton).

However, in some cases you may need to create a new record instance as part of a user/service interaction, requiring data from a combination of:

  • user/service input;
  • other objects you currently have in context;
  • objects provided by dependency injection from the container.

You could instantiate a new instance in your code, but to use Steve ‘Ardalis’ Smith’s phrase, “new is glue”, and things become complicated if you need to make changes to the record type (such as adding additional mandatory properties).

In these cases, you want to keep in line with the Don’t Repeat Yourself (DRY) principle and the Single Responsibility Principle (SRP) from SOLID. This is where a factory class comes into its own: it becomes a single place that takes all these inputs from multiple sources and uses them together to create the new instance, providing a simpler interaction for your code to work with.

Creating the Record

When looking at the properties required for your record type, consider:

  • Which parameters can be derived by dependency injection from the container and therefore do not need to be passed in by the caller
  • Which parameters are only known by the caller and cannot be derived from the container
  • Which parameters need to be created by performing some function or computation, using inputs that are not directly consumed by the record.

For example, a Product type may have the following properties:

  • A product name – from some user input
  • A SKU value – generated by some SKU generation function
  • A created date/time value – generated at the time the record is created
  • The name of the person who created the record – taken from some identity source.

The record is declared as follows:
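A declaration along these lines (the property names are my assumption, based on the list above; the actual code is in the linked repository) would be:

```csharp
using System;

// Positional record syntax: the compiler generates the constructor,
// init-only properties, value equality and deconstruction.
public record Product(
    string ProductName,
    string Sku,
    DateTime CreatedUtc,
    string CreatedBy);
```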

You may have several places in your application where a product can be created, so adhering to the DRY principle, we want to encapsulate the creation logic in a single place.

In addition, the only property coming from actual user input is the product name, so we don’t want to drag all these other dependencies around the application.

This is where the factory can be a major help.

Building a Record Factory

I have created an example at https://github.com/stevetalkscode/RecordFactory that you may want to download and follow along with for the rest of the post.

In my example, I am making the assumption that

  • the SKU generation is provided by a custom delegate function registered with the container (as it is a single function, it does not necessarily require a class with a method to represent it – if you are unfamiliar with this approach within dependency injection, have a look at my previous blog about using delegates with DI)
  • the current date and time are generated by a class instance (registered as a singleton in the container) that may have several methods to choose from as to which is most appropriate (though again, for a single method, this could be represented by a delegate)
  • the user’s identity is provided by an ‘accessor’ type that is a singleton registered with the container that can retrieve user information from the current AsyncLocal context (E.g. in an ASP.NET application, via IHttpContextAccessor’s HttpContext.User).
  • All of the above can be injected from the container, leaving the Create method only needing the single parameter of the product name
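A sketch of what those registrations might look like in ConfigureServices (the delegate and type names are assumptions based on the description above, and the Microsoft.Extensions.DependencyInjection packages are assumed; the actual code is in the linked repository):

```csharp
// Assumed delegate shapes for the SKU generator and clock.
public delegate string GenerateSku(string productName);
public delegate DateTime GetCurrentTimeUtc();

public void ConfigureServices(IServiceCollection services)
{
    // TryAdd* only registers when no registration already exists,
    // so unit tests can register mocks first and these become no-ops.
    services.TryAddSingleton<GenerateSku>(
        name => "SKU-" + Guid.NewGuid().ToString("N"));

    // GetCurrentTimeUtc has no class implementation - the delegate
    // is supplied directly in the registration.
    services.TryAddSingleton<GetCurrentTimeUtc>(() => DateTime.UtcNow);

    services.TryAddSingleton<IUserAccessor, HttpContextUserAccessor>();
    services.TryAddSingleton<IProductFactory, ProductFactory>();
}
```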

(You may notice that there is no implementation of GetCurrentTimeUtc; this is provided directly in the service registration process in the StartUp class below.)

In order to make this all work in a unit test, the date/time generation and user identity generation are provided by mock implementations that are registered with the service collection. The service registrations in the StartUp class will only be applied if these registrations have not already taken place, by making use of the TryAddXxx extension methods.

Keeping the Record Simple

As the record itself is effectively just a read-only Data Transfer Object (DTO) with value semantics, it does not need to know how to obtain the information from these participants.

Therefore, the factory will need to do the following in order to create a record instance:

  • consume the SKU generator function referenced in its constructor and make a call to it when creating a new Product instance to get a new SKU
  • consume the DateTime abstraction in the constructor and call a method to get the current UTC date and time when creating a new Product instance
  • consume the user identity abstraction to get the user’s details via a delegate.

The factory will have one method, Create(), that takes a single parameter of productName (as all the other inputs are provided from within the factory class).
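A self-contained sketch of such a factory (type and member names are my assumptions; the dependencies are supplied manually here rather than by a container, to keep the example standalone):

```csharp
using System;

public record Product(string ProductName, string Sku, DateTime CreatedUtc, string CreatedBy);

public delegate string GenerateSku(string productName);
public delegate DateTime GetCurrentTimeUtc();
public delegate string GetCurrentUserName();

public class ProductFactory
{
    private readonly GenerateSku _generateSku;
    private readonly GetCurrentTimeUtc _getUtcNow;
    private readonly GetCurrentUserName _getUserName;

    // The delegates are captured here, but not invoked until Create is called.
    public ProductFactory(GenerateSku generateSku,
                          GetCurrentTimeUtc getUtcNow,
                          GetCurrentUserName getUserName)
    {
        _generateSku = generateSku;
        _getUtcNow = getUtcNow;
        _getUserName = getUserName;
    }

    // The caller only supplies the product name; everything else
    // comes from the injected dependencies.
    public Product Create(string productName) =>
        new(productName, _generateSku(productName), _getUtcNow(), _getUserName());
}
```

With the container wiring in place, only the product name crosses the factory boundary: factory.Create("Widget").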

Hiding the Service Locator Pattern

Over the years, the Service Locator pattern has come to be recognised as an anti-pattern, especially when it requires that the caller has some knowledge of the container.

For example, it would be easy for the ProductFactory class to take a single parameter of IServiceProvider and make use of the GetRequiredService<T> extension method to obtain an instance from the container. If the factory is a public class, this would tie the implementation to the container technology (in this case, the Microsoft .NET Dependency Injection ‘conforming’ container).

In some cases, there may be no way of getting around this due to some problem in resolving a dependency. In particular, with factory classes, you may encounter difficulties where the factory is registered as a singleton but one or more dependencies are scoped or transient.

In these scenarios, there is a danger of the shorter-lived dependencies (transient and scoped) being resolved when the singleton is created and becoming ‘captured dependencies’ that are trapped for the lifetime of the singleton and not using the correctly scoped value when the Create method is called.

You may need to make these dependencies parameters of the Create method (see warning about scoped dependencies below), but in some cases, this then (mis)places a responsibility onto the caller of the method to obtain those dependencies (and thus creating an unnecessary dependency chain through the application).

There are two approaches that can be used in the scenario where a singleton has dependencies on instances with shorter lifetimes.

Approach 1 – Making the Service Locator Private

The first approach is to adopt the service locator pattern (by requiring the IServiceProvider as described above), but to make the factory a private class within the StartUp class.

This hiding of the factory within the StartUp class also hides the service locator pattern from the outside world as the only way of instantiating the factory is through the container. The outside world will only be aware of the abstracted interface through which it has been registered and can be consumed in the normal manner from the container.

Approach 2 – Redirect to a Service Locator Embedded Within the Service Registration

The second way of getting around this is to add a level of indirection by using a custom delegate.

The extension methods for IServiceCollection allow for a service locator to be embedded within a service registration by using an expression that has access to the IServiceProvider.

To avoid the problem of a ‘captured dependency’ when injecting transient dependencies, we can register a delegate signature that wraps the service locator thus moving the service locator up into the registration process itself (as seen here) and leaving our factory unaware of the container technology.

At this point the factory may be made public (or left as private) as the custom delegate takes care of obtaining the values from the container when the Create method is called and not when the (singleton) factory is created, thus avoiding capturing the dependency.
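As a sketch (the delegate and service names are assumptions), the registration might look like this — the inner lambda defers resolution until the delegate is invoked inside Create:

```csharp
public delegate string GetCurrentUserName();

public void ConfigureServices(IServiceCollection services)
{
    // The service locator lives in the registration, not in the factory.
    // Resolution happens each time the delegate is invoked, not when the
    // singleton factory is constructed, so nothing is captured early.
    services.AddSingleton<GetCurrentUserName>(sp =>
        () => sp.GetRequiredService<IUserAccessor>().CurrentUserName);

    services.AddSingleton<IProductFactory, ProductFactory>();
}
```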

Special warning about scoped dependencies

Wrapping access to transient registered dependencies with a singleton delegate works as both singleton and transient instances are resolved from the root provider. However, this does not work for dependencies registered as scoped lifetimes which are resolved from a scoped provider which the singleton does not have access to.

If you use the root provider, you will end up with a captured instance of the scoped type that is created when the singleton is created (as the container will not throw an exception unless scope checking is turned on).

Unfortunately, for these dependencies, you will still need to pass these to the Create method in most cases.

If you are using ASP.NET Core, there is a workaround (already partially illustrated above with accessing the User’s identity).

IHttpContextAccessor.HttpContext.RequestServices exposes the scoped container but is accessible from singleton services, as IHttpContextAccessor is itself registered as a singleton service. Therefore, you could write a delegate that uses this to access the scoped dependencies. My advice is to approach this with caution as you may find it hard to debug unexpected behaviour.
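A hedged sketch of that workaround (GetScopedDependency and IMyScopedService are placeholder names; use with the caution noted above):

```csharp
public delegate IMyScopedService GetScopedDependency();

public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpContextAccessor();

    // IHttpContextAccessor is a singleton, but HttpContext.RequestServices
    // is the per-request scoped provider, so the scoped instance is
    // resolved at invocation time rather than captured at startup.
    services.AddSingleton<GetScopedDependency>(sp =>
    {
        var accessor = sp.GetRequiredService<IHttpContextAccessor>();
        return () => accessor.HttpContext!
            .RequestServices.GetRequiredService<IMyScopedService>();
    });
}
```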

Go Create a Product

In the example, the factory class is registered as a singleton to avoid the cost of recreating it every time a new Product is needed.

The three dependencies are injected into the constructor, but as already mentioned the transient values are not captured at this point – we are only capturing the function pointers.

It is within the Create method that we call the delegate functions to obtain the real-time transient values, which are then used with the caller-provided productName parameter to instantiate a Product instance.




I’ll leave things there, as the best way to understand the above is to step through the code at https://github.com/stevetalkscode/RecordFactory that accompanies this post.

Whilst a factory is not required for the majority of scenarios where you use a record type, it can be of help when you need to encapsulate the logic for gathering all the dependencies that are used to create an instance without polluting the record declaration itself.

Merging Multiple Git Repositories Into A Mono-Repo with PowerShell (Part 2)


Following on from Part 1 where I give the background as to the reasons that I wanted to move to a single Git repository (also known as a mono-repo), this post provides a walk-through of the PowerShell script that I created to do the job.

The full script can be found on GitHub in the MigrateToGitMonoRepo repository. The script makes use of three ‘dummy’ repos that I have also created there. In addition, it shows how to include repositories from other URLs by pulling in an archived Microsoft repository for MS-DOS.

Before Running the Script

There are a few things to be aware of when considering using the script.

The first is that I am neither a PowerShell nor Git expert. The script has been put together to achieve a goal I had and has been shared in the hope that it may be of use to other people (even if it is just my future self). I am sure there are more elegant ways of using both these tools, but the aim here was to get the job done as it is a ‘one-off’ process. Please feel free to fork the script and change it as much as you want for your own needs with my blessing.

The second thing to know is that the Git command line writes information to StdErr and therefore, when running, a lot of Git information will appear in red. All this ‘noise’ does make it hard to identify genuine errors. To this end, when developing and running the script, I used the PowerShell ISE to add breakpoints and step through the execution of code so I could spot when things were going wrong.

The last thing to be aware of is that there is no error handling within the script. For example, if a repo can’t be found or a branch specified for merging is not present, you may get unexpected results, such as empty temporary directories being rolled forward and then causing errors when Git tries to move and rename those directories.

With this said, the rest of the post will focus on how to use the script and some things I learnt along the way while writing it.

Initialising Variables

At the start of the script there are a number of variables that you will need to set.

The $GitTargetRoot and $GitTargetFolder variables refer to the file system directory structure. If you do not want a double-nested directory structure, you can override this further down in the script. The reason I did it this way is that I like to have a single root for all my Git repos on the file system (C:\Git) with a directory per repo under it.

The $GitArchiveFolder and $GitArchiveTags will be used as part of the paths in the target repo to respectively group all the existing branches and existing tags together so that there is less ‘noise’ when navigating to branches and tags created post-merge.

If all the existing repositories have the same root URL it can be set in the $originRoot variable. This can be overridden later on in the script to bring in repositories from disparate sources (in the script example, we pull in the MS-DOS archive repository from Microsoft’s GitHub account).

While the merge is in progress, it is important to avoid clashes with directory names on the file system and branch names in Git.

The $newMergeTarget and $TempFolderPrefix are used for the purpose of creating non-clashing versions of these. There is a clean up at the end of the script to rename temporary folders on the file system. The script does not automatically rename the target branch as this should be a manual process after the merge when ready to push to a new origin.
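As an illustration, the variable block might be initialised along these lines (all values other than $newMergeTarget, which the post later refers to as __RepoMigration, are placeholders to adjust for your own environment):

```powershell
# Illustrative values only - adjust for your own environment.
$GitTargetRoot    = 'C:\Git'              # root for all local repos
$GitTargetFolder  = 'MonoRepo'            # directory for the new repo
$GitArchiveFolder = 'archive'             # logical grouping for old branches
$GitArchiveTags   = 'archive-tags'        # logical grouping for old tags
$originRoot       = 'https://github.com/stevetalkscode/'
$newMergeTarget   = '__RepoMigration'     # temporary name of the new main branch
$TempFolderPrefix = '__Temp_'             # prefix for non-clashing folder names
```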

Define the Source Repositories and Merge Behaviour

The next stage in the script is to define all the existing repositories that you want to merge into a single repository. To keep the script agnostic in terms of PowerShell versions, I have used the pscustomobject type instead of classes (which are only supported from PowerShell 5 onwards).

In each entry, the following values should be set:

originRoot is usually left as an empty string to indicate that the root specified globally at the start of the script should be used. In the example, the last entry demonstrates pulling in a repo from a different origin.

repo is the repository within the origin. In the example, I have three dummy repositories in my GitHub account that can be used as a trial run to get used to how the script works before embarking on using your own repositories.

folder is the file system directory that the contents of the repository will be moved to once the migration is complete. This is used to ensure that there are no clashes between directories of the same name within different repositories. You are free to change how the overall hierarchy is structured once the migration is complete.

subDirectory is usually an empty string, but if you have several repositories that you want to logically group together in the file system hierarchy, you can set folder to the same value, e.g. Archived, and then use subDirectory to define the target for each repo under that common area.

mergeBranch is the branch in the source repository that you want to merge into the common branch specified in $newMergeTarget. In most cases, this will be your ‘live’ branch with a name like main, master, develop or build. If left as an empty string, the repository will be included in the new mono-repo, but will effectively be orphaned into the archive branches and tags.

In my real-world case, the team had a few repositories that were created and had code committed, but the code never went anywhere, so it was not needed in the new main branch. However, we still wanted access to the contents for reference.

tempFolder is a belt-and-braces effort to ensure that there are no folder clashes if the new folder name in folder happens to exist in another repository while merging. The value here will be appended to the global $TempFolderPrefix with the intention of creating a unique directory name.
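Putting those fields together, each source repository is described by an entry like this (the repository and folder names are illustrative; the MS-DOS entry mirrors the example mentioned earlier):

```powershell
$sourceRepos = @(
    [pscustomobject]@{
        originRoot   = ''          # empty: use the global $originRoot
        repo         = 'DummyRepo1'
        folder       = 'App1'
        subDirectory = ''
        mergeBranch  = 'master'    # branch to merge into the common branch
        tempFolder   = 'App1'
    },
    [pscustomobject]@{
        originRoot   = 'https://github.com/microsoft/'
        repo         = 'MS-DOS'
        folder       = 'Archived'
        subDirectory = 'MS-DOS'
        mergeBranch  = ''          # empty: archived branches/tags only
        tempFolder   = 'MSDOS'
    }
)
```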

File System Clean Up

Before getting into the main process loop, the script does some cleaning up to ensure that artefacts from previous runs of the script are deleted from the file system, ensuring a clean run. You may want to change this so that previous runs are archived by renaming the folder, allowing you to compare results.

Once cleaned up, a new Git repository is created and an initial commit is made on the new branch. This is required so that Git merges can take place from here on.

The Main Loop

With the array of source metadata created, we move into the main loop. I won’t go into a line by line breakdown here, but instead give an overview of the process.

The first thing to do for each repository is to set it as an origin and pull down all the branches to the file system. An important thing to note about the git pull is the --allow-unrelated-histories switch. Without this, Git will complain that there are no common histories to merge.

As an aside, if your source repository is large, this may take some time. When developing the script, I thought something had gone wrong – it hadn’t – it was just slow.

With that done, we can then enter a loop of iterating through each branch and checking it out to its new branch name in the new repository (in effect, performing a logical move of the branch into an archive branch, but really this is just using branch naming conventions to create a logical hierarchy).

You may notice some pattern matching going on in this area of the script. The reason for this is that the git branch -r command that lists all the remote branches includes a line indicating where origin/HEAD is pointing. We do not need this, as we are only interested in the actual branch names.

Screen shot of Git output when listing remote branches

Once all the branches have been checked out and renamed, we return to our common branch and remove the remote.

At this point, if we have specified a branch to merge into our common branch, the script will then

  • merge the specified branch, again using the --allow-unrelated-histories switch to let Git know that the merge has no common history to work with
  • create a temporary folder (as defined in the array of metadata) in the common branch
  • move the complete contents of the branch to that temporary folder
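The steps above can be sketched with plain git commands in a throwaway demo (all paths, branch and folder names here are illustrative and are not the script’s actual values):

```shell
# Minimal, self-contained demonstration of the core merge-and-move steps.
set -e
work=/tmp/monorepo-demo
rm -rf "$work" && mkdir -p "$work" && cd "$work"

# A legacy repo with its own, unrelated history.
git init -q legacy
git -C legacy checkout -q -b trunk
echo "legacy code" > legacy/app.txt
git -C legacy add app.txt
git -C legacy -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "legacy content"

# The new mono-repo, with an initial commit so merges are possible.
git init -q mono
cd mono
git checkout -q -b __RepoMigration
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# Pull in the legacy repo; without --allow-unrelated-histories
# Git refuses to merge histories with no common ancestor.
git remote add legacy ../legacy
git fetch -q legacy
git -c user.email=demo@example.com -c user.name=demo \
    merge -q --allow-unrelated-histories legacy/trunk -m "merge legacy"

# Move the merged content into its own folder to avoid clashes.
mkdir LegacyApp
git mv app.txt LegacyApp/app.txt
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "move legacy content into its folder"
```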

Care is needed in this last step once we have performed the first merge, as the common branch will include previously merged repositories in their temporary folders. Therefore, to avoid these temporary folders being moved, we build up a list of the temporary folders we have created on each iteration and add them to the exclude list that is fed into the git mv command.

At this point, an error can creep in if the branch name specified in the item metadata does not exist in the source repository. When writing the script I received Git errors indicating there were no files to move and ended up with empty temporary folders littered around the new repository.

Again, you may choose to put some error handling in or, on the other hand, just correct the branch name and repeat the process from the start again.

Before moving to the next item in the metadata array, the script copies all the tags to the logical folder of tags specified in $GitArchiveTags.

The Post Migration Clean Up

Once the migration has completed, there is a bit of tidying up to do.

If you remember, to avoid clashes between directories while the migration takes place, we used temporary directory names. We now need to do a sweep through to rename those temporary directories to their intended destination names.

At this point, we are ready with the final mono-repo.

If you have run the script ‘as-is’ using my demo values, when you look at your file system, it should look like this

Screen shot of file system using the examples in the script

If you use a tool such as Atlassian SourceTree, you get a visual idea of what we have achieved with the merge process.

Screen shot of SourceTree view of the migrated repository using the examples in the script

Before Pushing to a Remote

With our migrated repository, we are now almost ready to push it up to a remote (be it GitHub, Azure DevOps, BitBucket et al).

However, at this point you may want to do some tidying up of renaming the __RepoMigration branch to main.

The repository is now in a state where you are ready to push it to a remote ‘as-is’. On the other hand, you may want to create an empty repository in the remote up front and then merge the migrated repository into it. If you do this, remember to use git pull --all --allow-unrelated-histories -v after adding the new remote.

At the end of the script, there is a commented out section that provides the commands I used to push up all the branches and tags created.

Alternatively, you may want to take manual control via the Git command line (or a GUI tool such as SourceTree).

Lessons Learnt

I have already mentioned earlier about problems with non-existent branches being specified, but there are other things to know.

My first piece of advice is to use the PowerShell Integrated Script Editor (ISE) to single step your way through the script using my dummy repositories to familiarise yourself with how the script works.

Once familiar, start by using one or two of your own repositories that are small and simple to migrate, to get a feel for how you want to merge branches into the new ‘main’ branch.

By single stepping, you will get instant feedback of errors occurring. As mentioned above, because Git writes to StdErr, it is hard to tease out the errors if running the script from start to finish.

Next, don’t automate pushing the results to your remote until you are happy that there is nothing missing and that the merges specified meet how you want to take the repository forward.

If you use a tool like SourceTree, don’t leave it running while the migration is taking place. Whilst it feels useful to see graphically what is happening while the script is running, it slows the process down and can in some cases cause the script to fail as files may become locked. Wait until the migration is complete and then open SourceTree to get a visual understanding of the changes made.

My last lesson is to have patience.

When I worked on this using real repositories, some of which had many years of histories, there are some heart-stopping moments when the repositories are being pulled down and it feels like something has gone wrong, but it hasn’t – it’s just Git doing its thing, albeit slowly!

Moving Forward

One of the downsides of mono-repos is size. In my real-world scenario that inspired this script and blog, the final migrated repo is 1.4GB in size. This is not massive compared to the likes of some well known mono-repos that are in the hundreds of gigabytes in size.

Once you have pushed the repository up to a remote, my advice is to clone the repo into a different local directory and only check out the main branch (especially if you have a lot of orphaned archive branches that you don’t need to pull).

If disk size is still an issue, it is worth looking at the Git Virtual File System to limit the files that are pulled down to your local system.



I hope that the two posts and the script are of help to people.

There is a lot of debate about the relative merits of poly-repo vs. mono-repo that I haven’t gone into. My view is to do what fits best and enables your team to work with minimal friction.

The reason for the migration that inspired this post was having difficulties in coordinating a release for a distributed monolith that was spread across several repositories. If you have many repos that have very little to do with one another (being true microservices or completely unrelated projects), there is probably no benefit to moving to a mono-repo.

In summary, to use a well worn cliché, “it depends”.

Merging Multiple Git Repositories Into A Mono-Repo with PowerShell (Part 1)

Following on from my last blog about the problems I had setting up Octopus Deploy with a service account, this is another DevOps related post that describes the approach I have taken to merging multiple Git repositories into a single Git repository (commonly known as a mono-repo).


To be clear, I am not going to provide a wide ranging discussion about the relative merits and disadvantages of using a mono-repo for source control vs. having one repository per project (poly-repo).

In the end it comes down to what works best for the team to manage the overall code base by reducing friction.

If you have lots of disparate projects that have no impact on each other’s existence or a true micro-service architecture where each service is managed within its own repository, there is very little point in bringing these into a mono-repo.

If, on the other hand, you have a distributed monolith where feature requests or bug fixes may be spread across several repositories and require synchronisation when negotiating their way through the CI/CD pipeline (or worse, having to jump through hoops to develop or test in concert on your local machine), then a move to a mono-repo may be of benefit.

There is no ‘one size fits all’ and you may end up with a hybrid of some projects occupying their own repositories, whilst others live in one big repository.


What prompted the need for a move to a mono-repo in my case was having to coordinate features within a distributed monolith where a feature request may span one, some or all of four key repositories and the only coordinating factor is a ticket number in the branch names used in each of the repositories.

This causes problems when having to context switch between multiple issues and making sure that

  • the correct branches are checked out in the repositories
  • configuration files are amended to point to appropriate local or remote instances of services
  • pull requests to branches monitored by TeamCity are coordinated, as these also trigger Octopus Deploy to deploy to our common development environment
  • version numbers for different projects are understood and the inter-relationships are documented.

Now moving to a mono-repo is not going to solve all these problems, but it is the first step on the road.

(Moving To) A Whole New World

As described in my previous blog post, the team I am currently working with is in the process of completely rebuilding the CI/CD pipeline with the latest versions of Team City and Octopus Deploy.

This has provided the ideal opportunity to migrate from the current poly-repo structure to a new mono-repo. But how should we approach it?

Approaches to Consider

At it’s simplest, we could just take a copy of the current ‘live’ code and paste it into a new repository. The problem with this would be the loss of the ability to look at the (decade long) history in the context of the current repository. Instead, this would require hopping over to the existing repositories to view the history of files. We could live with, but is not ideal as it introduces friction of a different kind.

So, somehow, we need to try to migrate everything to one place, but this comes with complications.

In each of the current repositories, the source code is held at the root of the repo, so trying to merge the repositories ‘as-is’ would cause no end of merge conflicts and muddy the code base. Therefore, the first thing we need to do is move the source code down from the root into dedicated (uniquely named) folders.

This could be done within the existing repositories before we think about merging them. However, it would mean having to revisit all the existing Team City projects to repoint the watched projects at the new folders, and it would also disrupt any work currently in progress. So this approach was ruled out.

There is also the problem of what to do with all the branches and tags in the old repositories. Ideally we want to also bring them along into the new repository, but we have a similar problem regarding trying to avoid naming conflicts (as I mentioned above, the branch names are the coordinating factor that we currently use, so these will be the same in each of the repositories where code has changed for a particular feature), so these will need renaming as well.

I’ll Tell You What I Want (What I Really, Really Want)

With all the above in mind, we need a migration plan that can accommodate the following requirements:

  • No changes required to the existing repositories
  • The full history needs to be migrated
  • All live branches need to be migrated
  • All tags need to be migrated
  • Avoid clashes between migrated repositories when brought into a single structure
  • Allow for a pre-migration to be run so that the new Team City can be set up without impacting the existing repositories and existing CI/CD pipeline
  • The process must be repeatable with minimum effort so that any problems can be identified and corrected, but also so that the new CI/CD pipeline can be built in preparation for a low-impact cut-over.

At first this seemed like a tall order, but ultimately what this boils down to is creating a new repository and then repeating the following steps for each legacy repository to be merged:

  • Pull each legacy repository into the new repository
  • Rename the legacy branches and tags so they do not clash
  • Select a ‘live’ branch to merge into the main branch of the new repository and check it out
  • Move the content of the ‘live’ branch to a sub-folder that will not clash as other repositories are subsequently migrated
  • Merge the ‘live’ branch into the main branch

These steps can all be achieved by a combination of Git commands and file system commands which can be put together into a script.
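As an indicative sketch of those steps (using two throwaway local repos created on the fly to stand in for the real legacy repositories; all names here are illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Two throwaway 'legacy' repos stand in for the real ones
for name in legacyA legacyB; do
  git init -q "$name"
  (
    cd "$name"
    git config user.email demo@example.com
    git config user.name Demo
    echo "code for $name" > app.txt
    git add .
    git commit -qm "initial $name"
    git branch -m master
    git tag v1.0
  )
done

# The new combined repository
git init -q combined
cd combined
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m "root commit"
git branch -m main

for name in legacyA legacyB; do
  # 1. Pull the legacy repository into the new repository
  git remote add "$name" "../$name"
  git fetch -q "$name" --tags

  # 2. Rename legacy tags so they do not clash between repositories
  #    (fetched branches are already namespaced under the remote name)
  for tag in $(git tag); do
    case "$tag" in
      */*) ;;  # already renamed on a previous pass
      *) git tag "$name/$tag" "$tag" && git tag -d "$tag" > /dev/null ;;
    esac
  done

  # 3. Check out the legacy 'live' branch under a unique local name
  git checkout -q -b "$name-master" "$name/master"

  # 4. Move its content into a dedicated sub-folder
  mkdir -p "$name"
  git mv app.txt "$name/app.txt"
  git commit -qm "move $name code into $name/"

  # 5. Merge the 'live' branch into main (the histories are unrelated)
  git checkout -q main
  git merge -q --allow-unrelated-histories -m "merge $name" "$name-master"
done
```

After running this, `main` in the combined repository contains a `legacyA/` and a `legacyB/` folder, each with its own full history, and the tags are namespaced per legacy repo.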

In Part 2, I will show you how I created a PowerShell script to achieve the goal.

A Couple of Problems When Installing Octopus Deploy with a Domain Account

This post is primarily for my future self to document how to deal with the problems I had installing Octopus Deploy using a domain account as the service account.


I am currently working with a team that has old versions of TeamCity and Octopus Deploy and want to move to the latest versions. The upgrade path from these (very old) versions to the latest versions is complicated and therefore we have set up a new server to host the latest versions.

The (virtual) server we will be installing on is hosted in a managed environment (with no self-service facilities). This makes taking snapshots/checkpoints to roll back to if things go wrong during installation a bit more complicated. It also means that we effectively only get one shot at getting things right.

For these reasons, I created a small test lab at home to practice all the steps required so that I could create an installation step-by-step ‘run-book’ to do the real installation with as few hiccups as possible.

In my test lab I have the following set up as Hyper-V virtual machines:

  • A Windows 2016 server that is the Active Directory Domain Controller and hosts a SQL Server 2017 database server
  • A Windows 2016 server that doubles up as the TeamCity and Octopus Deploy server
  • A Windows 2012 server that will be a target server for deploying web sites to via an Octopus Deploy Tentacle.

This three server set-up does not truly represent a real-world environment (for example, the SQL Server would not normally reside on the domain controller), but is enough to emulate the core elements of the environment that I will be using in the real world.

The Challenge

The previous installation of Octopus Deploy used a local Windows system account, but for the new installation, we want to use a domain account:

  1. to connect to SQL Server using Windows Authentication, and
  2. to have tight control over what the service can do, unlike the Local System account, which is tantamount to having full local administrator rights.

Preparing the Windows Environment

The details of the requirements for the domain service account are provided on the Octopus Deploy web site.

At the time of writing, the details are a bit vague in places and require some further digging.

Based on this, I went through the following actions.

  • Create the Windows domain account with password locked down so it can be used as a service account

Screen shot of setting password rules on windows account to be used as a service account

Then on the server that Octopus Deploy will be installed on:

  • Create a folder called ‘Octopus’ on the D: drive for the Octopus Deploy application to reside in and grant Full Control to the Octopus service account created in AD
  • Create a folder called ‘OctopusData’ on the E: drive that will be used as the Octopus ‘home’ location and grant Full Control to the Octopus service account created in AD
  • Grant the Octopus service account created in AD the Log on as a Service permission. 

Preparing the Database

Details of the requirements for the SQL Server version and creating a database can be found on the Octopus Deploy website but in summary, if the SQL Server is not on the same server as Octopus, the following needs to be done before installation starts.

  • Ensure network connectivity to the server over TCP/IP
  • Ensure no firewall rules preventing access over TCP/IP on appropriate ports between the server hosting SQL Server and the server hosting Octopus Deploy
  • Create an empty database in SQL Server 2017 called Octopus 
  • Set the collation of the database to case insensitive using
    ALTER DATABASE [Octopus] COLLATE Latin1_General_CI_AS
  • Create a SQL Server login for the Octopus service account created in AD
  • Create a user in the Octopus database for the Octopus service account login
  • Set the default schema for the user to [dbo] so the account has full control over the database.
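As an indicative sketch of the login and user steps above (the account and database names are my examples; granting db_owner membership is a common way to give the account the full control it needs to create tables during installation):

```sql
-- Run against the SQL Server instance hosting the Octopus database
CREATE LOGIN [MyDomain\Octopus] FROM WINDOWS;
GO
USE [Octopus];
CREATE USER [MyDomain\Octopus] FOR LOGIN [MyDomain\Octopus]
    WITH DEFAULT_SCHEMA = [dbo];
-- Assumption: db_owner membership so Octopus can create its schema objects
ALTER ROLE [db_owner] ADD MEMBER [MyDomain\Octopus];
GO
```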


Next Steps Before Installation

At this point there are some other things that I should have done, but did not find out until I had problems further down the line.

To view these next steps, scroll down to the ‘Fixing Octopus Deploy Not Starting Due to Access Denied Error’ section below.

Starting the Installation

Starting the installation is simple enough, with just one change from the default option, and that is to change the installation folder to D:\Octopus (where permissions have already been granted).

On clicking ‘Finish’ there is a brief pause while PowerShell is checked and then the ‘Getting Started’ Wizard is opened.


Using the Setup Wizard

After entering the licence key on the first page, the next step is to change the ‘Home’ directory to the E: drive where there is plenty of disk space.

Clicking Next moves to the next page, where it is time to select the service account.

By default, this is set to the Local System account, but as described above, we want to use a custom domain account, so we change the radio button option and enter the domain credentials (using the Select User to find the service account created in Active Directory).

The next step is to point to the Octopus database that has been created in SQL Server 2017 and for which the service account has been granted DBO rights so that the database tables and other artefacts can be created. 

The next screen prompts for a port for the web portal. As the server that I am installing on has IIS, TeamCity and a couple of other self-hosted web applications, it is not appropriate to use the default port 80. In the example below I am using port 93.

This is one of the areas where things started to go wrong later down the track!

Screen shot of the Octopus Deploy wizard page for selecting the Web Portal port to listen on

Moving on to the next page, this is where the administrator for Octopus Deploy is selected.

This is the next place that there are a few gotchas!

The screen looks innocent enough. When you first arrive on it, the Authentication Mode drop-down defaults to using usernames and passwords in Octopus. However, to make life simpler, we want to use Active Directory. When this is selected, the username will default to Administrator.

DO NOT JUST ACCEPT THIS! This will be the local Administrator account for the machine you are installing on.

Instead, use the Select User link to bring up the standard Windows dialog for selecting a domain user. 

As I was installing in a test lab, I was a bit lazy and just selected the Domain Administrator account. DO NOT DO THIS EITHER!

The problem that you will find (after installation is complete), is that, by default, the built in Domain Administrator account does not have a User Principal Name (UPN) and without a UPN, Octopus cannot find the account in Active Directory.

When this happens, you get a “No fallback name was provided” error when trying to login.

Error screen on login

At that point, you are pretty much locked out of Octopus and either have to add a UPN to the Domain Administrator account or re-install Octopus from scratch to use another user as the Octopus administrator (though you may be able to change the authentication provider through the command line; I haven’t tried this! See https://octopus.com/docs/octopus-rest-api/octopus.server.exe-command-line/configure.)

As this can cause a lot of head-scratching, it would be nice if the installer could verify whether the selected account will work at the login stage before proceeding to the next stage.

Instead, depending on your team setup, you should select your own domain account from the dialog, or (if you have the right AD permissions) create a dedicated account for this. The second option adds complexity, as you will need to log in to Windows using that account to make full use of Active Directory integration, or (if using Chrome, for example) enter the credentials using Forms Authentication or via a browser dialog box.

To get around all this, I changed the account to my domain account with a view to adding other administrators later.

Another feature that would be nice here is to be able to make an AD group the administrators rather than one specific user as it is all too easy to forget how this has been set up when that person leaves and their AD account is deactivated.

Clicking Next takes you to the final page to click Install and get ready to get up and running with Octopus Deploy.

Except … things did not go according to plan!

So Where Did It Go Wrong?

Looking at both the Windows Event Log and the Octopus (text) logs, I found a couple of occurrences of this:

Microsoft.AspNetCore.Server.HttpSys.HttpSysException (5): Access is denied.

In short, whilst I had prepared the basics of granting file system permissions to the service account, I had failed to grant permissions to access the HTTP traffic over the ports that Octopus needs to monitor.

I had made the mistake of assuming that the installer would take care of this, not having really understood the prerequisite notes on the web site.

This is where I found help on the Octopus website at https://help.octopus.com/t/octopus-server-not-starting-httpsysexception-from-microsoft-aspnetcore-server-httpsys-urlgroup-registerprefix/25025/3.

In that post, it explains that the NETSH command is required to provide permissions to HTTP access on certain ports.

NETSH http add urlacl url=https://+:port/ user=domain\username listen=yes

An explanation of the syntax can be found at https://docs.microsoft.com/en-us/windows-server/networking/technologies/netsh/netsh-http#add-urlacl, however that is still a bit dry and didn’t really explain what the problem was.

A bit of searching led me to this Stack Exchange answer https://superuser.com/questions/1272374/in-what-scenarios-will-i-use-netsh-http-add-urlacl that made things a bit clearer and in turn pointed to this Microsoft explanation https://docs.microsoft.com/en-us/windows/win32/http/namespace-reservations-registrations-and-routing?redirectedfrom=MSDN.

Fixing Octopus Deploy Not Starting Due to Access Denied Error

So, here’s the fix that is required to address the Access Denied error.

In short, you have to grant the permissions to the service account to listen for HTTP requests on certain ports.

To do this, open a Command Prompt on the Windows Server you will be installing / have just installed on using Run As Administrator and enter the following command for each of the ports required, substituting port_number with the actual port number required:

netsh http add urlacl url=http://+:port_number/ user=domain\username listen=yes

In my case, as I am using port 93 instead of 80:

netsh http add urlacl url=http://+:93/ user=MyDomain\Octopus listen=yes
netsh http add urlacl url=http://+:10943/ user=MyDomain\Octopus listen=yes

The entry for port 10943 is so that Octopus tentacles can poll the server. See https://octopus.com/network-topologies for details.


This blog is the result of an afternoon of re-applying Hyper-V snapshots and installing Octopus (it took seven attempts in all) until all the quirks I describe above had been ironed out.

I must admit, I am surprised that, given the installer knows the ports and the service account, it can’t do this work for you. The same goes for checking that the administrator account meets requirements such as having a UPN.

Perhaps it is a UAC elevated-rights problem. It certainly seems to be a common thing to have to do (try searching for “NETSH http add urlacl” and you will find results for various software packages that document this manual step), though this is the first time I have had to address it manually.

Don’t get me wrong – I love Octopus Deploy, but as this was my first time installing it from scratch, having to go through these hoops to get the latest version installed and running as I need it did test my patience.

Simplifying Dependency Injection with Functions over Interfaces

In my last post, I showed how using a function delegate can be used to create named or keyed dependency injection resolutions.

When I was writing the demo code, it struck me that the object-oriented code I was writing seemed bloated for what I was trying to achieve.

Now don’t get me wrong, I am a big believer in the SOLID principles, but when an interface has only a single function, having to create multiple class implementations seems like overkill.

This was the ‘aha’ moment you sometimes get as a developer where reading and going to user groups or conferences plants seeds in your mind that make you think about problems differently. In other words, if the only tool you have in your tool-belt is a hammer, then every problem looks like a nail; but if you have access to other tools, then you can choose the right tool for the job.

Over the past few months, I have been looking more at functional programming. This was initially triggered by the excellent talk ‘Functional C#’ given by Simon Painter (a YouTube video from the Dot Net Sheffield user group is here, along with my own talk – plug, plug, here).

In the talk, Simon advocates using functional programming techniques in C#. At the end of the talk, he recommends the Functional Programming in C# book by Enrico Buonanno. It is in Chapter 7 of this book that the seed of this blog was planted.

In short, if your interface has only a single method that is a pure function, register a function delegate instead of an interface.

This has several benefits

  • There is less code to write
  • The intent of what is being provided is not masked by having an interface getting in the way – it is just the function exposed via a delegate
  • There is no object creation of interface implementations, so there are fewer memory allocations and initial execution may be faster
  • Mocking is easier – you are just providing an implementation of a function signature without the cruft of having to hand craft an interface implementation or use a mocking framework to mock the interface for you.
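As a minimal sketch of the idea (the delegate and class names here are my own illustrations, not taken from the demo):

```csharp
using Microsoft.Extensions.DependencyInjection;

// A strongly typed delegate describes the capability directly;
// no interface or implementing class is required.
public delegate double ConvertToKelvin(double value);

public static class Registration
{
    public static void Configure(IServiceCollection services)
    {
        // The registration maps the delegate straight onto a function.
        services.AddSingleton<ConvertToKelvin>(celsius => celsius + 273.15);
    }
}

// A consumer just takes the delegate as a constructor parameter,
// and a unit test can pass any lambda in its place.
public class TemperatureReporter
{
    private readonly ConvertToKelvin _toKelvin;
    public TemperatureReporter(ConvertToKelvin toKelvin) => _toKelvin = toKelvin;
    public string Report(double celsius) => $"{_toKelvin(celsius)} K";
}
```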

So with this in mind, I revisited the demo from the last post and performed the following refactoring:

  • Replaced the delegate signature to perform the temperature conversion directly instead of returning an interface implementation with a conversion method
  • Moved the methods in the class implementations to static functions within the startup class (though they could easily live in a new static class)
  • Changed the DI registration to return the result from the appropriate static function instead of using the DI container to find the correct implementation and forcing the caller to execute the implementation’s method.

As can be seen from the two code listings, Listing 1 (Functional) is a lot cleaner than Listing 2 (Object-Oriented).

Listing 1

Listing 2

Having done this, it also got me thinking about how I approach other problems. An example of this is unit-testable timestamps.

Previously, when I needed a timestamp to indicate the current date and time, I would create an ICurrentDateTime interface with a property for the current date and time. The implementation for production use would be a wrapper over the DateTime.Now property, but for unit testing purposes it would be a mock with a fixed date and time to fulfil a test criterion.

Whilst not a pure function, the same approach used above can be applied to this requirement, by creating a delegate to return the current date and time and then registering the delegate to return the system’s DateTime.Now.

This achieves the same goal of decoupling code from the system implementation via an abstraction, but negates the need to create an unnecessary object and interface to simply bridge to the underlying system property.
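Sketched out (the delegate name is my own assumption), this looks like:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical delegate standing in for an ICurrentDateTime interface.
public delegate DateTime CurrentDateTime();

public static class ClockRegistration
{
    public static void Configure(IServiceCollection services)
    {
        // Production registration: bridge straight to the system clock.
        services.AddSingleton<CurrentDateTime>(() => DateTime.Now);
    }
}

// In a unit test, no mock is needed; just supply a fixed timestamp:
// CurrentDateTime frozen = () => new DateTime(2020, 11, 1, 12, 0, 0);
```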

If you are interested in looking at getting into functional programming while staying within the comfort zone of C#, I highly recommend Enrico’s book.

The demo of both the OO and Functional approaches can be found in the GitHub project at https://github.com/configureappio/NamedDiDemo.




Getting Dependencies by Name or Key using the .NET Core Container (Part 2)

If you have come here from a search engine, I suggest reading Part 1 first to get some background as to the problem that this post solves.

In Part 1, I gave the background as to why you may have a requirement to have a way of resolving instances from a DI container using a string name or some other key.

In this part, I propose a solution that leaves your code ignorant of any DI container and that is easily mockable for unit tests.

Making Use of Strongly Typed Delegates

A key thing to understand about DI containers is that the service type that is registered is exactly that – a type, not just interfaces and classes. Therefore, you can register delegates as well.

The idea of using delegates for resolution is not new.  There is an example in the Autofac documentation and this article on c-sharpcorner.

However, for .NET Core, when you search for something like ‘.NET Core named dependency injection’, you tend to only find the answer buried in StackOverflow answers.

That is why I have written this post, so that I can find the answer later myself, and also hopefully help others find it too.

Func<T> Delegates Vs. Strongly Typed Delegates

Before getting into the detail of using delegates as service types, we need to talk about generic Func delegates vs. strongly typed delegates.

The generic Func type was introduced in C# 3.0 and is a generic delegate included in the System namespace that has multiple overloads accepting between zero and sixteen input parameters (denoted T1 to T16) and a single output parameter (TResult). These are convenience types so that, as a developer, you do not need to repeatedly create your own custom delegates.

For the purposes of retrieving an instance of a type from the DI container, the various Func delegates can be used as the type for service registration and as a parameter to the constructors of classes. E.g. Func<string, IServiceType>.

However, I have two reasons for not using this approach.

  1. If you change the signature of the service registration to a different overload of Func, you have no tools in the IDE to reflect that change throughout all the references (as the dependency resolution does not take place until runtime). Instead, you will need to perform a text ‘search and replace’ task across your whole code base (and hope that there are no other uses of the generic signature for other purposes). A strongly typed delegate is a unique type, so when changing the signature of the delegate, the compiler will indicate where references and instances are broken (or a tool like ReSharper will do the work for you)
  2. Though this is probably a very rare occurrence, you may need two different factories with the same signature. If the generic Func<T, IServiceType> is used, it will not be possible to differentiate the two factories. With strongly typed delegates, it is the delegate type that is registered and referenced in the constructor, so the same signature can be used with two different delegate types and the DI container will resolve them correctly.
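To illustrate that second point (the type names here are assumed for illustration):

```csharp
// Both factories share the shape string -> IDeviceState, but each
// delegate is a distinct type, so the container can tell them apart.
public delegate IDeviceState OnlineStateFactory(string key);
public delegate IDeviceState OfflineStateFactory(string key);

// A single Func<string, IDeviceState> registration could not
// distinguish the two; the last registration would win.
```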

How Does Using Typed Delegates Help?

By using typed delegates, the code that requires the named/keyed dependency is ignorant of the DI container technology that is resolving the instance for it.

This differs from other approaches where either the consuming class itself or an intercepting factory class takes the IServiceProvider as a dependency in its constructor and has logic to resolve the instance itself. This defeats the purpose of dependency injection, as it creates glue between the classes and the container.

By registering the delegate within the DI container registration code, it keeps the codebase (outside of DI registration) ignorant of how the name/key resolution is working.

The Demo

For the rest of this post, I am going to use a test project (which can be found on GitHub at https://github.com/configureappio/NamedDiDemo) that converts temperatures to the Kelvin scale from four different scales: Celsius, Fahrenheit, Rankine and Kelvin.

To do this, four formulas are needed:

  • Celsius (°C) to Kelvin (K): T(K) = T(°C) + 273.15
  • Fahrenheit (°F) to Kelvin (K): T(K) = (T(°F) + 459.67) × 5/9
  • Rankine (°R) to Kelvin (K): T(K) = T(°R) × 5/9
  • Kelvin (K) to Kelvin (K): T(K) = T(K)

Each formula is implemented as a mapper class with an implementation of the IKelvinMapper interface as defined below.

Each of the implementations is registered with the DI container

For the caller to retrieve the correct conversion, we need a delegate that uses the input temperature scale to retrieve the correct mapper and return it as the IKelvinMapper interface.

Lastly, a lookup function is registered with the DI container against the delegate type as the service type to perform the lookup of the correct mapper. To keep things simple, the lookup is based on the first (capitalised) letter of the input temperature scale (normally a more robust lookup would be used, but I’ve tried to keep it simple so as not to distract from the core purpose of what I am demonstrating).
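The actual listings live in the GitHub repository; as an indicative sketch (the type names are my assumptions, not necessarily those in the repo), the pieces fit together like this:

```csharp
using Microsoft.Extensions.DependencyInjection;

public interface IKelvinMapper
{
    double ToKelvin(double value);
}

public class CelsiusToKelvinMapper : IKelvinMapper
{
    public double ToKelvin(double value) => value + 273.15;
}
// FahrenheitToKelvinMapper, RankineToKelvinMapper and
// KelvinToKelvinMapper follow the same shape.

// Strongly typed delegate used as the DI service type.
public delegate IKelvinMapper KelvinMapperFactory(string scale);

public static class MapperRegistration
{
    public static void Configure(IServiceCollection services)
    {
        services.AddSingleton<CelsiusToKelvinMapper>();
        // ... register the other three mappers likewise ...

        // The lambda captures IServiceProvider, so the container still
        // instantiates the mappers and manages their lifetimes.
        services.AddSingleton<KelvinMapperFactory>(sp => scale =>
            char.ToUpperInvariant(scale[0]) switch
            {
                'C' => sp.GetRequiredService<CelsiusToKelvinMapper>(),
                // ... 'F', 'R' and 'K' map to their respective mappers ...
                _ => throw new System.ArgumentException($"Unknown scale {scale}")
            });
    }
}
```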

With the returned interface implementation, a conversion from the input temperature scale and value can be made to a consistent value in the Kelvin scale.

So How Does It Work?

The demo works on a very simple basis of leveraging the DI container to do all the hard work of determining which mapping converter is required. The caller just needs to know that it uses the delegate to take in the input temperature scale and value and return the value in Kelvins.

Object Instantiation

Depending on the complexity of the type to be instantiated, there are three ways to instantiate the object:

  1. Perform the instantiation with the ‘new’ keyword. This is fine if the constructor of the type has no dependencies and a new instance is required on each occasion. However, I would discourage this approach as it is not an efficient way of managing the lifetimes of the objects and could lead to problems with garbage collection due to long-lived objects
  2. Rely on the DI container to instantiate the object. This is the approach taken in the demo. This has two benefits. The first is that if the object can be a singleton, then the DI container takes care of that for you provided the type has been registered as a singleton lifetime when registered. The second is that if the type has dependencies in the constructor, the DI container will resolve these for you
  3. Lastly, if one or more parameters to the type constructor need to be generated from the delegate input (and are therefore not directly resolvable from prior DI registrations), but other parameters can be resolved, then the ActivatorUtilities.CreateInstance method can be used to create an instance using a hybrid of provided parameters and the DI container resolving the parameters not provided.

This is where you take advantage of the ability to register a service resolution using a lambda expression as it provides access to the IServiceProvider container instance to retrieve instances from the service provider for (2) and (3) above.
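For case (3), a hedged sketch of such a registration (ConfigurableMapper is a hypothetical type whose constructor takes the scale string plus container-resolved dependencies):

```csharp
// Registration fragment: 'services' is the IServiceCollection.
// 'scale' comes from the delegate input; any other constructor
// parameters of ConfigurableMapper are resolved from the container.
services.AddSingleton<KelvinMapperFactory>(sp => scale =>
    ActivatorUtilities.CreateInstance<ConfigurableMapper>(sp, scale));
```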

Deciding Which Type to Instantiate

For simple keys, you can use a switch statement inside a lambda expression to derive the type required and let the DI container perform the object instantiation for you. (This is the approach I have used in the demo project.)

However, if you have many different items or complex keys, you may want to consider other ways of deriving the service type required.

For example, you could register a singleton Dictionary<string, Type> to map keys to types; if there are many keys, you may gain a performance boost from the hash-based lookup.

Instead of using the out-of-the-box generic dictionary though, I recommend creating a dedicated class that implements IDictionary<string, Type>, for the same reasons that I recommend using custom delegates over the generic Func<T> – it removes any ambiguity if there is (deliberately or inadvertently) more than one registration of the generic dictionary.
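A sketch of what that dedicated type might look like (the names are my assumptions):

```csharp
using System;
using System.Collections.Generic;

// A dedicated type removes the ambiguity that a bare
// Dictionary<string, Type> registration would have.
public class MapperTypeCatalogue : Dictionary<string, Type>
{
    public MapperTypeCatalogue()
    {
        this["C"] = typeof(CelsiusToKelvinMapper);
        this["F"] = typeof(FahrenheitToKelvinMapper);
        this["R"] = typeof(RankineToKelvinMapper);
        this["K"] = typeof(KelvinToKelvinMapper);
    }
}

// Registration sketch: the factory delegate resolves via the catalogue.
// services.AddSingleton<MapperTypeCatalogue>();
// services.AddSingleton<KelvinMapperFactory>(sp => scale =>
//     (IKelvinMapper)sp.GetRequiredService(
//         sp.GetRequiredService<MapperTypeCatalogue>()[scale.Substring(0, 1).ToUpper()]));
```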

Care with Scoped Lifetimes

Careful thought needs to be given to the lifetime of both the factory delegate and the object being retrieved from the DI container.

Typically, as factory classes have no state, they can usually be registered with a singleton lifetime. The same applies to the delegate being used as the factory or to access the factory class.

Where things get complicated is when the service type that the factory is instantiating for you has been registered in the DI container as having a scoped lifetime.

In this case, the resolution within the factory fails as the DI container cannot resolve scoped dependencies for constructor injection inside a singleton or transient lifetime object (in our case, the singleton factory delegate).

There are two ways to get around this problem.

(i) Register the factory delegate as scoped as well

(ii) Change the singleton factory delegate to take an IServiceProvider as part of the function signature. It is then the caller’s responsibility to pass the correctly scoped IServiceProvider to the delegate. However, this effectively takes us back to a service locator pattern again.

Accessing the Delegate for Calling

Now that we have abstracted the mapper instance generation into a delegate function, the client just needs to be able to resolve the instance.

For classes that want to use the mapper (and are also registered with the DI container), this is simply a case of making the delegate a constructor parameter.


Hopefully, these two posts have provided some insight into how the limitations of the Microsoft DI container to support named or keyed resolutions can be worked around without having to resort to using a third-party container or custom service locator pattern implementation.

Whilst not part of this two-part blog post, I have written another post about simplifying the demo to work in a more functional way by replacing the interface implementations with direct functions to (a) reduce the amount of code and (b) reduce the number of memory allocations required to create the objects.

Getting Dependencies by Name or Key using the .NET Core Container (Part 1)

If you want to go straight to the solution of how to resolve instances by a name or key without reading the background, go to Part 2 of this post.


Many dependency injection container solutions support the concept of labeling a registered dependency with a name or a key that can be used by the container to resolve the dependency required when presented by a dependent class.

For instance, Autofac supports both named and keyed labeling using a syntax like this to register the dependency:


It provides both an explicit method to resolve the dependency, and an attributed way on the constructor parameters of a class.

var r = container.ResolveNamed<IDeviceState>("online");

public class DeviceInfo
{
    public DeviceInfo([KeyFilter("online")] IDeviceState deviceState)
    {
        // ...
    }
}

However, the container that is provided out of the box with .NET Core does not support named or keyed registrations.

Mix And Match Containers

The Microsoft DI Container is a ‘Conforming Container’ that provides a common abstraction that can be wired up to other containers such as Autofac, Ninject, StructureMap et al.

As a result, it only supports the lowest common denominator across multiple DI solutions, and more accurately, only what Microsoft deemed as necessary for its purposes.

This means that if you want to use features such as named/keyed resolution, you end up having to mix and match both the Microsoft DI Container and your other container of choice.

Now, most of the big-name containers support this and provide methods to register the custom container so that it can be retrieved via constructor injection and the custom features accessed using that container’s methods.

However, the downsides to this are that

  1. you are losing the common abstraction as you are including code that is specific to a container technology in your code base
  2. if you inject the specific container into a constructor, you are inadvertently being led towards the Concrete Service Locator anti-pattern, where the specific container is effectively acting as the service locator for you
  3. you are binding your solution to a particular container technology which needs changing across the codebase if you decide to use a different container.

“I could write an adapter …”

The first and last of these issues can be mitigated by creating an interface for the functionality you require to abstract your code away from a particular container technology and then writing an adapter.

However, you still have a service locator being injected into constructors in order to get to the abstracted functionality (in this case, obtaining an instance based on a name or key).

Why Would I Need to Get An Instance By Name or Key? – Surely it is an Anti-Pattern?

To an extent, I agree that using a name string or key to resolve an instance is effectively a service locator as it is forcing a lookup within the container to retrieve an instance of the required type.

Under normal circumstances, I would be looking at ways to avoid using this way of resolving an instance such as creating alias interfaces.

For example, say I have three different classes FooA, FooB & FooC that all implement an interface IFoo.

I have a class that needs all three implementations to perform some functionality. I have two choices in how I can inject the classes.

I can (a) either explicitly define the three class types as parameters in the constructor of my class, or (b) I can register all three classes as IFoo and define a single parameter of IEnumerable<IFoo>.

The downside to (a) is that I am binding to the concrete implementations which may limit my unit testing if I am unable to access or mock the concrete instances in my unit test project. It also means that if I want to replace one of the classes with a different implementation I would need to change the constructor signature of my class.

The downside to (b) is that the class receives the implementation instances in the order that the container decides to present them (usually the order of registration) and the class code needs to know how to tell the implementations apart. This effectively is moving the service locator inside my class.

To get around this scenario, I could create three interfaces IFooA, IFooB and IFooC that each directly inherits from IFoo. The three class implementations would then be implementations of the appropriate interface and can be registered with the DI container as such.

The receiving class can then explicitly define IFooA, IFooB and IFooC as parameters to the constructor. This is effectively the same as we had before, but makes the class more testable as the interfaces can be easily mocked in unit testing instead of being bound to the concrete classes.
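In code, the alias-interface approach looks something like this (using the names from the example above):

```csharp
public interface IFoo { void DoWork(); }

// Empty 'alias' interfaces give each implementation a unique,
// mockable service type.
public interface IFooA : IFoo { }
public interface IFooB : IFoo { }
public interface IFooC : IFoo { }

public class FooA : IFooA { public void DoWork() { /* ... */ } }
public class FooB : IFooB { public void DoWork() { /* ... */ } }
public class FooC : IFooC { public void DoWork() { /* ... */ } }

// Registration sketch:
// services.AddTransient<IFooA, FooA>();
// services.AddTransient<IFooB, FooB>();
// services.AddTransient<IFooC, FooC>();

// The receiving class names each dependency explicitly:
public class NeedsAllThree
{
    public NeedsAllThree(IFooA a, IFooB b, IFooC c) { /* ... */ }
}
```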

Alternatively, we can still leave the classes registered as implementations of IFoo and have a constructor that takes IEnumerable<IFoo> as its parameter. However, this will mean that our class has to iterate over the enumeration to identify which classes are implementations of IFooA, IFooB & IFooC and then cast as appropriate. It then also has to know how to handle multiple registrations of the three derived interfaces. E.g. what happens if there are three IFooA implementations? Which should be used? The first? The last? Chain all of them?

With all this said, I recently had a situation where I needed to resolve instances from the container using a string as the key, which is what got me thinking about this in the first place.

Data Driven Instance Resolution

Say you have a data source that has child records in multiple different formats. The association is defined by an identifier and a string that indicates the type of data.

In order to correctly retrieve the data and then transform it into a common format suitable for export to another system, you are reliant upon that string to identify the correct mapper to use. In this situation, we need some way of being able to identify a mapper class instance from the identifying string in order to perform the correct mapping work required.

You could create a class mapping factory that maintains a dictionary of the acceptable strings that will be in the data and the type required for that string, then has a method that provides an instance of the type for a particular string. E.g.

public interface IFooFactory
{
    IFoo GetFooByKey(string key);
}

The problem here is that your implementation of IFooFactory becomes a service locator as it will need the IServiceProvider passed to its constructor in order to be able to create the instance required.
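A minimal sketch of why that happens (the key strings and class names here are hypothetical):

```csharp
// Why IFooFactory becomes a service locator: it needs IServiceProvider
// to build instances from a key. The key strings are made-up examples.
public class FooFactory : IFooFactory
{
    private static readonly Dictionary<string, Type> Map =
        new Dictionary<string, Type>
        {
            ["typeA"] = typeof(FooA),
            ["typeB"] = typeof(FooB),
            ["typeC"] = typeof(FooC)
        };

    private readonly IServiceProvider _provider;

    public FooFactory(IServiceProvider provider) => _provider = provider;

    public IFoo GetFooByKey(string key) =>
        (IFoo)_provider.GetRequiredService(Map[key]);
}
```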

Ideally, you want your code to be ignorant of IServiceProvider so that you do not have to mock it in your unit tests.

This is the point where you want your DI container to be able to inject by name or key.

This is where I come to the whole point of this post.

Resolving an Instance by Name from the Microsoft DI Container using a Strongly Typed Delegate.

We want our classes to be completely ignorant of the DI container being used and just be able to dynamically obtain the instance.

This is where our old friend, the delegate keyword, can help us; I describe this in more detail in Part 2.

Clean Architecture – Should I Move the Startup Class to Another Assembly?

I was recently listening to an episode of the brilliant .Net Rocks where Carl and Richard were talking to Steve Smith (a.k.a @ardalis) in which he talks about clean architecture in ASP.Net Core.

One of the things discussed was the separation of concerns, where Steve discusses creating an architecture in which you try to break up your application in such a way that hides implementation detail in one project from another consuming project. Instead, the consuming project is only aware of interfaces or abstract classes from shared libraries, from which instances are created by the dependency injection framework in use.

The aim is to try and guide a developer of the consuming project away from ‘new-ing’ up instances of a class from outside the project. To use Steve’s catch phrase, “new is glue”.

I was listening to the podcast on my commute to work and it got me thinking about the project I had just started working on. So much so, that I had to put the podcast on pause to give myself some thinking time for the second half of the commute.

What was causing the sparks to go off in my head was about how dependencies are registered in the Startup class in ASP.Net Core.

By default, when you create a new ASP.Net Core project, the Startup class is created as part of that project, and it is here that you register your dependencies. If your dependencies are in another project/assembly/Nuget package, it means that the references to wherever the dependency is has to be added to the consuming project.

Of course, if you do this, that means that the developer of the consuming project is free to ‘new up’ an instance of a dependency rather than rely on the DI container. The gist of Steve Smith’s comment in the podcast was do what you can to help try to prevent this.

When I got to work, I had a look at the code and pondered about whether the Startup class could be moved out to another project. That way the main ASP.Net project would only have a reference to the new project (we’ll call it the infrastructure project for simplicity) and not the myriad of other projects/Nugets. Simple huh? Yeah right!

So the first problem I hit was all the ASP/MVC plumbing that would be needed in the new project. When I copied the Startup class to the new project, Visual Studio started moaning about all the missing references.

Now when you create a new MVC/Web.API project with .Net Core, the VS template uses the Microsoft.AspNetCore.All meta NuGet package. For those not familiar with meta packages, these are NuGet packages that bundle up a number of other NuGet packages – and Microsoft.AspNetCore.All is massive. When I opened the nuspec file from the cache on my machine, there were 136 dependencies on other packages. For my infrastructure project, I was not going to need all of these. I was only interested in the ones required to support the interfaces, classes and extension methods I would need in the Startup class.

Oh boy, that was a big mistake. It was a case of adding all the dependencies I would actually need one by one to ensure I was not bringing any unnecessary packages along for the ride. Painful, but I did it.

So I made all the updates to the main MVC project required to use the Startup class from my new project and removed the references I previously had to other projects (domain, repository etc.), as this was the point of the exercise.

It all compiled! Great. Pressed F5 to run and … hang on what?

404 when MVC controller not found

After a bit of head scratching, I realised the problem was that MVC could not find the controller. Why?

At this point, I parked my so-called ‘best practice’ changes as I did not want to waste valuable project time on a wild goose chase.

This was really bugging me, so outside of work, I started to do some more digging.

After reading some blogs and looking at the source code in GitHub, the penny dropped. ASP.Net MVC makes the assumption that the controllers are in the same assembly as the Startup class.

When the Startup class is registered with the host builder, it sets the ApplicationName property on the HostingEnvironment instance to the name of the assembly where the Startup class is.

The ApplicationName property of the IHostingEnvironment instance is used by the AddMvc extension to register the assembly where controllers can be found.

Eventually, I found the workaround from David Fowler in an answer to an issue on GitHub. In short, you need to use the UseSetting extension method on the IWebHostBuilder instance to change the assembly used in the ApplicationName property to point to where the controllers are. In my case this was as follows:

UseSetting(WebHostDefaults.ApplicationKey, typeof(Program).GetTypeInfo().Assembly.FullName)
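To show where the call sits, here is a hedged sketch of the host builder wiring (the builder chain is an assumption about the surrounding code, not the project's exact Program.cs):

```csharp
// A sketch of where the UseSetting call fits. Here typeof(Program)
// refers to the MVC project's Program class, i.e. the assembly that
// actually contains the controllers.
public class Program
{
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseSetting(WebHostDefaults.ApplicationKey,
                typeof(Program).GetTypeInfo().Assembly.FullName)
            .UseStartup<Startup>()
            .Build()
            .Run();
}
```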

Therefore, if the controllers are not in the same assembly as the Startup class and this line is not there to redirect the application name to the correct assembly, things go wrong, as I found.

With this problem fixed, everything fell into place and started working correctly.

However, with this up and running, something did not feel right about it.

The solution I had created was fine if all the dependencies are accessible from the new Infra project, either directly within the project or by referencing other projects from the Infra project. But what if I have some dependencies in my MVC project I want to add to the DI container?

This is where my thought experiment broke down. As it stood, it would create a circular reference of the Infra project needing to know about classes in the main MVC project which in turn referenced the Infra project. I needed to go back to the drawing board and think about what I was trying to achieve.

I broke the goal into the following thoughts:

  1. The main MVC project should not have direct references to projects that provide services other than the Infra project. This is to try to prevent developers from creating instances of classes directly
  2. Without direct access to those projects, it is not possible for the DI container to register those classes either if the Startup is in the main MVC project
  3. Moving the Startup and DI container registration to the Infra project will potentially create circular references if classes in the MVC project need to be registered
  4. Moving the Startup class out of the main MVC project creates a need to change the ApplicationName in the IHostingEnvironment for the controllers to be found
  5. Moving the Startup class into the Infra project means that the Infra project has to have knowledge of MVC features such as routing etc. which it should not really need to know as MVC is the consumer.

By breaking down the goal, it hit me what is required!

To achieve the goal set out above, a hybrid of the two approaches is needed: the Startup class and DI container registration remain in the main MVC project, but classes that I don't want to be directly accessed from the MVC project are registered in the Infra project, so the MVC project only ever sees them through interfaces serviced by the DI container.

To achieve this, all I needed to do was make the Infra project aware of DI registration through the IServiceCollection interface and extension methods, but create a method that has the IServiceCollection injected into it from the MVC project that is calling it.

Startup Separation

The first part of the process was to refactor the work I had done in the Startup class in the Infra project and create a public static method to do that work, taking the dependencies from outside.

The new ConfigureServices method takes an IServiceCollection instance for registering services from within the infrastructure project, and also an IMvcBuilder as well, so that any MVC related infrastructure tasks that I want to hide from the main MVC project (and not dependent on code in the MVC project) can also be registered.
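As a hedged sketch of that method (not the repo's exact code: InfraStartup, ModelStateValidationFilter, MyValidator and the repository types are placeholder names):

```csharp
// A sketch of the static registration method in the Infra project.
// InfraStartup, ModelStateValidationFilter, MyValidator and the
// repository types are placeholder names, not the post's actual code.
public static class InfraStartup
{
    public static void ConfigureServices(
        IServiceCollection services, IMvcBuilder mvcBuilder)
    {
        // MVC-related plumbing hidden from the main project: a global
        // filter that checks ModelState on every post-back, plus the
        // FluentValidation framework for domain validation.
        mvcBuilder.AddMvcOptions(options =>
            options.Filters.Add(typeof(ModelStateValidationFilter)));
        mvcBuilder.AddFluentValidation(fv =>
            fv.RegisterValidatorsFromAssemblyContaining<MyValidator>());

        // Services the MVC project should only see via their interfaces.
        services.AddScoped<IMyRepository, MyRepository>();
    }
}
```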

In the example above, I add a custom validation filter (to ensure all post-backs check whether the ModelState is valid, rather than doing this in each Action in the MVC controllers) and add the FluentValidation framework for domain validation.

To make things a bit more interesting, I also added an extension method to use Autofac as the service provider instead of the out of the box Microsoft one.

With this in place, I took the Startup class out of the Infra project and put it back into the MVC project and then refactored it so that it would do the following in the ConfigureServices method:

  • Perform any local registrations, such as AddMvc and any classes that are in the MVC project
  • Call the static methods created in the Infra project to register classes that are hidden away from the MVC project and use Autofac as the service provider.

I ended up with a Startup class that looked like this:
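A hedged reconstruction of its shape (InfraStartup is an assumed name for the static registration method described above; the local service names are also placeholders):

```csharp
// Hedged reconstruction of the hybrid Startup. InfraStartup and the
// local service names are assumptions matching the approach described
// in the text, not the repo's exact code.
public class Startup
{
    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        // Local registrations that belong to the MVC project itself.
        var mvcBuilder = services.AddMvc();
        services.AddScoped<IMyLocalService, MyLocalService>();

        // Delegate hidden registrations to the Infra project.
        InfraStartup.ConfigureServices(services, mvcBuilder);

        // Use Autofac as the service provider instead of the built-in one.
        var builder = new ContainerBuilder();
        builder.Populate(services);
        return new AutofacServiceProvider(builder.Build());
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseMvcWithDefaultRoute();
    }
}
```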

The Full Example

My description of the problem and my solution above only really scratches the surface, but hopefully it is of use to someone. It is probably better to let the code speak for itself, and for this I have created a Git repo with three versions of an example project which show the three different approaches to the problem.

First is the out-of-the-box do everything in the main project

Then there is the refactoring to do all the registration in the Infra project

Lastly, there is the hybrid where the Startup is in the main project, but delegates registration of non-MVC classes to the Infra project.

The example projects cover other things I plan on blogging about, so are a bit bigger than just dealing with separating the Startup class.

The repo can be found at https://github.com/configureappio/SeparateStartup

For details of the projects, look at the Readme.md file in the root of the repo.


In answer to the question posed in the title of this post, my personal view is that the answer is – “No” … but I do think that extracting out a lot of plumbing away from the Startup into another assembly does make things cleaner and achieves the goal of steering developers away from creating instances of classes and instead, relying on using the DI container to do the work it is intended for. This then helps promote SOLID principles.

Hopefully, the discussion of the trials and tribulations I had in trying to completely move the Startup.cs class show how painful this can be and how a hybrid approach is more suitable.

The underlying principle of using a clean code approach is sound when approached the correct way, by thinking through the actual goal rather than concentrating on trying to fix or workaround the framework you are using.

The lessons I am taking away from my experiences above are:

  • I am a big fan of clean architecture, but sometimes it is hard to implement when the frameworks you are working with are trying to make life easy for everyone and make assumptions about your code-base.
  • It is very easy to tie yourself up in knots when you don’t know what the framework is doing under the bonnet.
  • If in doubt, go look at the source code of the framework, either through Git repos or by using the Source Stepping feature of Visual Studio.
  • Look at ‘what’ you are trying to achieve rather than starting with the ‘how’ – in the case above, the actual goal I was trying to achieve was to abstract the dependency registration out of Startup rather than jumping straight in with ‘move whole of the Startup.cs’.

Using IConfigureOptions To Apply Changes to a Configuration

In one of my previous posts, Hiding Secrets in appsettings.json – Using a Bridge in your ASP.Net Core Configuration (Part 4) I had a comment from Anderson asking if I had considered IConfigureOptions<TOptions> as a way of injecting dependencies into the TOptions class that had been bound to a configuration section.

I had not come across the interface, so with an example provided by Anderson, I started to look further into it. Andrew Lock has a post on his blog that describes how an implementation works, which was a starting point for me.


The IConfigureOptions<TOptions> interface has a single method (Configure) that takes an instance of TOptions where TOptions is a class that you will use for binding configuration values. If unfamiliar with this, please read my previous posts starting with Creating a Bridge to your ASP.Net Core Configuration from your Controller or Razor Page (Part 1).

In your implementation of IConfigureOptions<TOptions>, you are free to manipulate the options instance by changing property values. Your implementation can have a constructor that will take any dependencies you may need in order to set the properties. These constructor parameters will be resolved by the DI container. However, be aware of different scopes that may apply and how these are resolved. Read Andrew's post for more info.

In the example below, I have:

  • a class MySettings which I will bind the configuration section to.
  • three implementations of IConfigureOptions<MySettings> which change the Setting1 string in different ways.
  • an interface IDateTimeResolver and class implementation to demonstrate the DI container injecting a resolved instance into the last of the IConfigureOptions<MySettings> implementations (which also serves as a reminder that using DateTime.Now directly in your code is evil and should be wrapped in another class so it can be substituted when unit testing)
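The pieces above can be sketched as follows (a hedged reconstruction: the property, class and method names are assumptions, not the post's exact code):

```csharp
// Hedged reconstruction of the example's shape. Property, class and
// method names here are assumptions, not the post's exact code.
public class MySettings
{
    public string Setting1 { get; set; }
}

public interface IDateTimeResolver
{
    DateTime GetNow();
}

public class DateTimeResolver : IDateTimeResolver
{
    public DateTime GetNow() => DateTime.Now;
}

// The third implementation: its IDateTimeResolver dependency is
// resolved by the DI container when the options pipeline runs.
public class AppendTimeConfigureOptions : IConfigureOptions<MySettings>
{
    private readonly IDateTimeResolver _resolver;

    public AppendTimeConfigureOptions(IDateTimeResolver resolver)
        => _resolver = resolver;

    public void Configure(MySettings options)
        => options.Setting1 += $" (configured at {_resolver.GetNow():HH:mm:ss})";
}
```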

The interface/implementation mappings are registered with the DI container in the Startup class. Note the Configure<MySettings>() and AddOptions() extension methods have been registered first.
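The registrations might look something like this (a sketch: the implementation class names and the configuration section name are stand-ins):

```csharp
// Sketch of the registrations; the implementation class names and the
// configuration section name are stand-ins.
services.AddOptions();
services.Configure<MySettings>(Configuration.GetSection("MySettings"));

services.AddSingleton<IDateTimeResolver, DateTimeResolver>();

// Applied by the options factory in registration order:
services.AddSingleton<IConfigureOptions<MySettings>, FirstConfigureOptions>();
services.AddSingleton<IConfigureOptions<MySettings>, SecondConfigureOptions>();
services.AddSingleton<IConfigureOptions<MySettings>, AppendTimeConfigureOptions>();
```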

What is interesting, is that multiple classes can implement the interface and, when registered with the DI container, be applied in the order they have been registered.

When I first saw this, I was confused, as my understanding of DI registration was that the last registration is the one that is used to return the resolved instance by the DI container, so how could multiple registrations be applied? Of course, the penny dropped: whatever is using these instances is not asking the DI container to resolve a single instance, but is somehow iterating over the relevant registrations, resolving each registration in turn and then calling the Configure method.

A quick delve in Microsoft’s ASP.Net repository on GitHub, revealed the OptionsFactory class.

This pretty much confirmed the theory that the services collection is being iterated over, except you will notice that the constructor takes a parameter of IEnumerable<IConfigureOptions<TOptions>>. So what is resolving these multiple instances? 

Well it turns out to be the DI Container itself! I will go into more detail in a future post (as I don’t want to deviate from the main topic of this post)*, but in short, if your constructor takes an IEnumerable<T> as a parameter, then the DI container will use all registered mappings of T to resolve and inject multiple instances as an enumeration.
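As a minimal illustration of the mechanism (IMessageSink and Notifier are made-up names):

```csharp
// Minimal illustration: a constructor parameter of IEnumerable<T>
// receives every registered implementation of T, in registration order.
public class Notifier
{
    private readonly IEnumerable<IMessageSink> _sinks;

    public Notifier(IEnumerable<IMessageSink> sinks) => _sinks = sinks;

    public void Send(string message)
    {
        foreach (var sink in _sinks)
            sink.Write(message);
    }
}
```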

*Update – Since the post was written, Steve Gordon has written a great post that goes into more detail about the DI container and resolving multiple implementations  – see ASP.NET Core Dependency Injection – Registering Multiple Implementations of an Interface.

So in the OptionsFactory instance that ASP.Net creates using the DI container (where TOptions is MySettings, the class I have set up for configuration binding), the DI container will resolve the IEnumerable<IConfigureOptions<MySettings>> parameter to our three registered implementations of IConfigureOptions<MySettings>. These will be injected, and the factory's Create method will then call the Configure method on each instance in order of registration.

Therefore to demonstrate this, we will start with an appsettings.json that looks like this:
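The original file is not reproduced here, so the value below is a stand-in for the initial Setting1 value:

```json
{
  "MySettings": {
    "Setting1": "Initial value"
  }
}
```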

We can step through the three implementations as they are called by the factory Create method to see how the Settings1 value in MySettings is updated:

Example of how the Settings1 value changes as each configuration option is applied

IConfigureNamedOptions<TOptions> and IPostConfigureOptions<TOptions>

The eagle eyed will have spotted that in the OptionsFactory, there is a check on each instance of IConfigureOptions<TOptions> to see if it is an implementation of IConfigureNamedOptions<TOptions>.

This is very similar to IConfigureOptions<TOptions>, but the signature of the Configure method has an extra string parameter called name to point to a named configuration instance.

Once each of the registered implementations of IConfigureOptions<TOptions> has been processed, the factory then looks for implementations of the IPostConfigureOptions<TOptions> interface.

I plan on covering these two interfaces in a future post as I don’t want to make this post too long.


At first glance, this approach is similar to the bridge pattern approach I have described in my earlier posts in that the responsibility for changing the state values is delegated to an outside class.

However it is not a bridge as the properties of the underlying configuration instance are being changed. A bridge will intercept requests and then change the return value, leaving the underlying configuration instance unchanged.

When using the IConfigureOptions<TOptions> approach, the TOptions instance that is passed in through the parameter in the Configure method is being mutated. The changes will live on in the instance.

My personal opinion is that this not as clean as the bridge approach (but I guess I am biased).

I prefer to try to avoid potentially unexpected mutation of objects and instead return either a new instance of the object with property values copied from the existing into the new one or a substitute (e.g. the bridge). In other words, a functional approach where the input object is treated as immutable.

This leaves the original instance of the class in its initialised state which may be required by other functionality in the application that does not want the updated values or can be used by a different bridge class.

An IStartup Gotcha – The Curious Incident of the Missing Log Provider

In this post I look at a problem I had with injecting values into the constructor of a class that implemented IStartup and why doing this does not work as expected.


When learning a new technology, I like to get to know the inner workings of it to try and understand how my code will work and develop a set of my own best practices.

Two areas of ASP.Net Core that I have a lot of interest in are the startup pipeline and configuration. The latter of these has been the focus of my previous posts about how to set up configuration using a SOLID approach.

For the former, I have been reading several blog posts, the most illuminating of which has been Steve Gordon’s blog which I highly recommend a read of. (Full disclosure – Steve runs the local .Net South East user group that I regularly attend).

In particular, I found his post on the Startup process illuminating as it reveals a whole load of reflection work being done by the ConventionBasedStartup class when using a convention based Startup class.

Now whilst I appreciate that Microsoft wants to make the Startup class as simple as possible to implement out-of-the-box in a new project, I am not a massive fan of this approach once you know what you are doing. After reading Steve’s blog post, it seemed logical to refactor my Startup classes to implement the IStartup interface as this is the first thing that is checked when instantiating the class (as shown in Steve G’s gist here)

On reading through how the ConventionBasedStartup class constructs an implementation of IStartup, I came to the following conclusions:

  • There is a lot of reflection going on that, if IStartup was used, would not be required
  • It is not statically typed, so if signatures are incorrect in some way such as a typo, it will fail at runtime but not be caught at compile time
  • The ability to have multiple versions of the same method for different environments in the same class, differentiated by name (e.g. ConfigureServicesDevelopment, ConfigureServicesStaging) and relying on the ASPNETCORE_ENVIRONMENT environment variable, feels clunky and doesn't feel like a SOLID approach.

I can see why Microsoft has taken this approach as it is easy to provide boilerplate code that is easy to extend, but for me, the convention approach does not feel right for the Startup process (though I reserve the right to be hypocritical in that I have been using the convention based approach for years in letting ASP.Net MVC work out which controller is being used in routing, but hey, I’m not perfect! ).

Therefore, in my ASP.Net Core projects, I started to refactor the Startup classes to implement the IStartup interface.

This was the approach I took on a project I was working on for a client. I had recently added NLog and this was working nicely using the convention based Startup, but in an effort to clean up the code, I wanted to refactor to IStartup. However, for some strange reason, the NLog logger stopped working after refactoring.

Time to get the deerstalker and magnifying glass out.

How Using IStartup Differs from Convention Based Startup

At first glance, the Startup classes used in both approaches appear to be the same, but there is one significant difference.

In the convention based approach, you are free to add as many parameters as you want to the Configure method, safe in the knowledge that as long as you (or rather in most cases, the WebHost.CreateDefaultBuilder) have registered the interfaces or classes you require with the Dependency Injection services, then they will be injected into the method.

This is thanks to the 'magic' inside the ConventionBasedStartup. When implementing IStartup yourself, the signatures are set in the contract.

Of particular note is the signature for the Configure method which only accepts a single parameter of type IApplicationBuilder.
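For reference, the contract looks like this (paraphrased from the ASP.NET Core 2.x source):

```csharp
// The IStartup contract (paraphrased from the ASP.NET Core 2.x source):
public interface IStartup
{
    IServiceProvider ConfigureServices(IServiceCollection services);

    // Note: only IApplicationBuilder - no extra injectable parameters.
    void Configure(IApplicationBuilder app);
}
```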

Therefore, if you want to inject other types, you can’t just add them to the method parameters, so you need to find another way.

Adding ILoggerFactory to the Startup Constructor – Don’t Do This At Home

TL;DR You can skip this bit if you want, as it describes how I got to the bottom of the problem. If you just want to know how to correctly access the ILoggerFactory, jump ahead to 'The Correct Way to Access the Logger Factory' below.

When writing code that implements an interface, if I need extra class instances that are not provided for in the method signature, my usual approach is to make an instance variable that is assigned from a constructor parameter.

So when I no longer had access to ILoggerFactory in the Configure method, I added it to the Startup constructor and created an instance variable that I could then access from inside the Configure method. So far, so normal. I did the NLog configuration in the Configure method and just expected things to work as they had before.

Alas, twas not to be …

At first I thought that I must have messed up something in the NLog configuration, but couldn’t see anything different. So I stepped through the code. I looked at the ILoggerFactory that had been injected into the constructor of the Startup.

There were two providers present, Console and Debug. I then ran through the code to where NLog had been added, and looked at the providers collection again. Yup, three providers with NLog being the last. All good.

I then ran through to the constructor of the controller where I was doing some logging and where I had declared a parameter of ILogger. I looked at the Loggers collection on the logger and there were four loggers, but no sign of the NLog logger.

Somewhere along the pipeline, the ILoggerFactory had lost NLog and added two loggers, both of which were the ApplicationInsights logger.

For the client’s project I reverted back to the convention based approach so as not to hold up the development, but my curiosity got the better of me and decided to dig a bit more.

Compare and Contrast

At home, I created a simple project where I created two copies of the Startup class. One using the convention approach where ILoggerFactory is injected into the Configure method and one where I have the ILoggerFactory injected into the constructor of the Startup class and saved in an instance variable. I then used a compilation symbol to run with one or the other.

I created a dummy class that implemented ILoggerProvider rather than muddy the waters by using NLog and having to worry about its configuration. I then implemented the constructor based code as I had previously done, as shown below:
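A hedged reconstruction of that constructor-based approach (DummyLoggerProvider stands in for the dummy ILoggerProvider implementation mentioned above; the rest is a sketch, not the original code):

```csharp
// A sketch of the approach that does NOT work as expected. The
// ILoggerFactory captured from the constructor turns out not to be
// the instance the application ultimately uses.
public class Startup : IStartup
{
    private readonly ILoggerFactory _loggerFactory;

    public Startup(ILoggerFactory loggerFactory)
        => _loggerFactory = loggerFactory;

    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        return services.BuildServiceProvider();
    }

    public void Configure(IApplicationBuilder app)
    {
        // The provider is added to the wrong factory instance, so the
        // dummy logger never reaches the controllers.
        _loggerFactory.AddProvider(new DummyLoggerProvider());
        app.UseMvc();
    }
}
```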

I ran through both scenarios using the Visual Studio debugger and got the same result.

  • When using convention based Startup class, there were five loggers, Debug, Console, 2 x Application Insights and my DummyProvider.
  • When using the IStartup implementation, there were just the four loggers – DummyProvider was missing.

So somehow, the logging factory instance in my Startup class is not the one that makes it to the controller.

The Correct Way to Access the Logger Factory

OK, so the constructor approach does not work. It then dawned on me – is there another way to get to the ILoggerFactory instance through the IApplicationBuilder that is provided via the IStartup.Configure(IApplicationBuilder app) method?

In short, yes! The app.ApplicationServices property provides access to the IServiceProvider container where we can resolve services.
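For example (a sketch, reusing the DummyLoggerProvider stand-in from earlier):

```csharp
// The working approach: resolve ILoggerFactory from the application's
// own service provider inside Configure, not from the constructor.
public void Configure(IApplicationBuilder app)
{
    var loggerFactory =
        app.ApplicationServices.GetRequiredService<ILoggerFactory>();
    loggerFactory.AddProvider(new DummyLoggerProvider());
    app.UseMvc();
}
```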

I ran the test again, and there was the DummyLogProvider.

The Logger Is There!

Problem solved!