Debugging C# Source Generators with Visual Studio 2019 16.10

Background

I’m a big fan of source generators, which were added in C# 9, but debugging them has been a bit of a pain so far, involving forcing breakpoints in code and attaching the debugger during compilation.

With the RTM release of Visual Studio 2019 16.10, things now get a bit easier with the introduction of the new IsRoslynComponent element in the csproj file and a new debugger launch option of Roslyn Component.

At time of writing, this has not had much publicity, other than a short paragraph in the release notes and a short demo in an episode of Visual Studio Toolbox.

Therefore, to give some visibility to this new functionality, I have written this short step-by-step guide.

If you are new to source generators, there are lots of great resources out there, but a good starting point is this video from the ON.NET show.

Before Starting

In order to take advantage of the new debugging features in VS2019, you will need to make sure that you have the .NET Compiler Platform SDK component installed as part of Visual Studio.

This is shown as one of the components installed with the Visual Studio extension development workload.

Visual Studio Installer workload

Step By Step Guide

For this guide, I am assuming that you already have a source generator that you have written, but if not, I have put the source code for this guide on GitHub if you want to download a working example.

Step 1 – Add the IsRoslynComponent Property to Your Source Generator Project

The first step in making use of the new debugging functionality is to add a new property to your source generator’s project file.

The property to add is IsRoslynComponent and the value should be set to true.

You will need to do this in the text editor as there is no user interface for adding it.

Screen shot of setting the IsRoslynComponent property in a project file

Whilst in the project file, make sure that you have the latest versions of the core NuGet packages that are required by / useful for working with source generators.

The screen shot below shows the versions that were current at the time of writing this post.

Source Generator Package Versions screen shot from project file
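Putting Step 1 together, the relevant parts of the source generator project file look something like this (a sketch – the package versions shown were current around the time of writing and will move on):

<PropertyGroup>
  <TargetFramework>netstandard2.0</TargetFramework>
  <!-- Enables the Roslyn Component debugging option in Visual Studio 2019 16.10+ -->
  <IsRoslynComponent>true</IsRoslynComponent>
</PropertyGroup>

<ItemGroup>
  <PackageReference Include="Microsoft.CodeAnalysis.CSharp" Version="3.9.0" PrivateAssets="all" />
  <PackageReference Include="Microsoft.CodeAnalysis.Analyzers" Version="3.3.2" PrivateAssets="all" />
</ItemGroup>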

Step 2 – Add a Reference to Your Source Generator Project to a Consuming Project

For the purpose of this guide, I am using a project reference to consume the source generator.

When referencing a source generator, there are two additional attributes that need to be set to flag that this is an analyser that does not need to be distributed with the final assembly. These are:

  • OutputItemType="Analyzer"
  • ReferenceOutputAssembly="false"

Referencing the Source Generator project
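In the consuming project’s csproj file, the reference ends up looking something like this (the relative path is, of course, specific to your own solution layout):

<ItemGroup>
  <ProjectReference Include="..\MySourceGenerator\MySourceGenerator.csproj"
                    OutputItemType="Analyzer"
                    ReferenceOutputAssembly="false" />
</ItemGroup>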

Step 3 – Build All Projects and View Source Code

To keep this guide short, I am assuming that both the source generator project and the consuming project have no errors and will build.

Assuming that this is the case, there are two new features in VS2019 for source generators that are useful for viewing the generated code.

The first is in the Dependencies node of the consuming project, where you will now find an entry in the Analyzers node of the tree for your source generator project. Under this, you can now see the generated files (Item 1 in the screen shot below).

Screen shot of VS2019 Source Generated Code in Solution Explorer view

There are two other things to notice in this screen shot.

Item 2 is the warning that this is a generated code file and cannot be edited (the code editor will not allow you to type into the window even if you try).

Item 3 highlights the editor showing where the file is located. Note that by default, this is in your local temp directory, as specified by the TEMP environment variable.

Now that the source code can be viewed in the code window, you can set a breakpoint in the generated code as you would with your own code.

The second new feature is that the code window treats the file as a regular code file, so you can do things like ‘Find All References’ from within the generated file. Conversely, you can use ‘Go To Definition’ from any usage of the generated members to get to the source generated file in the editor.

Step 4 – Prepare to Debug The Source Generator

This is where the ‘magic’ in VS2019 now kicks in!

First, in Solution Explorer, navigate to the source generator project, open the Properties dialog and then go into the Debug tab.

Screen shot of navigating to the Properties dialog from Solution Explorer

At this point, when you open the Launch drop down, you will notice at the top there is now a Roslyn Component option which has now been enabled thanks to the IsRoslynComponent entry in your project file.

Having clicked on this, you can set which project you want to use to trigger the source generator (in our case, the consuming project) in the drop down in the panel. Then use the context menu on the source generator project to set it as the start up project to use when you hit F5 to start debugging.

Screen shot of setting Roslyn Component as the launch setting for debugging a source generator project

Step 5 – Start Debugging

In your source generator project, find an appropriate line on which to place a breakpoint, inside either the Initialize or Execute method of your generator class.
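For reference, the skeleton of a generator looks something like this (a minimal sketch – the generated source is just an illustrative placeholder), with the breakpoint typically going inside Execute:

using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

[Generator]
public class MyDemoGenerator : ISourceGenerator
{
    public void Initialize(GeneratorInitializationContext context)
    {
        // Register any syntax receivers here; a good place for an early breakpoint
    }

    public void Execute(GeneratorExecutionContext context)
    {
        // Set a breakpoint here to inspect the compilation being processed
        context.AddSource("Demo.g.cs",
            SourceText.From("// generated code goes here", Encoding.UTF8));
    }
}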

You are now ready to hit F5 to start debugging!

What you will notice is that a console window pops up with the C# compiler being started. This is because you are now debugging the stage of the compiler that is running your code to generate the source to be used elsewhere in the build.

Screen shot of debugging as source generator with compiler opened in console

Now, you can step through your source generator code as you would with any other code in the IDE.

Step 6 – Changing Code Requires a Restart of Visual Studio

If, as part of the debugging process, you find you need to change your code, you may find when you recompile and start debugging again that your changes have not taken effect within your Visual Studio session.

This is due to the way that Visual Studio caches analyzers and source code generators. In the 16.10 release, the flushing of the cache has not yet been fully addressed and therefore, you will still need to restart Visual Studio to see your code changes within the generator take effect.

If you find that you need to make changes iteratively to debug a problem, you may want to go back to including conditional statements in your code to launch the debugger, and use the dotnet build command line to get the compiler to trigger the source code generation in order to debug the problem.

If you do need to do this, I take the following approach to avoid debugger code slipping into the release build.

(1) In the Configuration Manager, create a new build configuration based on the Debug configuration called DebugGenerator

(2) In the project’s build properties, create a conditional compilation symbol called DEBUGGENERATOR

(3) In the Initialize method of the source generator to be debugged, add the following code
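The code is along these lines, guarded so that the debugger launch only happens when the DEBUGGENERATOR symbol is defined:

#if DEBUGGENERATOR
            // Launch the Just-in-Time debugger selector unless a debugger is already attached
            if (!System.Diagnostics.Debugger.IsAttached)
            {
                System.Diagnostics.Debugger.Launch();
            }
#endif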

(4) Instead of using Visual Studio to trigger a build, open a command line instance and use the following to start the build

dotnet build -c DebugGenerator --no-incremental

This will force the build to use the configuration that will trigger the debugger to launch. Usually when the compiler reaches the line to launch the debugger, you will be prompted to select an instance of Visual Studio. At this point, select the instance that you have just been editing.

Image of Just-in-Time Debugger Selector dialog

After a short period of loading symbols, Visual Studio will break on the Debugger.Launch line. You can now set a breakpoint anywhere in your source generator project (if not already set) and use F5 to run to that line.

Note, I have used the --no-incremental switch to force a rebuild so that the debugger launch is triggered even if the code has not changed.

A Gotcha!

When I started playing with this new functionality, I loaded up an existing source generator that had been written by a colleague and found that the option to select Roslyn Component was not available, yet it worked when I created a new source generator project.

After a few hours of trial and error editing the project file, I found that the existing source generator had a reference to the Microsoft.Net.Compilers.Toolset NuGet package. Taking this out and restarting Visual Studio allowed the new functionality to kick in.

If you look at the description of the package, it becomes clear where the problem arises. In short, it comes with its own set of compilers instead of using the default compiler. The current ‘live’ version is 3.9.0, which does not appear to support IsRoslynComponent. The version required for this to work is still in pre-release – 3.10.0-3.final.

If you hit this snag, it is worth investigating why the package has been used and whether it can be removed, given that:

  • It is not intended for general consumption
  • It is only intended as a short-term fix when the out-of-the-box compiler is crashing and a fix from Microsoft is awaited

More details on why it should not be used can be found in this Stack Overflow answer from Jared Parsons.

Conclusion

Whilst not perfect due to the caching problem, the 16.10 release of Visual Studio has added some rich new functionality to help with writing and debugging source generators.

Using OpenApiReference To Generate Open API Client Code

This is not one of my usual blogs, but an aide-mémoire for myself that may be of use to other people who are using the OpenApiReference tooling in their C# projects to generate C# client code for HTTP APIs from Swagger/OpenApi definitions.

Background

I have been using Rico Suter’s brilliant NSwag for some time now to generate client code for working with HTTP API endpoints. Initially I was using the NSwag Studio application to create the C# code and placing the output into my project, but I later found I could add the code generation into the build process using the NSwag MSBuild task.

Recently though, I watched the ASP.NET Community Standup with Jon Galloway and Brady Gaster. In the last half hour or so, they discuss the Connected Services functionality in Visual Studio 2019 that sets up code generation of HTTP API endpoint clients for you.

This feature had passed me by, and watching the video got my curiosity going as to whether the build chain that I have been using for the last couple of years could be simplified, especially given that, behind the scenes, it is using NSwag to do the code generation.

Using Connected Services

Having watched the video above, I recommend reading Jon Galloway’s post Generating HTTP API clients using Visual Studio Connected Services. As that post covers the introduction to using Connected Services, I won’t repeat the basics again here.

It is also worth reading the other blog posts in the series written by Brady Gaster.

One of the new things I learnt from the video and blog posts is to make sure that the OpenApi definitions in your source API include an OperationId (which you can set via overloads of the HttpGet, HttpPost (etc.) attributes on your action method) to help the code generator assign a ‘sensible’ name to the calling method in the code generated client.
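For example, with Swashbuckle, setting the route name on the attribute flows through to the OperationId in the generated definition (the endpoint and names here are hypothetical):

[HttpGet("{id}", Name = "GetProductById")]
public ActionResult<Product> GetProductById(int id)
{
    // ... action implementation
}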

Purpose of This Post

Having started with using the Visual Studio dialogs to set up the Connected Service, I found that the default options may not necessarily match how you want the generated code to work in your particular project.

Having had a few years’ experience of using NSwag to generate code, I started to dig deeper into how to get the full customisation I have been used to from using the “full” NSwag experience but within the more friendly build experience of using the OpenApiReference project element.

One Gotcha To Be Aware Of!

If you use the Connected Services dialog in Visual Studio to create the connected service, you will hit a problem if you have used a Directory.Packages.props file to manage your NuGet packages centrally across your solution. The Connected Services wizard (as at time of writing) tries to add specific versions of NuGet packages to the project.

This is part of a wider problem in Visual Studio (as at time of writing) where the NuGet Package Manager interaction clashes with the restrictions applied in Directory.Packages.props. However, this may be addressed in future versions of the NuGet tooling and Visual Studio, per this Wiki post.

If you are not familiar with using Directory.Packages.props, have a look at this blog post from Stuart Lang.

Manually Updating the OpenApiReference Entry in your Project

There isn’t much documentation around on how to make adjustments to the OpenApiReference element that gets added to your csproj file, so hopefully this post will fill in some of the gaps until documentation is added to the Microsoft Docs site.

I have based this part of the post on looking through the source code at https://github.com/dotnet/aspnetcore/tree/main/src/Tools/Extensions.ApiDescription.Client and therefore some of my conclusions may be wrong, so proceed with caution if making changes to your project.

The main source of information is the Microsoft.Extensions.ApiDescription.Client.props file which defines the XML schema and includes comments that I have used here.

The OpenApiReference and OpenApiProjectReference Elements

These two elements can be added one or multiple times within an ItemGroup in your csproj file.

The main focus of this section is the OpenApiReference element that adds code generation to the current project for a specific OpenApi JSON definition.

The OpenApiProjectReference element allows external project references to be added as well. More on this below.

The following attributes and sub-elements are the main areas of interest within the OpenApiReference element.

The props file makes references to other properties that live outside of the element that you can override within your csproj file.

As I haven’t used the TypeScript generator, I have focussed my commentary on the NSwagCSharp code generator.
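To give the descriptions below some context, here is a sketch of what an OpenApiReference entry can look like in a csproj file (the paths, names and options are purely illustrative):

<ItemGroup>
  <OpenApiReference Include="OpenAPIs\weatherforecast.json"
                    CodeGenerator="NSwagCSharp"
                    ClassName="WeatherForecastClient"
                    Namespace="MyApp.ApiClients"
                    OutputPath="WeatherForecastClient.cs">
    <Options>/GenerateClientInterfaces:true /ClassStyle:Poco</Options>
  </OpenApiReference>
</ItemGroup>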

Include Attribute (Required)

The contents of the Include attribute will depend on which element you are in.

For OpenApiReference this will be the path to the OpenApi/Swagger json file that will be the source the code generator will use.

For OpenApiProjectReference this will be the path to another project that is being referenced.

ClassName Element (Optional)

This is the name to give the class that will be generated. If not specified, the class name will default to the name given in the OutputPath parameter (see below).

CodeGenerator Element (Required)

The default value is ‘NSwagCSharp’. This points to the NSwag C# client generator, more details of which are below.

At time of writing, only C# and TypeScript are supported, and the value here must end with either “CSharp” or “TypeScript”. Builds will invoke an MSBuild target named Generate%(CodeGenerator) (so GenerateNSwagCSharp in this case) to do the actual code generation. More on this below.

Namespace Element (Optional)

This is the namespace to assign the generated class to. If not specified, the RootNamespace entry from your project will be used to put the class within your project’s namespace. You may choose to be more specific with the NSwag-specific commands below.

Options Element (Optional)

These are the customisation instructions that will be passed to the code generator as command line options. See Customising The Generated Code with NSwag Commands below for details about usage with the NSwagCSharp generator.

One of the problems I have been having with this element is that the contents are passed to the command line of the NSwagCSharp generator as-is, and therefore you cannot include line breaks to make it more readable.

It would be nice if there were a new element that allows each command option to be listed as an XML sub-element in its own right, which the MSBuild target concatenates and parses into the single command line, to make editing the csproj file a bit easier.

Possible Options Declaration
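Something along these lines (to be clear, this is hypothetical syntax that the current tooling does not support) would be much easier to read and maintain:

<!-- hypothetical syntax – not supported by the current tooling -->
<OptionItems>
  <OptionItem>/GenerateClientInterfaces:true</OptionItem>
  <OptionItem>/ClassStyle:Poco</OptionItem>
  <OptionItem>/ClientBaseClass:ClientBase</OptionItem>
</OptionItems>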

OutputPath (Optional)

This is the path in which to place the generated code. It is up to the code generator as to whether to interpret the path as a filename or as a directory.

The default filename or folder name is %(Filename)Client.[cs|ts].

Filenames and relative paths (if explicitly set) are combined with $(OpenApiCodeDirectory). The final value is likely to be a path relative to the client project.

GlobalPropertiesToRemove (OpenApiProjectReference Only – Optional)

This is a semicolon-separated list of global properties to remove in a ProjectReference item created for the OpenApiProjectReference. These properties, along with Configuration, Platform, RuntimeIdentifier and TargetFrameworks, are also removed when invoking the ‘OpenApiGetDocuments’ target in the referenced project.

Other Property Elements

In the section above, there are references to other properties that get set within the props file.

The properties can be overridden within your csproj file, so for completeness, I have added some commentary on each one here.

OpenApiGenerateCodeOptions

If the Options element above is not populated, it defaults to the contents of this element, which is itself empty by default.

As per my comment above for Options, this suffers the same problem of all command values needing to be on the same single line.

OpenApiGenerateCodeOnBuild

If this is set to ‘true’ (the default), code is generated for the OpenApiReference element and any OpenApiProjectReference items before the BeforeCompile target is invoked.

However, it may be that you do not want the generation to run on every single build, as you may have set up a CI pipeline where the target is explicitly invoked (via a command line or as a build target) as a build step before the main build. In that case, the value can be set to ‘false’.

OpenApiGenerateCodeAtDesignTime

Similar to OpenApiGenerateCodeOnBuild above, but this time determines whether to generate code at design time as well as being part of a full build. This is set to true by default.

OpenApiBuildReferencedProjects

If set to ‘true’ (the default), any projects referenced in an ItemGroup containing one or many OpenApiProjectReference elements will get built before retrieving that project’s OpenAPI documents list (or generating code).

If set to ‘false’, you need to ensure the referenced projects are built before the current project within the solution or through other means (such as a build pipeline) but IDEs such as Visual Studio and Rider may get confused about the project dependency graph in this case.

OpenApiCodeDirectory

This is the default folder to place the generated code into. The value is interpreted relative to the project folder, unless already an absolute path. This forms part of the default OutputPath within the OpenApiReference above and the OpenApiProjectReference items.

The default value for this is $(BaseIntermediateOutputPath), which is set elsewhere in your csproj file or is implicitly set by the SDK.
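For completeness, overriding these properties in the consuming csproj file might look something like this (a sketch with illustrative values):

<PropertyGroup>
  <OpenApiGenerateCodeOnBuild>false</OpenApiGenerateCodeOnBuild>
  <OpenApiGenerateCodeAtDesignTime>false</OpenApiGenerateCodeAtDesignTime>
  <OpenApiCodeDirectory>Generated</OpenApiCodeDirectory>
</PropertyGroup>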

Customising The Generated Code with NSwag Commands

Here we get to the main reason I have written this post.

There is a huge amount of customisation that you can do to craft the generated code into a shape that suits you.

The easiest way to get an understanding of the levels of customisation is to use NSwag Studio to play around with the various customisation options and see how they affect the generated code.

Previously when I have been using the NSwag MSBuild task, I have pointed the task to an NSwag configuration json file saved from the NSwag Studio and let the build process get on with the job of doing the code generation as a pre-build task.

However, the OpenApiReference task adds a layer of abstraction that means that just using the NSwag configuration file is not an option. Instead, you need to pass the configuration as command line parameters via the <Options> element.

This can get a bit hairy for a couple of reasons.

  • Firstly, each command has to be added to one single line which can make your csproj file a bit unwieldy to view if you have a whole load of customisations that you want to make (scroll right, scroll right, scroll right again!)
  • Secondly, you need to know all the NSwag commands and the associated syntax to pass these to the Options element.

Options Syntax

Each option that you want to pass takes the form of a command line parameter which

  • starts with a forward slash
  • followed by the command
  • then a colon and then
  • the value to pass to the command

So, something like this: /ClientBaseClass:ClientBase

The format of the value depends on the value type of the command, of which there are three common kinds:

  • boolean values are set with true or false. E.g. /GenerateOptionalParameters:true
  • string values are set with the string value as-is. E.g. /ClassStyle:Poco
  • string arrays are comma delimited lists of string values. E.g.
    /AdditionalNamespaceUsages:MyNamespace1,MyNamespace2,MyNamespace3
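Putting several of these together, the Options element ends up as one long line (the commands shown are just examples):

<Options>/ClientBaseClass:ClientBase /GenerateOptionalParameters:true /ClassStyle:Poco /AdditionalNamespaceUsages:MyNamespace1,MyNamespace2</Options>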

The following table is a copy of a GitHub gist from the GitHub repository I have set up for this, which I plan to update over time as I get a better understanding of each command and its effect on the generated code.

At time of writing, many of the descriptions have been lifted from the XML comments in the code from the NSwag repository on GitHub.

(Apologies that the format of the imported markdown here is not great. I hope to make this a bit better when I can find the time. You may want to go straight to the gist instead.)

Conclusion

The new tooling makes the code generation build process itself a lot simpler, but there are a few hoops to jump through to customise the code generated.

I’ve been very impressed with the tooling and I look forward to seeing how it progresses in the future.

I hope that this blog is of help to anyone else who wants to understand more about the customisation of the OpenApiReference tooling and plugs a gap in the lack of documentation currently available.

Introducing Strongly Typed HTTP Request Headers for ASP.NET Core

In this first part of a series of occasional posts, I discuss the thinking behind taking string-based HTTP headers and presenting them to your .NET code via dependency injection as strongly typed objects.

Background

If you have read my previous blog posts or seen my talks, you will be aware that I am a big fan of the configuration binding functionality in .NET Core (now .NET 5) that takes string key/value pairs stored in various ways (such as JSON or environment variables) and binds them to a typed object that is exposed via the .NET Dependency Injection container.

Recently, I have been giving some thought to other areas in ASP.NET Core that would benefit from this ability to bind string data into object instances, in particular looking at HTTP request headers.

The IHeaderDictionary Interface

In ASP.NET Core, HTTP request headers are presented as an instance of the IHeaderDictionary interface, which is primarily a (dictionary) collection of key/value pairs made up of a string value for the key (the header key) and the header values within a StringValues struct (which itself is a special collection of strings that “represents zero/null, one, or many strings in an efficient way”).

The purpose of the interface is to present a collection of HTTP headers in a consistent manner. HTTP headers can have multiple declarations within the incoming request (see the RFC specification) which need to be parsed and grouped together by key before being presented in the collection.

Whilst the interface provides a consistent manner of accessing the headers, if you want to interrogate the incoming request headers, you have a few hoops to jump through.

In other words, you need code to get to HttpContext.Request.Headers.

If you have several headers that you want to access, you have to repeat this process for each of the headers you require.
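Typically, that per-header plumbing looks something like this (a sketch – the header name and helper are hypothetical):

using System;
using System.Linq;
using Microsoft.AspNetCore.Http;

public static class HeaderReader
{
    // repeated for every header of interest
    public static Guid? GetTenantId(HttpContext context)
    {
        if (context.Request.Headers.TryGetValue("x-tenant-id", out var values)
            && Guid.TryParse(values.FirstOrDefault(), out var tenantId))
        {
            return tenantId;
        }

        return null; // header missing or not a valid GUID
    }
}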

Parsing and Validating Header Values

Once you have your value(s) for a header, these are still strings that you may want to convert into some other type such as an integer or a GUID, which means having to parse the value(s), and that raises a number of other questions:

  • What should be the default if the header is not present?
  • What to do if the header is present, but the value is not in the correct format to parse to the required type?
  • What to do if only expecting a single value, but multiple values are presented – first or last in wins?
  • What to do if expecting multiple values and a single value is presented?
  • What to do if the value(s) can be parsed, but fail any domain validation that needs to be applied (e.g. an integer value must be within a domain range)?
  • If any of the above cannot be worked around, how can an error be safely marshalled back to the caller without raising an exception?

In each of these scenarios, there is the potential for an exception to be thrown if assumptions are made about the incoming data and guards are not put in place to handle unexpected scenarios (e.g. using TryParse instead of Parse when converting strings to other primitive types, to avoid exceptions when parsing fails).

Assuming all of the above is working correctly and you have managed to get the value(s) you require out of the headers, there is the question of passing the data from the controller action (or middleware) to some domain model or function.

Ideally, your domain logic and entities should avoid any bindings to web related types such as IHeaderDictionary & HttpContext. Avoiding this coupling to web concepts means they can be used in other applications (console, desktop or mobile applications) where the values that are received via the headers in a web application/API service may (or may not) be obtained in other ways (user input, configuration or hard coded).

Primitive Obsession

I recently read Andrew Lock’s blog about primitive obsession, where he discusses the danger of passing values around as standard data types (string, integer, GUID et al.) as these are interchangeable values that do not have context.

The post goes on to put forward an implementation of wrapping these values into ‘strong types’ that give context to a value. E.g. a customer id value expressed as a GUID is interchangeable with a product id that is also a GUID, but an instance of a CustomerIdentity type is not interchangeable with a ProductIdentity.

Having read Andrew’s series, which shows how to create strong types that are struct based, I then went on to read Thomas Levesque’s series, which is based on Andrew’s but implements the strong types using C# 9 record types in .NET 5.

I highly recommend reading both of these.

The principle I want to carry through is that each HTTP header of interest to the application should be mapped into a strongly typed instance to give the value(s) some meaning and context beyond just being an incoming value. In addition, these types should abstract away the need for callers to have any knowledge of HTTP related classes and interfaces.
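As a sketch of where this is heading, a header-backed strong type might look something like this (a hypothetical example, not taken from the library):

// a strongly typed wrapper for a header value that can also express
// the 'not present' and 'invalid format' states without exceptions
public sealed record TenantId(Guid? Value, bool IsValid)
{
    public static TenantId Missing { get; } = new(null, false);

    public static TenantId Parse(string? raw) =>
        Guid.TryParse(raw, out var id) ? new TenantId(id, true) : Missing;
}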

Replacing Commonly Written Code with a Library

With all of the above in mind, I have started a library to remove the need to keep writing a lot of the commonly required code. The requirements I have are

  • To remove the need for any of my controller or domain code to need to have knowledge of HTTP headers to retrieve the values that have been supplied in a request. Instead I want strongly typed values to be available from the Dependency Injection container that encapsulate one or more pieces of data that have some meaning to the application
  • To have a standard generic factory process for creating the strong types, using a standard signature that supplies the factory with a collection of zero, one or many strings for a particular header name
  • For the factory process to have a simple fluent syntax that can be used within the container registration process
  • For the factory to be able to consume an expression that can take that collection of string values and decide how to map the values (or lack of values) into an instance of a type that can be consumed elsewhere
  • For the constructed types to encapsulate not just the incoming value(s), but also to be able to express a lack of value(s) or an invalid state due to invalid value formats (as exceptions raised during the factory process will be difficult to catch)
  • For the constructed types to be automatically registered as scoped lifetime services in the container for use by controllers
  • For the constructed types to be available via a singleton that is aware of the async context so that can be injected into standard middleware constructors (by encapsulating the HttpContextAccessor).

The Code

The library is still a work in progress at time of writing, but if you are interested in how I have approached the above, the code is available in a repository at https://github.com/stevetalkscode/TypedHttpHeaders

Coming Up

As I progress with the library, I plan to have more posts in this series where I will be walking through how the code works in order to achieve the goals above and eventually hope to release a NuGet package.

C# 9 Record Factories

With the release of .NET 5.0 and C# 9 coming up in the next month, I have been updating my Dependency Injection talk to incorporate the latest features of the framework and language.

One of the topics of the talk is using the “Gang of Four” Factory pattern to help with the creation of class instances where some of the data is coming from the DI container and some from the caller.

In this post, I walk through using the Factory Pattern to apply the same principles to creating instances of C# 9 records.

So How Do Records Differ from Classes?

Before getting down into the weeds of using the Factory Pattern, here is a quick overview of how C# 9 record types differ from classes.

There are plenty of articles, blog posts and videos available on the Internet that will tell you this, so I am not going to go into much depth here. However, here are a couple of resources I found useful in understanding records vs classes.

Firstly, I really like the video from Microsoft’s Channel 9 ‘On.NET’ show where Jared Parsons explains the benefits of records over classes.

Secondly, a shout out to Anthony Giretti for his blog post https://anthonygiretti.com/2020/06/17/introducing-c-9-records/ that goes into detail about the syntax of using record types.

From watching/reading these (and other bits and bobs around the Internet), my take on record types vs. classes is as follows:

  • Record types are ideal for read-only ‘value types’ such as Data Transfer Objects (DTOs) as they are immutable by default (especially if using the positional declaration syntax)
  • Record types automatically provide a (struct-like) value equality implementation, so there is no need to write your own boilerplate for overriding the default (object reference) equality behaviour
  • Record types automatically provide a deconstruction implementation, so you don’t need to write your own boilerplate for that either
  • There is no need to explicitly write constructors if the positional parameters syntax is used, as object initialisers can be used instead

The thing that really impressed me when watching the video is the way that the record syntax is able to replace a 40 line class definition, with all the aforementioned boilerplate code, with a single declaration.
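For example, a declaration as short as this gives you value equality, deconstruction and immutable positional properties out of the box:

public record Person(string FirstName, string LastName);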

Why Would I Need A Factory?

In most cases where you may choose to use a record type over using a class type, you won’t need a factory.

Typically you will have some framework doing the work for you, be it a JSON deserialiser, Entity Framework or some code generation tool creating the boilerplate code for you (such as NSwag for creating client code from Swagger/OpenAPI definitions, or possibly the new C# 9 Source Generators).

Similarly, if all the properties of your record type can be derived from dependency injection, you can register your record type with the container with an appropriate lifetime (transient or singleton).

However, in some cases you may have a need to create a new record instance as part of a user/service interaction by requiring data from a combination of

  • user/service input;
  • other objects you currently have in context;
  • objects provided by dependency injection from the container.

You could instantiate a new instance in your code, but to use Steve ‘Ardalis’ Smith‘s phrase, “new is glue” and things become complicated if you need to make changes to the record type (such as adding additional mandatory properties).

In these cases, try to keep in line with the Don’t Repeat Yourself (DRY) principle and Single Responsibility Principle (SRP) from SOLID. This is where a factory class comes into its own as it becomes a single place to take all these inputs from multiple sources and use them together to create the new instance, providing a simpler interaction for your code to work with.

Creating the Record

When looking at the properties required for your record type, consider:

  • Which parameters can be derived by dependency injection from the container and therefore do not need to be passed in from the caller
  • Which parameters are only known by the caller and cannot be derived from the container
  • Which parameters need to be created by performing some function or computation, using inputs that are not directly consumed by the record

For example, a Product type may have the following properties:

  • A product name – from some user input
  • A SKU value – generated by some SKU generation function
  • A created date/time value – generated at the time the record is created
  • The name of the person who has created the value – taken from some identity source

The record is declared as follows:
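// positional record for the Product DTO (property names assumed from the
// descriptions above – see the GitHub repository for the original listing)
public record Product(string ProductName, string Sku, DateTime CreatedUtc, string CreatedBy);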

You may have several places in your application where a product can be created, so adhering to the DRY principle, we want to encapsulate the creation logic in a single place.

In addition, the only property coming from actual user input is the product name, so we don’t want to drag all these other dependencies around the application.

This is where the factory can be a major help.

Building a Record Factory

I have created an example at https://github.com/stevetalkscode/RecordFactory that you may want to download and follow along with for the rest of the post.

In my example, I am making the assumption that

  • the SKU generation is provided by a custom delegate function registered with the container (as it is a single function and therefore does not necessarily require a class with a method to represent it – if you are unfamiliar with this approach within dependency injection, have a look at my previous blog about using delegates with DI)
  • the current date and time are generated by a class instance (registered as a singleton in the container) that may have several methods to choose from as to which is most appropriate (though again, for a single method, this could be represented by a delegate)
  • the user’s identity is provided by an ‘accessor’ type that is a singleton registered with the container that can retrieve user information from the current AsyncLocal context (E.g. in an ASP.NET application, via IHttpContextAccessor’s HttpContext.User).
  • All of the above can be injected from the container, leaving the Create method only needing the single parameter of the product name

(You may notice that there is not an implementation of GetCurrentTimeUtc. This is provided directly in the service registration process in the StartUp class below).

In order to make this all work in a unit test, the date/time generation and user identity generation are provided by mock implementations that are registered with the service collection. The service registrations in the StartUp class will only be applied if these registrations have not already taken place, by making use of the TryAddXxx extension methods.

Keeping the Record Simple

As the record itself is a value type, it does not need to know how to obtain the information from these participants as it is effectively just a read-only Data Transfer Object (DTO).

Therefore, the factory will need to provide the following in order to create a record instance:

  • consume the SKU generator function referenced in its constructor and make a call to it when creating a new Product instance to get a new SKU
  • consume the DateTime abstraction in the constructor and call a method to get the current UTC date and time when creating a new Product instance
  • consume the user identity abstraction to get the user’s details via a delegate.

The factory will have one method, Create(), that will take a single parameter of productName (as all the other inputs are provided from within the factory class).
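A sketch of the factory along the lines described (the delegate and interface names are assumed – the repository has the definitive version):

public delegate string SkuGenerator();

public interface IDateTimeService
{
    DateTime GetCurrentTimeUtc();
}

public interface ICurrentUserAccessor
{
    string GetUserName();
}

public class ProductFactory
{
    private readonly SkuGenerator _generateSku;
    private readonly IDateTimeService _dateTimeService;
    private readonly ICurrentUserAccessor _currentUser;

    public ProductFactory(SkuGenerator generateSku,
                          IDateTimeService dateTimeService,
                          ICurrentUserAccessor currentUser)
    {
        // only the delegates/abstractions are captured here,
        // not the values they will later produce
        _generateSku = generateSku;
        _dateTimeService = dateTimeService;
        _currentUser = currentUser;
    }

    // the caller only needs to supply the one value it actually knows
    public Product Create(string productName) =>
        new Product(productName,
                    _generateSku(),
                    _dateTimeService.GetCurrentTimeUtc(),
                    _currentUser.GetUserName());
}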

Hiding the Service Locator Pattern

Over the years, the Service Locator pattern has come to be recognised as an anti-pattern, especially when it requires that the caller has some knowledge of the container.

For example, it would be easy for the ProductFactory class to take a single parameter of IServiceProvider and make use of the GetRequiredService<T> extension method to obtain an instance from the container. If the factory is a public class, this would tie the implementation to the container technology (in this case, the Microsoft .NET Dependency Injection ‘conforming’ container).

In some cases, there may be no way of getting around this due to some problem in resolving a dependency. In particular, with factory classes, you may encounter difficulties where the factory is registered as a singleton but one or more dependencies are scoped or transient.

In these scenarios, there is a danger of the shorter-lived dependencies (transient and scoped) being resolved when the singleton is created and becoming ‘captured dependencies’ that are trapped for the lifetime of the singleton and not using the correctly scoped value when the Create method is called.

You may need to make these dependencies parameters of the Create method (see the warning about scoped dependencies below), but in some cases, this then (mis)places a responsibility onto the caller of the method to obtain those dependencies (thus creating an unnecessary dependency chain through the application).

There are two approaches that can be used in the scenario where a singleton has dependencies on lesser scoped instances.

Approach 1- Making the Service Locator Private

The first approach is to adopt the service locator pattern (by requiring the IServiceProvider as described above), but making the factory a private class within the StartUp class.

This hiding of the factory within the StartUp class also hides the service locator pattern from the outside world as the only way of instantiating the factory is through the container. The outside world will only be aware of the abstracted interface through which it has been registered and can be consumed in the normal manner from the container.

Approach 2 – Redirect to a Service Locator Embedded Within the Service Registration

The second way of getting around this is to add a level of indirection by using a custom delegate.

The extension methods for IServiceCollection allow for a service locator to be embedded within a service registration by using an expression that has access to the IServiceProvider.

To avoid the problem of a ‘captured dependency’ when injecting transient dependencies, we can register a delegate signature that wraps the service locator thus moving the service locator up into the registration process itself (as seen here) and leaving our factory unaware of the container technology.

At this point the factory may be made public (or left as private) as the custom delegate takes care of obtaining the values from the container when the Create method is called and not when the (singleton) factory is created, thus avoiding capturing the dependency.
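As a sketch, the registration for this second approach might look like the following (ISkuService and NextSku are hypothetical names standing in for the real transient dependency):

// the transient dependency that actually produces SKUs
services.AddTransient<ISkuService, SkuService>();

// the service locator lives inside the registration: the transient service
// is resolved each time the delegate is invoked, not when the singleton
// factory is constructed, so no dependency is captured
services.AddSingleton<SkuGenerator>(sp => () => sp.GetRequiredService<ISkuService>().NextSku());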


Special warning about scoped dependencies

Wrapping access to transient-registered dependencies in a singleton delegate works because both singleton and transient instances are resolved from the root provider. However, this does not work for dependencies registered with scoped lifetimes, which are resolved from a scoped provider that the singleton does not have access to.

If you use the root provider, you will end up with a captured instance of the scoped type that is created when the singleton is created (as the container will not throw an exception unless scope checking is turned on).

Unfortunately, for these dependencies, you will still need to pass these to the Create method in most cases.

If you are using ASP.NET Core, there is a workaround (already partially illustrated above with accessing the User’s identity).

IHttpContextAccessor.HttpContext.RequestServices exposes the scoped container, but is accessible from singleton services as IHttpContextAccessor is itself registered as a singleton service. Therefore, you could write a delegate that uses this to access the scoped dependencies. My advice is to approach this with caution as you may find it hard to debug unexpected behaviour.


Go Create a Product

In the example, the factory class is registered as a singleton to avoid the cost of recreating it every time a new Product is needed.

The three dependencies are injected into the constructor but, as already mentioned, the transient values are not captured at this point – we are only capturing the delegates themselves.

It is within the Create method that we call the delegate functions to obtain the real-time transient values, which are then used with the caller-provided productName parameter to instantiate a Product instance.

Conclusion

I’ll leave things there, as the best way to understand the above is to step through the code at https://github.com/stevetalkscode/RecordFactory that accompanies this post.

Whilst a factory is not required for the majority of scenarios where you use a record type, it can be of help when you need to encapsulate the logic for gathering all the dependencies that are used to create an instance without polluting the record declaration itself.