Tag Archives: C#

Here be dragons: string concatenation

String concatenation can be done in several ways, each with its own advantages and use cases. In this blog post I will take a closer look at four different ways of concatenating strings and how these are implemented internally. By the end of this post I hope to have made clear when each of them is useful, when they are not and how they compare to each other implementation-wise.

The approaches discussed are:

  • Simple concatenation
  • string.Concat()
  • StringBuilder
  • string.Join()

Simple concatenation

string a = "Hello" + "World";

String concatenation is commonly done using the + operator, where at least one of the operands is a string. Notice that something like string a = "abc" + new FileLoadException(); is perfectly legal: you'll see in a bit why that is.

You might have been wondering what happens exactly when you use this form of concatenation. In order to find out, we have to look at the generated IL code. Now, if you simply look at the generated IL given the string above you will notice that it looks like this:

IL_0000: nop
IL_0001: ldstr "HelloWorld"
IL_0006: stloc.0 // a
IL_0007: ret

Because our string concatenation is a so-called "compile-time constant expression", the compiler already folds it into a single string for us.

We can bypass this optimization by defining it as two separate variables and concatenating these:

string x = "Hello";
string y = "World";
string a = x + y;

Looking at the IL again we now see this:

IL_0000: nop
IL_0001: ldstr "Hello"
IL_0006: stloc.0 // x
IL_0007: ldstr "World"
IL_000C: stloc.1 // y
IL_000D: ldloc.0 // x
IL_000E: ldloc.1 // y
IL_000F: call System.String.Concat
IL_0014: stloc.2 // a
IL_0015: ret

That’s more like it! We can tell from this that first the two strings are loaded into memory (separately!) and, more interestingly, they are put together using the string.Concat() method.

string.Concat()

We've seen now that simple string concatenation results in a call to string.Concat(). Depending on the types passed in, it will choose between string.Concat(string, string) (in the case of two strings) or string.Concat(object, object) (when at least one of the operands is not a string). If you replace the simple concatenation with a call to string.Format() you'll notice that you receive the exact same IL.

At this point we can take a look at the internals and what goes on exactly. Looking at the source code of string.Concat(object, object) we can see it is a pass-through to string.Concat(string, string) by calling ToString() on both operands — even if one of these was already a string.

Our next step is more interesting: after the usual validation handling and fast tracks, we see a few very interesting methods being called:

String result = FastAllocateString(str0Length + str1.Length);
FillStringChecked(result, 0, str0);
FillStringChecked(result, str0Length, str1);

FastAllocateString(int) is an external method that allocates the space needed to hold both strings, which is evidently the sum of their lengths. Afterwards, FillStringChecked(string, int, string) copies the contents of the given string (third parameter) into the aggregate string we just allocated, starting at the given index. At this point the aggregate string is filled and can be returned to the caller. You might have noticed that FastAllocateString returns a string and not a char array. This is important because it changes the way we have to fill it: with a char array we would be able to simply access each entry directly and insert the correct value. Since it is a string, however, the implementation uses an unsafe context and some C-style pointer work to copy the contents into the correct memory location. The benefit is that you don't have to loop (explicitly) to move the string around.

What about more concatenations?

If you look at the overloads you'll notice that there are versions with 2, 3 and 4 parameters and after that you have to use the one that takes a single parameter: an array of strings. We can see this in action when we try to concatenate 5 strings:

string v = "Strings";
string w = "Are";
string x = "Fun";
string y = "Hello";
string z = "World";
string a = v + w + x + y + z;

generates as IL:

This clearly shows us that a new array is created (the newarr instruction at IL_0021) which is then passed to the string.Concat() call.
The reason there are specific overloads for 3 and 4 arguments is performance: the most common scenarios involve concatenating 2, 3 or 4 operands, so these cases are treated separately to gain performance in the majority of situations.

For this implementation we have to take a look at the string.Concat(params string[]) method. The implementation here is fairly straightforward: just as in the previous methods we calculate the total length of the resulting string and afterwards fill it up with the string.ConcatArray(string[], int) method. Also interesting is the threading consideration: the array of strings is first copied to a new, local one!

StringBuilder

It’s time to add the notorious StringBuilder to the mix. We’ll work with a very simple scenario:

StringBuilder sb = new StringBuilder();
sb.Append("Hello");
sb.Append("World");
string a = sb.ToString();

A StringBuilder is in essence a wrapper class around an array of chars and each time you append something to it, it inserts the given string’s content (aka: the chars) in the next available space in the StringBuilder array. This follows a similar idea to string.Concat but the big benefit here is that the resulting array is maintained over multiple calls instead of a single call. You might see where I’m going with this: string.Concat() creates a new string object for each call. A StringBuilder however only does this when its ToString() method is called regardless of how often you call StringBuilder.Append(). The more objects you allocate, the more you strain the garbage collector and the sooner you trigger a garbage collection. Nobody likes collecting garbage if it could have been prevented altogether.

The obvious real-life scenario is when you use a loop.
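For example, building up a large string in a loop (the loop bound here is arbitrary):

var numbers = new StringBuilder();
for (int i = 0; i < 10000; i++)
{
    numbers.Append(i);
    numbers.Append(Environment.NewLine);
}
string result = numbers.ToString(); // only one string is allocated, at the very end

Doing the same with += would allocate a new, ever-growing string on every single iteration.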

string.Join()

Last but not least: string.Join(). Admittedly, this probably isn't the use case most people have in mind for this method but since it still fits, I thought it interesting to include in this overview. When we look at the code we notice something interesting: it uses a StringBuilder internally!

I believe you’ll find this to be a common sight: many methods that have to concatenate strings use a StringBuilder internally.


Reflecting on these four approaches, we can group them into two fundamentally different ones: string.Concat() and StringBuilder. We've also seen that string.Concat() creates a new string for every call, whereas StringBuilder delays this until the very end and only does it once. On the other hand: constructing a StringBuilder object, adding to it and then retrieving the result is much more verbose than a simple + operator.

You will have to decide for yourself what you consider acceptable but I personally only use StringBuilder when I loop over something. If I can do the concatenating "manually" and it remains readable then it must mean there are so few strings involved that it would barely make a difference anyway (don't forget that the StringBuilder is an object that needs to be allocated as well!).


Quick tip: getting a variable’s data during debugging

Ever wanted a quick overview of an element in debug mode? If you want to know a value a couple of layers deep, you soon have something like this:

Debug layers

This poses several annoyances:

  • You have to keep your mouse within boundaries.
  • It obstructs your code.
  • You have to scroll to see all properties.
  • You can’t do anything else.

An alternative which I very much like is ?. That’s right, a simple question mark. Use this in combination with your immediate window and it will print out all the data you normally get from hovering. For example as you can see in the image below: ?_affectedProperties will show me the string representation of the list. I can also use more complicated expressions like ?_affectedProperties.ElementAt(0) to get the detailed info of deeper layers.

Immediate Window

Also worth mentioning is “Object Exporter” which is an extension that does this for you.


A few important Roslyn API helpers

The Roslyn API as implemented in RC2 has been out for a few months now and is likely to remain largely unchanged while they're working on getting it released officially. I think it might be time to do a little write-up and identify the key components of that API that you will likely want to use when building your own diagnostics, syntax rewriters or anything else possible with the platform. Do note that I am approaching this with only a background in writing diagnostics, so I might be missing helpers from a different part of the API.

I, too, still regularly come across APIs I wasn't aware of before, so if you know of something that's not here, let me know.

SyntaxFactory

If you've ever tried to create your own syntax node, you'll have noticed that you can't just new() it up. Instead, we use the SyntaxFactory class which provides methods to create just about every kind of node you could imagine. Particularly interesting here are the SyntaxFactory.Parse* methods. These can take away a lot of the pain involved in manually creating syntax nodes. For example, in one of my diagnostics I wanted to create a condition that checks a certain variable and compares it to null. I could either create a BinaryExpression, set its operator to SyntaxFactory.Token(SyntaxKind.ExclamationEqualsToken), create an identifier using SyntaxFactory.Identifier and eventually add the null token using SyntaxFactory.Token(SyntaxKind.NullKeyword).

Or I could just write SyntaxFactory.ParseExpression($"{identifier} != null");.

I won’t pretend there aren’t any performance implications of course but sometimes it’s hard to contain myself when I can write something really readable like this. I know, shame on me.
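To illustrate, both of the following produce a tree for the same expression (myVariable is just an example identifier; the types live in Microsoft.CodeAnalysis.CSharp):

// Building "myVariable != null" node by node:
ExpressionSyntax manual = SyntaxFactory.BinaryExpression(
    SyntaxKind.NotEqualsExpression,
    SyntaxFactory.IdentifierName("myVariable"),
    SyntaxFactory.LiteralExpression(SyntaxKind.NullLiteralExpression));

// Letting the parser do the heavy lifting instead:
ExpressionSyntax parsed = SyntaxFactory.ParseExpression("myVariable != null");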

SyntaxToken

This one is closely related to a SyntaxNode but represents certain aspects of it: the SyntaxNode.Modifiers property, in the case of – say – a method, will be a list of SyntaxToken objects. These, too, are created using SyntaxFactory.Token().

SyntaxKind

This enum represents certain aspects of the syntax. Think of keywords, carriage returns, semicolons, comments, certain expressions or statements like a++, etc. You will also use this to construct certain tokens by passing them as an argument to SyntaxFactory.Token(SyntaxKind). Notice how the API is coming together? Eventually, it will allow you to create a new syntax tree with a fairly fluent API — and which is very readable!

Formatter

We all like our code properly formatted and thankfully, we can let Roslyn do that for us! There are a few approaches here: if you’re using a Code Fix then all you need to do is tell Roslyn which nodes you want formatted and the Code Fix will call the formatter for you when reconstructing the tree. You can do this by calling .WithAdditionalAnnotations(Formatter.Annotation) on any node you want formatted. If you’re in an environment that doesn’t do this for you, simply call Formatter.FormatAsync() (or its synchronous brother). You can choose to use the annotation (which I highly recommend because of its ease-of-use) or you can specify the area to format through TextSpan objects (which each node has as a property).
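A quick sketch of both options (Formatter lives in Microsoft.CodeAnalysis.Formatting; the newNode and document variables are assumed to exist already):

// Inside a code fix: annotate the node and let the infrastructure format it when the tree is reconstructed.
var annotatedNode = newNode.WithAdditionalAnnotations(Formatter.Annotation);

// Elsewhere: explicitly format everything that carries the annotation.
Document formattedDocument = await Formatter.FormatAsync(document, Formatter.Annotation);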

DocumentEditor

This is one of those helpers that eluded me for a while. So far I have come across two major ways of creating the new syntax tree: either you change the syntax nodes themselves (though technically they return a new node with similar data considering everything is immutable) by calling .Replace*, .Insert* or .Remove* or you use the wonderful DocumentEditor class which takes away a lot of the pain from this process.

Certainly when you have multiple transformations to the document, this comes in really handy. If you're only changing an individual syntax node then the benefits don't seem that big, but once you start having more than just that one node you quickly see lines of code decreasing by 50% or more (and the complexity follows a similar line). Another important note: if you want to change multiple syntax nodes on the same level in the tree (for example: two methods in the same class), then adapting one of them will invalidate the other one's location if you add or remove characters. This will cause problems when you rewrite the tree: if changing method 1 creates a new tree with a few more characters in that method, method 2 will try to replace the tree at the wrong location. This might sound rather vague but if you ever have this problem, you will instantly recognize it. Suffice it to say that DocumentEditor takes care of this for you.
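A minimal sketch of that workflow (DocumentEditor lives in Microsoft.CodeAnalysis.Editing; the method nodes are assumed to be ones you already retrieved):

DocumentEditor editor = await DocumentEditor.CreateAsync(document);

// Multiple changes against the same snapshot; positions don't shift underneath you.
editor.ReplaceNode(firstMethod, updatedFirstMethod);
editor.ReplaceNode(secondMethod, updatedSecondMethod);

Document newDocument = editor.GetChangedDocument();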

The .With* methods

You've already seen .WithAdditionalAnnotations() to, well, add additional annotations to your node, token or trivia. Keep an eye on this pattern of .With* extension methods, you might find them to be very useful. Certainly when you're constructing a new statement/expression/member/whatever which consists of multiple aspects, you'll find yourself reaching for them constantly. For example, as part of my diagnostic that turns a single-line return method into an expression-bodied member (aka: int Method() => 42;) I had to take the existing method and turn it into a new one with this specific syntax. The code for this became very fluent to read through these statements:
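In broad strokes it boils down to chaining a handful of these calls; in this sketch, method stands for the existing MethodDeclarationSyntax and returnedExpression for the expression behind its single return statement:

MethodDeclarationSyntax expressionBodiedMethod = method
    .WithBody(null)
    .WithExpressionBody(SyntaxFactory.ArrowExpressionClause(returnedExpression))
    .WithSemicolonToken(SyntaxFactory.Token(SyntaxKind.SemicolonToken))
    .WithAdditionalAnnotations(Formatter.Annotation);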

RoslynQuoter

This isn't exactly part of the API but it is so powerful that I can't omit it. The RoslynQuoter tool developed by Kirill Osenkov is absolutely amazing to work with. If you ever want to know how the tree for a certain snippet of code looks, simply put it in the tool and you get a detailed view of how it's built up. Without this, I would have spent many extra hours trying to figure out what a tree looks like by using the debugger to inspect each node's .Parent property. Luckily, no more!

I hope this helps you get started with (or expand your knowledge of) the important parts of the Roslyn API. If you're interested in reading through more complete examples, you can always take a look at VSDiagnostics.


Introducing: RoslynTester

When you create a new solution using the “Diagnostics and Code Fix” template, 3 projects will be created:

  • The portable class library which contains your analyzers and code fix providers
  • A unit test project
  • A VSIX project to install them as an extension

If you look at the 2nd project you will notice that it creates a few files for you from the get-go: classes like CodeFixVerifier, DiagnosticVerifier, DiagnosticResult, etc.
These classes can be used to unit test your analyzers very easily: you pass in your test scenario as a string, specify the resulting diagnostic you expect, optionally provide a transformed code snippet in case of a code fix and that’s it: you’re done.

I was very pleased by what MS provided but it left me with one annoying problem: that testing code will always stay the same unless I change it myself. Often this is an okay scenario but since Roslyn is under active development and the API can still change (and it does), this prevents me from upgrading. When you look at the version of the Roslyn binaries that are provided with that project template, you will notice that they are from Beta-1. At the time of writing, Roslyn has gone past that to Beta-2, RC1 and eventually RC2. Before I realized this, I took to the Roslyn issue tracker with a few issues that apparently couldn’t be reproduced. It was then that I realized that I might have an outdated installation.

When I upgraded from Beta-1 to RC2 (in intermediate stages) I noticed that some breaking changes were introduced: methods being renamed, types being removed, accessibility being restricted, etc. This left me with the choice between either diving into that code or sticking to an outdated API (with bugs). The choice wasn't very hard and I managed to get everything working with the RC2 API. However, because of the outdated project template I don't want to have to manually update those files every time (and I wouldn't want to force you to do it either)!

Therefore I present to you: RoslynTester!

This small NuGet package is exactly what you think it is: those few classes cleaned up and put inside a NuGet package. Now I (and you) can simply remove the auto-generated test files and instead simply reference this package to get all the testing capabilities. I will try to keep this up-to-date with the most recent Roslyn NuGet package so none of this should be a problem anymore. If I come across scenarios where I can expand on it, I will do that too of course. In case you are interested in contributing something, feel free to take a look at the Github page!

Sidenote: if the NuGet package doesn’t show up in your NuGet browser, go to your settings and enable the NuGet v2 package feed. I’m still figuring out NuGet and for some reason it only shows up there.


Getting started with your first diagnostic

With the release of Visual Studio 2015 RC, we also received the pretty much final version of the diagnostics API. This SDK allows us to create our own diagnostics to help us write proper code that is verified against those rules in real-time: you don't have to perform the verification in a separate build step. What's more, we can combine that with a code fix: a shortcut integrated in Visual Studio that provides us a solution to what we determine to be a problem.

This might sound a little abstract but if you've been using Visual Studio (and/or Resharper) then you know what I mean: have you ever written your class name as class rather than Class? This is a violation of the C# naming conventions and Visual Studio will warn you about it and provide you a quick fix to turn it into a properly capitalized word. This is exactly the behaviour we can create, and it is integrated seamlessly in Visual Studio.

I gave this a go exactly 1 year ago but decided against continuing because there wasn’t a final version of it yet and it was really cumbersome to test. That combined with the fact that it was hardly usable (not many people had Visual Studio 2015 yet) made me wait until it had matured and was properly supported. Luckily, it seems that time has finally come.

The setup

In order to get started you need a few things installed:

You will notice that each of these downloads is specific to the RC version, which is what I'm using at the time of writing. Make sure that you don't mix up CTP-X, RC and RTM versions since that is bound to create issues.

Creating an Analyzer

Even though the official template says "Diagnostic and CodeFix", the term "Analyzer" is used just as often as, if not more than, "diagnostic". I don't believe there are hard conventions on this yet, so use whatever feels more natural. I personally prefer to append "Analyzer" to my.. diagnostics.

In Visual Studio, create a new project (and a solution if needed) using the “Diagnostic with Code Fix (NuGet + VSIX)” template, found under “Extensibility”. This immediately indicates a very useful feature: you will be able to distribute your analyzers using NuGet and/or you can choose to distribute them as an installable extension instead. This makes it very easy to deploy in your environment and share it with others should you want to.

Create the project

Now on to the real work: we will create an analyzer that shows a warning when we throw an ArgumentException without passing a parameter to it. The use of this is that we will make sure that we’re never just going to throw such an exception without specifying what argument is being annoying in the first place.

After your template is created you will notice 3 projects in the solution: a portable class library which contains your analyzers and code fix providers, a test project and an extension project. You can ignore the latter one and we’ll focus on the first two.

When you look at your analyzer you see an entire example for you to start from. The nature of the diagnostic we will create however is such that we will analyze a syntax node, not a symbol. The code we implement is very straightforward:

I've disregarded globalization because it is an extra hurdle to readability and I don't expect my project to ever become popular enough that it warrants supporting different languages. Aside from that you'll notice that I also used a lovely expression body for the SupportedDiagnostics property.

The first thing I did was register my analyzer on a syntax node, a throw statement to be precise. This evidently means that each time a throw statement is encountered as the tree is walked, my analyzer will execute.

The actual implementation is very straightforward:

  • Verify the node is a throw statement
  • Verify the expression creates a new object
  • Verify the new object is of type ArgumentException
  • Verify there are no arguments passed to the constructor

And that’s it. If these 4 conditions are true, I believe there to be an empty ArgumentException call and I report a warning on that location.
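In broad strokes, an analyzer that performs those four checks looks something like this (the diagnostic id, the messages and the simplified type check are placeholders you would adapt to your own conventions):

using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class EmptyArgumentExceptionAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "EX0001",
        title: "Empty ArgumentException",
        messageFormat: "An ArgumentException is thrown without specifying the offending argument",
        category: "Usage",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context) =>
        context.RegisterSyntaxNodeAction(AnalyzeThrowStatement, SyntaxKind.ThrowStatement);

    private static void AnalyzeThrowStatement(SyntaxNodeAnalysisContext context)
    {
        // 1. The node is a throw statement (guaranteed by the registration above).
        var throwStatement = (ThrowStatementSyntax) context.Node;

        // 2. The thrown expression creates a new object.
        var objectCreation = throwStatement.Expression as ObjectCreationExpressionSyntax;
        if (objectCreation == null)
        {
            return;
        }

        // 3. The new object is of type ArgumentException (simplified name-based check).
        var typeSymbol = context.SemanticModel.GetSymbolInfo(objectCreation.Type).Symbol;
        if (typeSymbol == null || typeSymbol.MetadataName != "ArgumentException")
        {
            return;
        }

        // 4. No arguments are passed to the constructor.
        if (objectCreation.ArgumentList != null && objectCreation.ArgumentList.Arguments.Count > 0)
        {
            return;
        }

        context.ReportDiagnostic(Diagnostic.Create(Rule, throwStatement.GetLocation()));
    }
}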

Testing the analyzer

If you now set the VSIX project as your startup project and press the big green "Start" button, a new Visual Studio instance will be launched. Use it to create a new project and you will notice that your analyzer is included. You can now let yourself loose on all sorts of scenarios involving ArgumentExceptions!

However, I wouldn't be me if I didn't look into unit testing instead. Luckily, this is very easily done with this release. In fact, it's so easy that there's really not much to look into: you create a test class that inherits from CodeFixVerifier, override GetCSharpDiagnosticAnalyzer and GetCSharpCodeFixProvider as needed, write your source code as plain text and use the helper functions VerifyCSharpDiagnostic and VerifyCSharpCodeFix to assert whether or not a diagnostic/code fix should occur at the given position. If nothing should occur you just pass in the source code as a string; if you do expect something, you pass in a DiagnosticResult as well.

In code:
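A test for the scenario above can then look something like this; the id, message and location values are placeholders that have to match whatever your analyzer actually reports:

[TestClass]
public class EmptyArgumentExceptionAnalyzerTests : CodeFixVerifier
{
    protected override DiagnosticAnalyzer GetCSharpDiagnosticAnalyzer() => new EmptyArgumentExceptionAnalyzer();

    [TestMethod]
    public void EmptyArgumentException_InvokesWarning()
    {
        var source = @"
using System;

class MyClass
{
    void MyMethod(int value)
    {
        throw new ArgumentException();
    }
}";

        var expected = new DiagnosticResult
        {
            Id = "EX0001",
            Message = "An ArgumentException is thrown without specifying the offending argument",
            Severity = DiagnosticSeverity.Warning,
            Locations = new[] { new DiagnosticResultLocation("Test0.cs", 8, 9) }
        };

        VerifyCSharpDiagnostic(source, expected);
    }

    [TestMethod]
    public void ArgumentExceptionWithArgument_InvokesNoWarning()
    {
        var source = @"
using System;

class MyClass
{
    void MyMethod(int value)
    {
        throw new ArgumentException(""value"");
    }
}";

        VerifyCSharpDiagnostic(source);
    }
}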

That’s how easy it now is to create your own slim Resharper. Your code is evaluated against your rules as you type, you can write them yourself very easily and you can export them as needed. I will definitely port the few analyzers I created as a test last year and expand it with many others I can think of and I encourage you to do the same (or contribute).

For more information on how to get started with these analyzers I recommend a couple of resources:


Hello Linux!

Like many other .NET developers I have been following the Build conference that's going on right now. One of its biggest announcements (so far) was the release of Visual Studio Code and the accompanying CoreCLR for Linux and Mac. It sounds nice and all but I wanted to try this out myself. I have decided to get a Console Application working in Ubuntu 14.04 since we've all seen by now how to deploy an ASP.NET web application. While reading this post, keep in mind that I have basically never used Linux so it's possible I jumped through hoops that shouldn't have been necessary. In case I did, leave me a comment so I can learn from it. Note that in this blog post I will be using the Mono runtime and not the .NET Core one. At the time of writing there was only documentation available for the former; however, you can always get started with .NET Core here.

One of the things I am pleasantly surprised with is that there are several Yeoman templates available to create a solution structure. Whereas Visual Studio does that for you, it would have been a serious pain to have to do this yourself each time you create a new project in Visual Studio Code.

Without further ado, let’s get to it!

The setup

I installed Ubuntu 14.04 on an old laptop, which means it’s an entirely fresh installation. If you have already been using Linux and/or Mono then you probably know what steps you can skip.

We start by following the instructions here. You can see that we follow the ASP.NET documentation even though we're building a Console Application: the setup for either is very, very similar, with only about two commands being different.

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list

sudo apt-get update

sudo apt-get install mono-complete

Afterwards it’s time to install the DNVM. For more information about the .NET Version Manager you can take a look here (Github) and here (MSDN).

curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh

Next up is NodeJS. This will allow us to install Yeoman and in turn generate the project templates.

sudo apt-get install nodejs

sudo apt-get install npm

One problem I had here was that there was a naming conflict between node and nodejs which are apparently different packages. This is solved by executing

sudo apt-get install nodejs-legacy

Creating the solution

Afterwards create your own directory where we store our project. I did this in $HOME/Documents/code/yo-aspnet.

Now that we have this we can generate our project structure by first installing yo:

sudo npm install -g yo

and subsequently the generator:

sudo npm install -g generator-aspnet

When this is done, it’s time to pick the kind of project template we want to generate. Start yo

yo aspnet

and you will be prompted to select the kind of application you’re interested in. You should see a screen like this:

Choosing the solution template

Use the arrow keys to select "Console Application" and press enter. Give your project a name (in my case: "HelloWorld") and everything should now have been created:

Console solution is generated

Notice how it's a very minimal template that only consists of a .gitignore file, a main class and the project configuration file. There is no AssemblyInfo, no bin/obj folders, no app.config, etc.

More information about the generator can be found here.

Installing Visual Studio Code

Time to get that new editor up and running!

Go to https://code.visualstudio.com and download the zip file. Navigate to your Downloads folder and create a folder which will hold the unzipped content. I just left this in Downloads, you might as well put this elsewhere (which you probably should if you use Linux properly).

mkdir VSCode

unzip VSCode-linux-x64.zip -d VSCode

cd VSCode

And start the editor with


Here's one tricky thing: if you now look at Visual Studio Code, there's a good chance you're seeing something like "Cannot start Omnisharp because Mono version >=3.10.0 is required". When you look at your own installed Mono version (mono --version) you'll notice that you have 3.2.8 installed. Likewise, if you now try to execute the HelloWorld app, you will receive TypeLoadException errors.

Luckily this can be easily solved: install the mono-devel package. This will overwrite your installed Mono with version 4.0.1, which was released just yesterday, and everything will work flawlessly.

sudo apt-get install mono-devel

Executing the Console application

There’s just one last thing left to do: create a .NET execution environment and execute our app.

First create a default execution environment:

dnvm upgrade

and execute the app (from the directory where Program.cs is contained):

dnx . run

You should now see Hello World printed out!

Hello World!

How to configure a custom IdentityUser for Entity Framework

I am getting accustomed to the ASP.NET Identity framework and let me just say that I love it. No more boring hassle with user accounts: all the traditional stuff is already there. However, you'll often find yourself wanting to expand on the default IdentityUser class and add your own fields to it. This was my use case as well and since I couldn't find any clear instructions on how this is done exactly, I decided to dive into it especially for you! Well, maybe a little bit for me as well.

The example will be straightforward: extend the default user by adding a property that holds his date of birth and a collection of books. For this there are two simple classes:

The Book class is straightforward. The ApplicationUser class isn’t very complex either: inherit from IdentityUser to get the default user implementation. Furthermore there is the MyContext class which contains two tricky aspects:

First of all: notice how we inherit from IdentityDbContext<ApplicationUser>. The specialized DbContext is important because it provides us with all the user-related data from the database, and the ApplicationUser type parameter is important because it defines what type the DbSet<T> Users will be defined as. Before I found out there was a generic variant of the context, I was trying to make it work with the non-generic type and separating user and user information: not pretty.

The second important aspect here is base.OnModelCreating(modelBuilder). If you do not do this, the configuration as defined in IdentityDbContext will not be applied. Since this isn't necessary with a plain old DbContext, I figured it was worth mentioning since I for one typically omit this call.
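Put together, that gives us roughly the following (the exact property names and the connection string name are illustrative):

public class ApplicationUser : IdentityUser
{
    public DateTime DateOfBirth { get; set; }
    public virtual ICollection<Book> Books { get; set; }
}

public class Book
{
    public int Id { get; set; }
    public string Title { get; set; }
    public virtual ICollection<ApplicationUser> Users { get; set; }
}

public class MyContext : IdentityDbContext<ApplicationUser>
{
    public MyContext() : base("DefaultConnection") { }

    public DbSet<Book> Books { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Without this call the Identity configuration (AspNetUsers and friends) is not applied.
        base.OnModelCreating(modelBuilder);
    }
}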

Finally all there is left is demonstrating how this is used exactly. This too is straightforward and requires no special code:
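Roughly like this; the user's values are made up, AsyncContext comes from the Nito.AsyncEx package and UserStore/UserManager from the ASP.NET Identity packages:

class Program
{
    static void Main(string[] args)
    {
        AsyncContext.Run(() => MainAsync());
    }

    private static async Task MainAsync()
    {
        using (var context = new MyContext())
        {
            var store = new UserStore<ApplicationUser>(context);
            var manager = new UserManager<ApplicationUser>(store);

            var user = new ApplicationUser
            {
                UserName = "jeroen",
                DateOfBirth = new DateTime(1990, 1, 1),
                Books = new List<Book> { new Book { Title = "A very good book" } }
            };

            var result = await manager.CreateAsync(user, "Some_Password1");
            Console.WriteLine(result.Succeeded);
        }
    }
}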

Notice how I use AsyncEx by Stephen Cleary to create an asynchronous context in my console application. After this you simply create a manager around the store which you pass your context to and voilà: your user is now inserted and everything works perfectly.

Resulting database

Notice how the date of birth is in the same table (AspNetUsers) as all other user-related data. The second table displays the books and the third the many-to-many table between users and books.

All things considered it is fairly straightforward as to how it works but there are a few tricky aspects that make you scratch your head if you’re not too familiar with the framework yet.


How to unit-test your OWIN-configured OAuth2 implementation

I recently started implementing OAuth2 in a project by following this wonderful blog series by Taiseer Joudeh. So far everything is going great but one thing I’m missing is unit-tests to make sure everything works fine in the way that I’ve set it up. There’s a lot more to it than just configuring a few options for OAuth so I’d like to have peace of mind on this vital aspect of my application.

Unit-testing in isolated, minimal environments is great to demonstrate a concept but it's not always as easy to implement it in an actual environment where there are a lot more components at play. This situation presented itself to me in the form of dependency injection: I have Unity wired up to provide me with repositories and contexts (all in the spirit of unit-testing the separate layers) but the issue arises when I have to adhere to the limitations imposed on me by OWIN, Unity and proper unit-testing principles.

What follows is an example of how you can unit-test your OWIN-powered, Unity-injected Web Api 2 OAuth implementation. For the full implementation of this project you can take a look at Moviepicker on Github.

I will assume that you have followed the first part of Taiseer's blog series. However, in order to make sure we are on the same page, here are some relevant pieces of code:

This is where we verify the user and client’s credentials. Notice that the client is always considered valid since at this point of the series we assume a single client. Later on, this is expanded to multiple clients.
One thing to note here is that I have a user repository injected in my provider: this abstraction on top of the database context allows me to set the stage in the unit-test.

Next up is the StartUp class which replaced the Global.asax entry point and – amongst other things – configures the OWIN layer.

What catches the eye here is the configuration of Unity, more specifically that it is separated from the rest of the configuration and is in fact overridable (notice the virtual keyword). Similarly, you can see that the provider being passed to the OAuth authorization options uses our DI-approach to retrieve the repository we decide to inject.

The reason I went with this approach is simply because I didn't have another choice: I can't inject a resource into the StartUp class simply because that class is the entry point (or at least, I don't know of any way to do so. If you do, let me know!). So what do we do when we can't inject our dependencies? That's right: we extract the injected behaviour and create a subclass that provides the test-specific behaviour instead. If you are up-to-date with your design pattern knowledge, you may recognize the Template Method Pattern in this.

At first sight this code must strike you as troublesome: there are static members in there! The reason why becomes more clear when you take a look at what the Microsoft.Owin.Testing package provides for us: creating an in-memory test server uses a creator pattern which does not allow you to pass in arguments, nor does it provide you any control over that startup configuration. In essence this means that we cannot inject our repositories the normal way.

This should make it more clear:
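In broad strokes it looks like the sketch below; RegisterDependencies and the repository and context types are placeholders for whatever your own StartUp class exposes as virtual:

public class TestStartup : Startup
{
    // Static on purpose: TestServer.Create<T>() gives us no way to pass constructor
    // arguments, so the tests assign fresh fakes to this field before every run.
    public static IUserRepository UserRepository;

    protected override void RegisterDependencies(IUnityContainer container)
    {
        container.RegisterInstance(UserRepository);
    }
}

[TestClass]
public class AuthenticationTests
{
    private TestServer _server;

    [TestInitialize]
    public void Initialize()
    {
        var connection = Effort.DbConnectionFactory.CreateTransient();
        TestStartup.UserRepository = new UserRepository(new UserContext(connection));
        _server = TestServer.Create<TestStartup>();
    }

    [TestMethod]
    public async Task Token_WithUnknownUser_ReturnsBadRequest()
    {
        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "grant_type", "password" },
            { "username", "nobody" },
            { "password", "wrong" }
        });

        var response = await _server.HttpClient.PostAsync("/token", body);

        response.StatusCode.Should().Be(HttpStatusCode.BadRequest);
    }
}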

You may recognize the test database context creation approach in the Initialize() method since this was presented in my post on unit-testing with Effort. You can also see how we use these static fields exactly: before every test the repositories are recreated and assigned to our new configuration, essentially overwriting anything that might have remained from a previous run. Since MSTest executes tests sequentially by default, this should not pose any troubles.

Looking at a particular test we can tell it is very straightforward to set up the OWIN middleware: create an in-memory server by passing it our configuration and.. you’re done. The Owin testing package provides you some helpful tools as well to make it more comfortable building the requests.

As always, the asserting package used is FluentAssertions and on top of that there are the Microsoft.Owin.Hosting, Microsoft.Owin.Host.HttpListener and Microsoft.Owin.Testing packages.

Testing your OAuth implementation can be done (should be done?) without opening your browser and trying every call in the pipeline just to make sure everything still works. By using the provided Microsoft.Owin.Testing package you can easily mimic the actual Owin layer although there are some hoops to jump through when you have additional complexity like dependency injection. Nevertheless I am very pleased by the way unit-testing is officially supported (take some notes, Entity-Framework!).


Unit-testing Web Api routes and parameter validation

Let’s talk about routing. If you’ve ever developed a web application then you know the hassle you have with the constant “Resource not found” or “Multiple actions match the request” errors. What if I told you you could fix all this without ever having to open a browser?

That’s right: we’ll unit-test our routes! As an added bonus I’ll also show how you can unit-test parameter validation since that’s probably one of the most important things to do when creating a (public) API.

Setting up the environment

Create a new Web API project

Create Web API project

We don’t need any sort of hosting so you can just leave Azure unchecked. We’ll use ASP.NET MVC in our test project but not in the Web API itself so you can just use an empty project with Web API checked.

Create a model and backing repository

I will not go into deeper detail about a proper repository implementation but if you’re interested, you can always check my post about unit-testing Entity-Framework for more information about that layer of your project.

Implement a basic controller

Now that we’ve got that out of the way, let’s create our controller with a few API endpoints.

By specifying the [RoutePrefix] attribute at class level, we essentially end up with "api/books/{id}" and "api/books" as API endpoints. The [ResponseType] attribute makes no functional difference but is used when generating documentation. Personally I prefer to always add it, considering the actual return type is hidden behind the IHttpActionResult.

Set up the test environment

I like to use MVC Route Tester for this. As the name implies it is focused on ASP.NET MVC but works just fine for ASP.NET Web Api as well. Use NuGet to add MvcRouteTester.Mvc5.2, FluentAssertions and Microsoft ASP.NET MVC to your test project.

Creating our first tests

Now that the entire environment is set up, let's take a look at a basic test. What we'll do here is verify that our routing configuration has a route configured that corresponds with what we expect AND calls the method we expect it to. Here's the code:

You’ll notice that we have to create a new HttpConfiguration object. This type’s name already conveys what it’s about: it contains the configuration of your HTTP server. The only aspect we care about is its routing purposes so we can just create an empty config without setting any properties. Once that is done, we inject it into our WebApi project by calling WebApiConfig.Register(HttpConfiguration) which you can find under the App_Start folder. Since it’s just a basic project it will generate the routes by mapping the attributes and the default route.

The contents of the tests are straightforward: first we test whether such a route exists and after that whether it is mapped to the correct method. Notice how it doesn’t matter what argument you pass in to BookController.InsertBook(Book): whether it’s null or new Book() won’t make a difference although you should be more wary about this when you have a scenario involving method overloading.
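Put into code, those two checks can look roughly like this; the exact assertion helpers may differ slightly between MvcRouteTester versions:

[TestClass]
public class BookRouteTests
{
    private HttpConfiguration _config;

    [TestInitialize]
    public void Initialize()
    {
        _config = new HttpConfiguration();
        WebApiConfig.Register(_config);
        _config.EnsureInitialized();
    }

    [TestMethod]
    public void InsertBook_RouteExists()
    {
        RouteAssert.HasApiRoute(_config, "/api/books", HttpMethod.Post);
    }

    [TestMethod]
    public void InsertBook_MapsToExpectedAction()
    {
        _config.ShouldMap("/api/books").To<BookController>(HttpMethod.Post, c => c.InsertBook(null));
    }
}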

What about constraints?

New in Web API 2 are route constraints. You've already seen one of them in the form of {id:int}, which indicates that only requests matching that URL form, and where the id can be parsed as an integer, should be handled by that method.
As a way of showcasing this behaviour and proving that it can be tested, I will add two additional endpoints which handle the ids above 15 and below 15 respectively.
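For example, something along these lines (the boundary values, method names and route templates are illustrative):

[Route("{id:int:min(16)}")]
public IHttpActionResult GetBookWithHighId(int id)
{
    return Ok(id);
}

[Route("{id:int:max(15)}")]
public IHttpActionResult GetBookWithLowId(int id)
{
    return Ok(id);
}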

Testing is as easy as ever:

What about parameter validation testing?

One last important aspect to testing your API is verifying user input. One thing to realize here is that the ASP.NET framework does a lot for us when we deploy our website. You’ve already noticed that we explicitly have to create the HttpConfiguration object and inject that in our WebApiConfig. Now we’ll drop that aspect since we’ll not be testing what ASP.NET does but we still have to use some of its functions, more specifically the ability to validate the incoming object.

Luckily this can be done extremely easily by calling ApiController.Validate(object), which will look at each field and its attributes to determine validity.
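A sketch of such a test, assuming the Book model carries a data annotation such as [Required] on its Title and that BookController receives its repository through the constructor:

[TestMethod]
public void InsertBook_WithInvalidBook_HasInvalidModelState()
{
    var controller = new BookController(new BookRepository())
    {
        // Validate() needs a configuration to resolve its validators from.
        Configuration = new HttpConfiguration()
    };

    controller.Validate(new Book());

    controller.ModelState.IsValid.Should().BeFalse();
}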


That concludes this short overview on how to unit-test your public API. While some might question the usefulness of testing your routes, I like the fact that I can be certain all of that works without even having to fire up my browser once. Certainly when you start using conditional routing this can be a very convenient way to make sure everything works as intended. The tests are executed very fast and take very little time to write, which makes it all the more worth it.


Properly testing Entity-Framework with Effort

If you have ever needed to use a database in C# then chances are you ended up with Entity Framework as an ORM layer. It allows you to query the database with LINQ rather than explicit SQL queries making for a much more comfortable data layer. However if you’re also the kind of person that likes (has?) to write tests then you might have noticed that the DbContext class is rather annoying to work around.

Personally I am not a fan of mocking at all: it requires a lot of setup work which pollutes the test and it makes your code brittle (if the mock is configured to return X but you change the implementation to return Y, the tests will still work despite the discrepancy with the actual implementation).

The idea behind what you will read is straightforward: instead of using an actual database hosted locally or on Azure, we'll create one in-memory for each test. This allows us to test each layer of our project in a fast manner without having to resort to creating a fake implementation of our actual code. There is a thin line between unit testing and integration testing that we may cross here but I don't consider this a problem: fast, reliable and clean tests get precedence over ideology in any scenario.

The first part of this article will take you through the steps to set up a generic project that uses Entity-Framework, so if you want to go straight to implementing your tests, skip to part 2.

Without further ado, let me take you through the process of getting Entity-Framework in a testable state.

Setting up the solution

Create a new solution with a class library (“Database”) and a unit test project.

The former will be where we talk to the database and the latter will contain our tests.

Add the Book model and the IBookRepository interface.

Install Entity-Framework in your class library project and add your DbContext implementation.

Execute Enable-Migrations on your class library project.

Entity-Framework Migrations allows us to keep the database up-to-date with the model used in our code. You can find more information about this here.

Implement IBookRepository.

This will be a very basic implementation. I'm using constructor injection to pass in the right context to make sure the caller always passes in an object (for our own sake, let's assume people don't pass in null values).
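Something along these lines (the IBookRepository members aren't set in stone; AddBook and GetAllBooks are placeholders):

public class BookRepository : IBookRepository
{
    private readonly LibraryContext _context;

    public BookRepository(LibraryContext context)
    {
        _context = context;
    }

    public void AddBook(Book book)
    {
        _context.Books.Add(book);
        _context.SaveChanges();
    }

    public IEnumerable<Book> GetAllBooks()
    {
        return _context.Books.ToList();
    }
}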

Configuring the project for testing

Create the Unit Test project.

Add a reference to your class library project so you can use the LibraryContext object you created.

Install Effort.EF6 in your test project.

This library contains the magic that makes all this possible: Effort will allow us to easily create a new database in-memory. You’ll notice that it adds additional references to Entity-Framework and NMemory which basically describes its purpose already.

Create an additional LibraryContext constructor.

This constructor will allow us to pass in the connection created by Effort. Note the importance of the second parameter to be true: “The connection will not be disposed when the context is disposed if contextOwnsConnection is false“. Obviously we don’t want that connection to linger around when the context is already disposed, that would mean a lot of left-over trash. You’ll notice soon that the connection we create is explicitly chosen to be disposed after every test run, guaranteeing a clean test each time we run it.
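In other words, next to the default constructor the context gets an overload that accepts the connection Effort hands us (the connection string name in the default constructor is just an example):

public class LibraryContext : DbContext
{
    public LibraryContext() : base("LibraryContext") { }

    public LibraryContext(DbConnection connection) : base(connection, contextOwnsConnection: true) { }

    public DbSet<Book> Books { get; set; }
}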

Implement tests.

These three tests are basic and show how little you actually have to change your testing approach to use this. What catches the eye is DbConnectionFactory.CreateTransient(). The corresponding documentation is clear enough:

Creates a DbConnection object that rely on an in-memory database instance that lives during the connection object lifecycle. If the connection object is disposed or garbage collected, then underlying database will be garbage collected too.

You can see that our connection object is created in the Initialize() method and passed to the LibraryContext and subsequently the BookRepository. In essence this means that, since the connection is local and the context and repository are overwritten after each test, the connection and database will be disposed after each test. This results in entirely separate database instances for each test.

Note that for the tests I used FluentAssertions.
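As a rough sketch of how those tests are wired up (re-using the placeholder repository methods from earlier):

[TestClass]
public class BookRepositoryTests
{
    private LibraryContext _context;
    private BookRepository _repository;

    [TestInitialize]
    public void Initialize()
    {
        // A brand new in-memory database for every single test.
        var connection = Effort.DbConnectionFactory.CreateTransient();
        _context = new LibraryContext(connection);
        _repository = new BookRepository(_context);
    }

    [TestMethod]
    public void AddBook_InsertsBookInDatabase()
    {
        _repository.AddBook(new Book { Title = "Some title" });

        _context.Books.Count().Should().Be(1);
    }
}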


Of course, there just had to be problems along the way, otherwise everyone would be using this already. The first one you'll run into is this:

System.InvalidOperationException: Migrations is enabled for context ‘LibraryContext’ but the database does not exist or contains no mapped tables. Use Migrations to create the database and its tables, for example by running the ‘Update-Database’ command from the Package Manager Console.

Evidently this is irrelevant to us since the entire idea behind our approach is that we don't have a database to start with but instead create one for each test. Entity-Framework notices it has migrations planned because it sees the configuration present in our Migrations folder, so we can use this information to work around the issue: simply create a new Class Library project and move your Migrations folder to it. If you execute your tests again you will notice that they now all pass.

Didn’t you just break the Migration configuration?

Yes, I did. You can verify this (and do so to follow along) by adding a Console Application project and adding a reference to the “Database” project and Entity-Framework.
Afterwards, add some test code to use as your actual implementation:

If you now test your program with an actual implementation you’ll notice you might receive an InvalidOperationException. This happens because you don’t have a reference to EntityFramework.SqlServer in your console application project. Use the Reference Manager tool to manually browse to this dll and add it to your project. I prefer to take one from the bin folder in my “Database” project for ease but you might want to place it in a folder dedicated to sharing dll’s in your project.

Add a reference to the EntityFramework.SqlServer dll

Once this is done, you can now use the database with your existing model. But what happens when we want to add a field Author to it? Our Migrations configuration is dead so we’ll have to fix that first.

Update the model.

We’ll slightly update our Book model to account for an optional Author parameter like this:

Re-configuring Migrations

You’ll notice that if you now try to add a new migration, the Package Manager Console will only display errors: “The EntityFramework package is not installed on project ‘Migrations’.” and “No migrations configuration type was found in the assembly ‘Database’” depending on which project you target with your command.

First of all: install Entity-Framework in the Migrations project. If you now try to call Enable-Migrations it will tell you that it can’t find a context type inside this project (which is true, it’s in our “Database” project). Luckily we can specify the project in which it should look!

If you now use Enable-Migrations -ContextProjectName "Database" -Force it will configure Migrations with the context it finds in “Database”. The -Force is there to overwrite the existing Migration we moved to the Migrations project.

We can now add our changes using Add-Migration "author field" and a subsequent Update-Database.

Looking at the database we can see it added a new column to it:

author field added

If we change our console application a little bit to insert a book with an author specified, we see it works perfectly:

second author added

In order to verify our tests still work just fine we add a fourth that tests whether a book with author can be inserted:

Lo and behold: all tests pass without having to change anything in the test project; we can immediately test the changes in our model. Also notice how incredibly quickly the tests finished, taking into account that we basically recreated the database 4 times. Aside from a little warmup time, tests are executed pretty much instantly.



This article describes how you can create an in-memory database identical to the one in production and use it to test your exact interaction. Whether or not that is something you should want to do in the first place is up to you, but I believe it is very useful, certainly when there is more complicated logic inside your repositories. This is an important piece of code that should be tested, but mocking is brittle at best in my eyes.

You can tell from the tests I’ve shown that there is very, very little noise in your tests which is an entirely different thing when you have to use mocks to return the correct data from every method call. It also prevents you from forgetting about mocking certain intertwined method calls which would then suddenly return faulty data because they use an actual implementation rather than one controlled by your testing environment. Effort makes sure this isn’t an issue since your entire test environment is under control by default.

If you’re interested in empirical evidence: I’ve used this in a small ASP.NET Web Api project with around 300 end-to-end tests (REST call comes in -> Interact with database -> Return data). Each of these tests created a new isolated database, interacted with it in some way and verified the responses that were returned. The overall execution time of the test suite was around 5 seconds.

All things considered I am a big fan of using this method to bypass mocks and verify the interaction with the database works as it should.
