Here be dragons: string concatenation

String concatenation can be done in several ways, each with its own advantages and use cases. In this blog post I will take a closer look at 4 different ways of concatenating strings and at how these are implemented internally. By the end of this post I hope to have made clear when they are useful, when they are not, and how they compare to each other implementation-wise.

The approaches discussed are:

  • Simple concatenation
  • string.Concat()
  • StringBuilder
  • string.Join()

Simple concatenation

string a = "Hello" + "World";

String concatenation is commonly done using the + operator and passing one or two strings as its operands. Notice that something like string a = "abc" + new FileLoadException(); is perfectly legal: you’ll see in a bit why this is.

You might have been wondering what happens exactly when you use this form of concatenation. In order to find out, we have to look at the generated IL code. Now, if you simply look at the generated IL given the string above you will notice that it looks like this:

IL_0000: nop
IL_0001: ldstr "HelloWorld"
IL_0006: stloc.0 // a
IL_0007: ret

Because our string concatenation is considered a so-called "compile-time constant expression", the compiler has already turned it into a single string for us.

We can bypass this optimization by defining it as two separate variables and concatenating these:

string x = "Hello";
string y = "World";
string a = x + y;

Looking at the IL again we now see this:

IL_0000: nop
IL_0001: ldstr "Hello"
IL_0006: stloc.0 // x
IL_0007: ldstr "World"
IL_000C: stloc.1 // y
IL_000D: ldloc.0 // x
IL_000E: ldloc.1 // y
IL_000F: call System.String.Concat
IL_0014: stloc.2 // a
IL_0015: ret

That’s more like it! We can tell from this that first the two strings are loaded into memory (separately!) and, more interestingly, they are put together using the string.Concat() method.


string.Concat()

We've seen now that simple string concatenation results in a call to string.Concat(). Depending on the types passed in, it will choose between string.Concat(string, string) (when both operands are strings) and string.Concat(object, object) (when at least one of them is not). If you replace the simple concatenation with an explicit call to string.Concat(x, y) you'll notice that you receive the exact same IL.

At this point we can take a look at the internals and see what goes on exactly. Looking at the source code of string.Concat(object, object) we can see it is a pass-through to string.Concat(string, string): it calls ToString() on both operands, even if one of them was already a string.

Our next step is more interesting: after the usual argument validation and fast paths, we see a few very interesting methods being called:

String result = FastAllocateString(str0Length + str1.Length); // allocate the full result up front
FillStringChecked(result, 0, str0);                           // copy str0 to the start
FillStringChecked(result, str0Length, str1);                  // copy str1 right behind it

FastAllocateString(int) is an external method that allocates the space needed for the concatenated string, which is evidently the sum of both lengths. Afterwards, FillStringChecked(string, int, string) copies the contents of the given string (the third parameter) into the aggregate one we just allocated, starting at a certain index. At that point the aggregate string is filled and can be returned to the caller. You might have noticed that FastAllocateString returns a string and not a char array. This is important because it changes the way we have to fill it: with a char array we could simply access each entry directly and insert the correct value. Since it is a string, however, we have to use an unsafe context and pull out some C to copy the contents into the correct memory location. The benefit is that you don't have to loop (explicitly) to move the string around.
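
On modern .NET you can approximate this allocate-then-fill pattern yourself with string.Create, which likewise writes characters straight into a freshly allocated string. A minimal sketch, not the actual BCL code:

static string ConcatSketch(string str0, string str1)
{
    // string.Create allocates the string up front (compare FastAllocateString)
    // and lets us fill its memory directly (compare FillStringChecked).
    return string.Create(str0.Length + str1.Length, (str0, str1), (destination, state) =>
    {
        state.str0.AsSpan().CopyTo(destination);
        state.str1.AsSpan().CopyTo(destination.Slice(state.str0.Length));
    });
}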

What about more concatenations?

If you look at the overloads you'll notice that there are versions with 2, 3 and 4 parameters; after that you have to use the one that takes a single parameter: a collection. We can see this in action when we try to concatenate 5 strings:

string v = "Strings";
string w = "Are";
string x = "Fun";
string y = "Hello";
string z = "World";
string a = v + w + x + y + z;

generates as IL:

This clearly shows us that a new array is created (see instruction IL_0021) which is then passed to the string.Concat() call.
The reason there are specific overloads for 3 and 4 arguments is performance: the most common concatenation scenarios involve 2, 3 or 4 operands, so these are treated separately to gain performance in the majority of cases.

For this implementation we have to take a look at the string.Concat(params string[]) method. The implementation here is fairly straightforward: just as in the previous methods we calculate the total length of the resulting string and afterwards fill it up with the string.ConcatArray(string[], int) method. Also interesting to notice is the threading consideration: the array of strings is defensively copied to a new, local one, so another thread can't change it between calculating the total length and filling the result.
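
The overall strategy can be sketched like this (illustrative code, not the actual BCL implementation, which fills a string rather than a char array):

static string ConcatSketch(params string[] values)
{
    // Defensive copy: a mutation of 'values' by another thread can no longer
    // invalidate the total length we are about to compute.
    string[] copy = new string[values.Length];
    int totalLength = 0;
    for (int i = 0; i < values.Length; i++)
    {
        copy[i] = values[i] ?? string.Empty; // null behaves like an empty string
        totalLength += copy[i].Length;
    }

    // Allocate once, then copy every part into its slot.
    char[] result = new char[totalLength];
    int position = 0;
    foreach (string value in copy)
    {
        value.CopyTo(0, result, position, value.Length);
        position += value.Length;
    }
    return new string(result);
}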


StringBuilder

It's time to add the notorious StringBuilder to the mix. We'll work with a very simple scenario:

StringBuilder sb = new StringBuilder();
sb.Append("Hello");
sb.Append("World");
string a = sb.ToString();

A StringBuilder is in essence a wrapper class around an array of chars: each time you append something to it, it inserts the given string's contents (aka: the chars) in the next available space of its internal array. This follows a similar idea to string.Concat() but the big benefit here is that the resulting array is maintained over multiple calls instead of a single one. You might see where I'm going with this: string.Concat() creates a new string object for each call. A StringBuilder however only does this when its ToString() method is called, regardless of how often you call StringBuilder.Append(). The more objects you allocate, the more you strain the garbage collector and the sooner you trigger a garbage collection. Nobody likes collecting garbage if it could have been prevented altogether.

The obvious real-life scenario is when you use a loop.
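
For example, a hypothetical snippet that builds one string out of many parts:

string[] words = { "Strings", "Are", "Fun" };

string slow = "";
foreach (string word in words)
{
    slow += word; // allocates a brand new string every iteration
}

StringBuilder sb = new StringBuilder();
foreach (string word in words)
{
    sb.Append(word); // only writes into the internal char buffer
}
string fast = sb.ToString(); // a single string allocation at the very end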


string.Join()

Last but not least: string.Join(). Admittedly, this probably isn't the use case most people have in mind for this method but since it still fits, I thought it interesting to include in this overview. When we look at the code we notice something interesting: it uses a StringBuilder internally!
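
Concatenating through string.Join() simply means passing an empty separator; with a non-empty separator you also get the "no trailing delimiter" bookkeeping for free:

string[] parts = { "Hello", "World" };
string glued = string.Join("", parts);  // "HelloWorld"
string csv = string.Join(", ", parts);  // "Hello, World"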

I believe you’ll find this to be a common sight: many methods that have to concatenate strings use a StringBuilder internally.


Conclusion

Reflecting on these four approaches we can group them under two genuinely different ones: string.Concat() and StringBuilder. We've also seen that string.Concat() creates a new string each time you concatenate, whereas StringBuilder delays this until the very end and only does it once. On the other hand: constructing a StringBuilder object, adding to it and then retrieving the result is much more verbose than a simple + operator.

You will have to decide for yourself what you consider acceptable, but I personally only use StringBuilder when I loop over something. If I can do the concatenating "manually" and it remains readable, then there must be so few strings involved that it would barely make a difference anyway (don't forget that the StringBuilder is an object that needs to be allocated as well!).


Launching your first webapp in Visual Studio Code through Gulp

I figured it’s about time I get a little more experienced with AngularJS. I want to create a website for my project VSDiagnostics and I plan on working with this technology at my internship, so it’s time to jump on the hype train.

What you’ll read here is just a quick overview of setting up gulp and Visual Studio Code to get your first AngularJS app working.

Download Visual Studio Code

This is our editor of choice. If you prefer to use something else then that’s fine — anything that can write plain text files is fine, really. I would suggest Notepad++ or Sublime Text as an alternative.

Visual Studio Code

Install NodeJS

The Node Package Manager (npm) will help us retrieve all the dependencies we're interested in. It will also be needed if you ever want to add a NodeJS backend.

Create a directory structure

I created my folder structure like this:

Write code

I’m working through a book on AngularJS myself so the little bit of AngularJS code here is just the first two examples shown in that book. It’s a simple demonstration of model-binding — a powerful feature of the AngularJS framework.

Once all this is done, I bet you’re interested to see what we just created. However if you’ll look around in the Visual Studio Code IDE, you’ll notice that there is no ‘run’ button or anything similar to it.

What there is, however, is a possibility to run a ‘task’. It might be best explained on the official documentation:

Lots of tools exist to automate tasks like building, packaging, testing or deploying software systems. Examples include Make, Ant, Gulp, Jake, Rake and MSBuild.

These tools are mostly run from the command line and automate jobs outside the inner software development loop (edit, compile, test and debug). Given their importance in the development lifecycle, it is very helpful to be able run them and analyze their results from within VS Code.

Here you can use whichever you like most (and is appropriate). I personally decided on Gulp for no particular reason: it’s a funny word and I vaguely recall using it in a class some time ago.

Running a task

In order to start a task (which in our scenario will consist of simply firing up our browser with the app we're creating), you have to search for it. In VS Code, press the key combination [Ctrl] + [Shift] + [P]. If you now search for 'run task', you will notice that it gives you the option to choose "Tasks: Run Task", but with nothing to select from the dropdown menu.

In order to create our task, we have to get started with Gulp first. With Gulp we will define our task, after which it will be available to us in the aforementioned dropdown menu.

Initializing package.json

Package.json is essentially your npm configuration. It contains metadata about your application like name, version and author, as well as dependencies, files, etc. You could do without it but that would mean our dependencies wouldn't be saved along with the application (and I do like that)!

In order to do so, issue the following command:
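
npm init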

You will now have to enter some information to setup the package.json file and after that we can get started (and finished) with Gulp.

Installing Gulp

We need two Gulp-related packages to do what we want: gulp and gulp-open. The first one provides the general Gulp environment while the second one provides a way for us to open the browser.

The following commands will install these packages:
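
npm install gulp --save-dev
npm install gulp-open --save-dev

The --save-dev flag records them as development dependencies in the package.json we just created.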

Setting up gulpfile.js

Now it’s time to create our Task. In order to do so, go to your project’s root folder (in my case: \Examples\) and create a new file called gulpfile.js.

Afterwards, add contents along these lines (adjust the path if your app.html lives elsewhere in your folder structure):
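
var gulp = require('gulp');
var open = require('gulp-open');

// 'default' is the task that runs when you execute gulp without arguments.
gulp.task('default', function () {
    gulp.src('./app.html')  // the page to launch; adjust to your structure
        .pipe(open());      // opens it with the default .html handler
});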

What it does is simple: create a new task called ‘default’ and make it open the app.html file. This will prompt you for the default program to open .html files with (if you didn’t have that set already) and subsequently open it with that program.

Launching the browser

The last step is simply executing the given task. You have two options here: either you use the aforementioned [Ctrl] + [Shift] + [P] method, which will spawn a second window with some console output, or you simply enter $ gulp in the command prompt at the location of your gulpfile.

This will give you an output that resembles this


and also opens app.html in your favorite browser.


Generating a NuGet package in a localized directory on each build

MSBuild can be a very powerful tool to handle, well, your build process. I will say up front that I haven’t got much experience with it but the little experience I do have, tells me that there are many possibilities to customize your workflow.

The situation that brought me in closer contact with MSBuild is the following: when you create a new project using the Diagnostic with Code Fix template, it will also include a .nuspec file. This file contains the metadata needed to release something on NuGet (name, description, version, release notes, dependencies, etc). The template also adds a build step that packages your nuspec file into a nupkg file, which can in turn be used to publish it onto some feed of your choice. The code in question boils down to the following (a sketch with illustrative paths, not the template's exact markup):
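
<Target Name="AfterBuild" Condition="Exists('Diagnostic.nuspec') And Exists('$(SolutionDir).nuget\NuGet.exe')">
  <Exec Command='"$(SolutionDir).nuget\NuGet.exe" pack Diagnostic.nuspec -OutputDirectory "$(ProjectDir)"' />
</Target>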

What happens is straightforward: if there is a file called Diagnostic.nuspec and NuGet.exe is packaged with our project, call NuGet.exe and pack the nuspec file into the root directory.

Local feeds

One of the features of NuGet is that it can retrieve packages from different sources and not just the official feed. This is particularly interesting because we can also point to a local folder as our NuGet source! Doing so allows us to manually verify everything works with the NuGet package by using it locally and only publishing it after we are satisfied. However I’m not interested in having to create a separate feed for each project (remember that the OutputDirectory specifies the project root folder).

One more restriction to keep in mind: I don’t want to localize my project’s build structure to work on just my machine. Anyone should be able to pull the repo and get started which means I can’t just hardcode a directory in the MSBuild script.

The problem: how do you create a local NuGet feed that gets the latest version of a NuGet package as soon as I build it, in a location of my choice while keeping other devs in mind?

The solution

The solution I came up with (through some nods on SO) is very straightforward: if a certain environment variable is set, use that variable's value as the OutputDirectory. If it's not set, use the root. This makes sure that, by default, people get the same behaviour as a clean project; only when you want custom behaviour do you have to change anything. I liked this result a lot because it meant that the project could still be pulled and built cleanly without having to do additional configuration steps (nobody likes doing that for a project you haven't even started yet).

I added an environment variable with key NUGETLOCAL and value "C:\Users\jeroen\Google Drive\NugetLocal\VSDiagnostics" to my user-level system variables. Note that I added quotes around the value!

Environment variables

All that’s left now is adapting the .csproj file to account for this alternative action:
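
Something along these lines does the trick (a sketch; MSBuild exposes environment variables as properties, so $(NUGETLOCAL) is empty when the variable is not set):

<Target Name="AfterBuild" Condition="Exists('Diagnostic.nuspec') And Exists('$(SolutionDir).nuget\NuGet.exe')">
  <!-- NUGETLOCAL already contains its own quotes, so none are added here. -->
  <Exec Condition="'$(NUGETLOCAL)' != ''"
        Command='"$(SolutionDir).nuget\NuGet.exe" pack Diagnostic.nuspec -OutputDirectory $(NUGETLOCAL)' />
  <Exec Condition="'$(NUGETLOCAL)' == ''"
        Command='"$(SolutionDir).nuget\NuGet.exe" pack Diagnostic.nuspec -OutputDirectory "$(ProjectDir)"' />
</Target>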

The code speaks for itself: if the value retrieved from the NUGETLOCAL environment variable is not empty (aka: it has a value), use that as the OutputDirectory. If it's empty, use the root.

All that’s left now is setting up the local NuGet feed. You’ll notice that I pointed it to the /Google Drive/NugetLocal folder so I’ll do exactly that under Tools -> Options -> NuGet Package Manager -> Package Sources:

Local feed settings

Using that feed is now a matter of selecting the correct package source in the NuGet package manager and all local feeds from that folder will show up:

Local feed results

Quick tip: getting a variable’s data during debugging

Ever wanted a quick overview of an element in debug mode? If you want to know a value a couple of layers deep, you soon end up with something like this:

Debug layers

This poses several annoyances:

  • You have to keep your mouse within boundaries.
  • It obstructs your code.
  • You have to scroll to see all properties.
  • You can’t do anything else.

An alternative which I very much like is ?. That's right: a simple question mark. Use it in the Immediate Window and it will print out all the data you normally get from hovering. For example, as you can see in the image below, ?_affectedProperties will show me the string representation of the list. I can also use more complicated expressions like ?_affectedProperties.ElementAt(0) to get the detailed info of deeper layers.

Immediate Window

Also worth mentioning is "Object Exporter", an extension that does this for you.


A few important Roslyn API helpers

The Roslyn API as implemented in RC2 has been out for a few months now and is likely to remain largely unchanged while they're working on the official release. I think it might be time to do a little write-up and identify the key components of that API that you will likely want to use when building your own diagnostics, syntax rewriters or anything else possible with the platform. Do note that I am approaching this with only a background in writing diagnostics, so I might be missing out on helpers from a different corner of the API.

I, too, still come across APIs I wasn't aware of before, so if you know of something that's not here, let me know.


SyntaxFactory

If you've ever tried to create your own syntax node, you'll have noticed that you can't just new() it up. Instead, we use the SyntaxFactory class which provides methods to create just about every kind of node you could imagine. Particularly interesting here are the SyntaxFactory.Parse* methods. These can take away a lot of the pain involved in manually creating syntax nodes. For example, in one of my diagnostics I wanted to create a condition that checks a certain variable and compares it to null. I could create a BinaryExpression, set its operator to SyntaxFactory.Token(SyntaxKind.ExclamationEqualsToken), create an identifier using SyntaxFactory.Identifier and eventually add the null token using SyntaxFactory.Token(SyntaxKind.NullKeyword).

Or I could just write SyntaxFactory.ParseExpression($"{identifier} != null");.

I won’t pretend there aren’t any performance implications of course but sometimes it’s hard to contain myself when I can write something really readable like this. I know, shame on me.
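
Side by side, the two routes look roughly like this (a sketch, with identifier holding the variable's name):

var identifier = "myVariable";

// The manual route: build the expression node by node.
var manual = SyntaxFactory.BinaryExpression(
    SyntaxKind.NotEqualsExpression,
    SyntaxFactory.IdentifierName(identifier),
    SyntaxFactory.LiteralExpression(SyntaxKind.NullLiteralExpression));

// The parsing route: let Roslyn build the same tree from text.
var parsed = SyntaxFactory.ParseExpression($"{identifier} != null");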


SyntaxToken

This one is closely related to a SyntaxNode but represents certain aspects of it: the SyntaxNode.Modifiers property, in the case of, say, a method, will be a list of SyntaxToken objects. These, too, are created using SyntaxFactory.Token().


SyntaxKind

This enum represents certain aspects of the syntax. Think of keywords, carriage returns, semicolons, comments, certain expressions or statements like a++, etc. You will also use it to construct certain tokens by passing it as an argument to SyntaxFactory.Token(SyntaxKind). Notice how the API is coming together? Eventually it will allow you to create a new syntax tree with a fairly fluent, very readable API!


Formatter

We all like our code properly formatted and thankfully, we can let Roslyn do that for us! There are a few approaches here: if you're using a Code Fix then all you need to do is tell Roslyn which nodes you want formatted and the Code Fix will call the formatter for you when reconstructing the tree. You can do this by calling .WithAdditionalAnnotations(Formatter.Annotation) on any node you want formatted. If you're in an environment that doesn't do this for you, simply call Formatter.FormatAsync() (or its synchronous brother). You can choose to use the annotation (which I highly recommend because of its ease-of-use) or you can specify the area to format through TextSpan objects (which each node has as a property).


DocumentEditor

This is one of those helpers that eluded me for a while. So far I have come across two major ways of creating the new syntax tree: either you change the syntax nodes themselves (though technically they return a new node with similar data, considering everything is immutable) by calling .Replace*, .Insert* or .Remove*, or you use the wonderful DocumentEditor class which takes away a lot of the pain of this process.

Certainly when you have multiple transformations to the document, this comes in really handy. If you're only changing a single syntax node then the benefits don't seem that big, but once you have more than that one node you quickly see lines of code decrease by 50% or more (and the complexity follows a similar line). Another important note: if you want to change multiple syntax nodes on the same level in the tree (for example: two methods in the same class), adapting one of them will invalidate the other one's location if you add or remove characters. This will cause problems when you rewrite the tree: if changing method 1 creates a new tree with a few more characters in that method, method 2 will try to replace the tree at the wrong location. This might sound rather vague but if you ever have this problem, you will instantly recognize it. Suffice it to say that DocumentEditor takes care of this for you.
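
A minimal sketch of the DocumentEditor flow (inside an async method; document and the method nodes are assumed to be resolved already):

var editor = await DocumentEditor.CreateAsync(document);
editor.ReplaceNode(firstMethod, updatedFirstMethod);
editor.ReplaceNode(secondMethod, updatedSecondMethod); // no stale locations to worry about
var newDocument = editor.GetChangedDocument();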


The .With* methods

You've already seen .WithAdditionalAnnotations() to, well, add additional annotations to your node, token or trivia. Keep an eye on this pattern of .With* extension methods; you might find them to be very useful. Certainly when you're constructing a new statement/expression/member/whatever that consists of multiple aspects, you'll come across these. For example, as part of my diagnostic that turns a single-line return method into an expression-bodied member (aka: int Method() => 42;), I had to take the existing method and turn it into a new one with this specific syntax. The code for this became very fluent to read through these statements:
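
It looked roughly like this (a sketch with illustrative names; returnStatement is the single return inside the original method):

var newMethod = method
    .WithBody(null) // drop the block body...
    .WithExpressionBody(SyntaxFactory.ArrowExpressionClause(returnStatement.Expression)) // ...and use => instead
    .WithSemicolonToken(SyntaxFactory.Token(SyntaxKind.SemicolonToken))
    .WithAdditionalAnnotations(Formatter.Annotation); // let Roslyn format the result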


RoslynQuoter

This isn't exactly part of the API but it is so powerful that I can't omit it. The RoslynQuoter tool developed by Kirill Osenkov is absolutely amazing to work with. If you ever want to know how the tree for a certain snippet of code looks, simply put it in the tool and you get a detailed view of how it's built up. Without it, I would have many extra hours of work trying to figure out what a tree looks like by using the debugger to inspect each node's .Parent property. Luckily, no more!

I hope this helps you get started with (or expands your knowledge of) important parts of the Roslyn API. If you're interested in reading through more complete examples, you can always take a look at VSDiagnostics.


Introducing: VSDiagnostics

I am happy to announce the first release of VSDiagnostics! This project is a group of diagnostics meant for Visual Studio 2015 (and up?) which will help the developer adhere to best practices and avoid common pitfalls.

These are a few examples of the scenarios currently supported:

If statements without braces


String.Empty instead of an empty string


ArgumentException that can use nameof()
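
For example, a hypothetical snippet this diagnostic would flag:

void SetAge(int age)
{
    if (age < 0)
        throw new ArgumentException("age"); // suggested fix: nameof(age)
}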


A catch clause that catches a NullReferenceException


Throwing an empty ArgumentException


Catching Exception without other catch clauses


And many more!

For the full list, take a look at the GitHub page. If you have a suggestion in mind or you are interested in contributing, let me know: I want this to be a community-powered project. I hope this first release already proves helpful to you and I'm eager to hear your feedback and criticism.

How do I use this?

Simply create a new project using Visual Studio 2015 RC and add the NuGet package! If you don’t find it: make sure you’re also looking in the NuGet V2 package source.


Introducing: RoslynTester

When you create a new solution using the “Diagnostics and Code Fix” template, 3 projects will be created:

  • The portable class library which contains your analyzers and code fix providers
  • A unit test project
  • A VSIX project to install them as an extension

If you look at the 2nd project you will notice that it creates a few files for you from the get-go: classes like CodeFixVerifier, DiagnosticVerifier, DiagnosticResult, etc.
These classes make it very easy to unit test your analyzers: you pass in your test scenario as a string, specify the resulting diagnostic you expect, optionally provide a transformed code snippet in case of a code fix, and that's it: you're done.

I was very pleased by what MS provided but it left me with one annoying problem: that testing code will always stay the same unless I change it myself. Often this is fine, but since Roslyn is under active development and the API can still change (and it does), this prevents me from upgrading. When you look at the version of the Roslyn binaries provided with that project template, you will notice that they are from Beta-1. At the time of writing, Roslyn has moved past that to Beta-2, RC1 and eventually RC2. Before I realized this, I took to the Roslyn issue tracker with a few issues that apparently couldn't be reproduced. It was then that I realized that I might have an outdated installation.

When I upgraded from Beta-1 to RC2 (in intermediate stages) I noticed that some breaking changes were introduced: methods being renamed, types being removed, accessibility restricted, etc. This left me the choice between diving into that code or sticking with an outdated API (with bugs). The choice wasn't very hard and I managed to get everything working with the RC2 API. However, because of the outdated project template I don't want to have to update those files manually every time (and I wouldn't want to force you to do it either)!

Therefore I present to you: RoslynTester!

This small NuGet package is exactly what you think it is: those few classes cleaned up and put inside a NuGet package. Now I (and you) can simply remove the auto-generated test files and instead reference this package to get all the testing capabilities. I will try to keep it up-to-date with the most recent Roslyn NuGet package so none of this should be a problem anymore. If I come across scenarios where I can expand on it, I will of course do that too. In case you are interested in contributing something, feel free to take a look at the GitHub page!

Sidenote: if the NuGet package doesn’t show up in your NuGet browser, go to your settings and enable the NuGet v2 package feed. I’m still figuring out NuGet and for some reason it only shows up there.


Getting started with your first diagnostic

With the release of Visual Studio 2015 RC, we also received the pretty much final implementation of the Diagnostics API. This SDK allows us to create our own diagnostics to help us write proper code that's verified against those rules in real-time: you don't have to perform the verification in a separate build step. What's more, we can combine that with a code fix: a shortcut integrated in Visual Studio that provides us a solution to what we determine to be a problem.

This might sound a little abstract but if you've been using Visual Studio (and/or ReSharper) then you know what I mean: have you ever written your class name as class rather than Class? This is a violation of the C# naming conventions and Visual Studio will warn you about it and provide a quick fix to turn it into a properly capitalized word. This is exactly the behaviour we can create, integrated seamlessly in Visual Studio.

I gave this a go exactly 1 year ago but decided against continuing because there wasn't a final version of it yet and it was really cumbersome to test. That, combined with the fact that it was hardly usable (not many people had Visual Studio 2015 yet), made me wait until it had matured and was properly supported. Luckily, it seems that time has finally come.

The setup

In order to get started you need a few things installed:

You will notice that each of these downloads is specific to the RC version, which is what I'm using at the time of writing. Make sure that you don't mix up CTP-X, RC and RTM versions since that is bound to create issues.

Creating an Analyzer

Even though the official template says "Diagnostic and CodeFix", the term "analyzer" is used at least as often as "diagnostic". I don't believe there are hard conventions on this yet so use what feels more natural. I personally prefer to append "Analyzer" to my… diagnostics.

In Visual Studio, create a new project (and a solution if needed) using the “Diagnostic with Code Fix (NuGet + VSIX)” template, found under “Extensibility”. This immediately indicates a very useful feature: you will be able to distribute your analyzers using NuGet and/or you can choose to distribute them as an installable extension instead. This makes it very easy to deploy in your environment and share it with others should you want to.

Create the project

Now on to the real work: we will create an analyzer that shows a warning when we throw an ArgumentException without passing a parameter to it. The use of this is that we make sure we never just throw such an exception without specifying what argument is being annoying in the first place.

After your template is created you will notice 3 projects in the solution: a portable class library which contains your analyzers and code fix providers, a test project and an extension project. You can ignore the latter one and we’ll focus on the first two.

When you look at your analyzer you see an entire example for you to start from. The nature of the diagnostic we will create however is such that we will analyze a syntax node, not a symbol. The code we implement is very straightforward; a sketch of how it can look (identifiers, message and category are illustrative):
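
using System.Collections.Immutable;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class EmptyArgumentExceptionAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "VSD0001", // illustrative id
        title: "Empty ArgumentException",
        messageFormat: "An ArgumentException should specify which argument it concerns",
        category: "Usage",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context) =>
        context.RegisterSyntaxNodeAction(AnalyzeNode, SyntaxKind.ThrowStatement);

    private static void AnalyzeNode(SyntaxNodeAnalysisContext context)
    {
        // 1) We only get called for throw statements thanks to the registration above.
        var throwStatement = (ThrowStatementSyntax)context.Node;

        // 2) The thrown expression has to create a new object.
        var creation = throwStatement.Expression as ObjectCreationExpressionSyntax;
        if (creation == null) { return; }

        // 3) The new object has to be of type ArgumentException. A real check would
        //    compare the full System.ArgumentException symbol rather than just the name.
        var symbol = context.SemanticModel.GetSymbolInfo(creation.Type).Symbol;
        if (symbol == null || symbol.MetadataName != "ArgumentException") { return; }

        // 4) No arguments may be passed to the constructor.
        if (creation.ArgumentList != null && creation.ArgumentList.Arguments.Any()) { return; }

        context.ReportDiagnostic(Diagnostic.Create(Rule, throwStatement.GetLocation()));
    }
}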

I've disregarded globalization because it is an extra hurdle to readability and I don't expect my project to ever become popular enough that it warrants supporting different languages. Aside from that, you'll notice that I also used a lovely expression body for the SupportedDiagnostics property.

The first thing I did was register my analyzer on a syntax node, a throw statement to be precise. This evidently means that each time a throw statement is encountered when the tree is walked through, my analyzer will execute.

The actual implementation is very straightforward:

  • Verify the node is a throw statement
  • Verify the expression creates a new object
  • Verify the new object is of type ArgumentException
  • Verify there are no arguments passed to the constructor

And that's it. If these 4 conditions are true, I believe there to be an empty ArgumentException and I report a warning at that location.

Testing the analyzer

If you now set the VSIX project as your startup project and press the big green "Start" button, a new Visual Studio instance will be launched. Use it to create a new project and you will notice that your analyzer is included. You can now let yourself loose on all sorts of scenarios involving ArgumentExceptions!

However, I wouldn't be me if I didn't look into unit testing instead. Luckily, this is very easily done with this release. In fact, it's so easy that there's really not much to look into: you create a test class that inherits from CodeFixVerifier, override GetCSharpDiagnosticAnalyzer and GetCSharpCodeFixProvider as needed, write your source code as plain text and use the helper functions VerifyCSharpDiagnostic and VerifyCSharpCodeFix to assert whether or not a diagnostic/code fix should occur at the given position. If nothing should occur you just pass in the source code as a string; if you do expect something, you pass in a DiagnosticResult.

In code (a sketch, reusing the analyzer from above):
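
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class EmptyArgumentExceptionTests : CodeFixVerifier
{
    protected override DiagnosticAnalyzer GetCSharpDiagnosticAnalyzer() =>
        new EmptyArgumentExceptionAnalyzer();

    [TestMethod]
    public void EmptyArgumentException_InvokesWarning()
    {
        var source = @"
class MyClass
{
    void Method()
    {
        throw new System.ArgumentException();
    }
}";

        var expected = new DiagnosticResult
        {
            Id = "VSD0001", // matches the analyzer sketch above
            Message = "An ArgumentException should specify which argument it concerns",
            Severity = DiagnosticSeverity.Warning,
            Locations = new[] { new DiagnosticResultLocation("Test0.cs", 6, 9) }
        };

        VerifyCSharpDiagnostic(source, expected);
    }
}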

That's how easy it now is to create your own slim ReSharper. Your code is evaluated against your rules as you type, you can write those rules yourself very easily and you can export them as needed. I will definitely port the few analyzers I created as a test last year and expand them with many others I can think of, and I encourage you to do the same (or contribute).

For more information on how to get started with these analyzers I recommend a couple of resources:


Hello Linux!

Like many other .NET developers I have been following the Build conference that's going on right now. One of its biggest announcements (so far) was the release of Visual Studio Code and the accompanying CoreCLR for Linux and Mac. It sounds nice and all, but I wanted to try this out myself. I have decided to get a Console Application working in Ubuntu 14.04, since we've all seen by now how to deploy an ASP.NET web application. While reading this post, keep in mind that I have basically never used Linux, so it's possible I jumped through hoops that shouldn't have been jumped through. In case I did, leave me a comment so I can learn from it. Note that in this blog post I will be using the Mono runtime and not the .NET Core one. At the time of writing there was only documentation available for the former; however, you can always get started with .NET Core here.

One of the things I am pleasantly surprised with is that there are several Yeoman templates available to create a solution structure. Whereas Visual Studio does that for you, it would have been a serious pain to have to do this yourself each time you create a new project in Visual Studio Code.

Without further ado, let’s get to it!

The setup

I installed Ubuntu 14.04 on an old laptop, which means it’s an entirely fresh installation. If you have already been using Linux and/or Mono then you probably know what steps you can skip.

We start by following the instructions here. You can see that we follow the ASP.NET documentation even though we're building a Console Application: the setup for either is very, very similar, differing by only about two commands.

sudo apt-key adv --keyserver --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

echo "deb wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list

sudo apt-get update

sudo apt-get install mono-complete

Afterwards it’s time to install the DNVM. For more information about the .NET Version Manager you can take a look here (Github) and here (MSDN).

curl -sSL | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/

Next up is NodeJS. This will allow us to install Yeoman and in turn generate the project templates.

sudo apt-get install nodejs

sudo apt-get install npm

One problem I had here was a naming conflict between node and nodejs, which are apparently different packages. This is solved by executing

sudo apt-get install nodejs-legacy

Creating the solution

Afterwards, create a directory to store our project in. I did this in $HOME/Documents/code/yo-aspnet.

Now that we have this we can generate our project structure by first installing yo:

sudo npm install -g yo

and subsequently the generator:

sudo npm install -g generator-aspnet

When this is done, it's time to pick the kind of project template we want to generate. Start yo with the aspnet generator:
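
yo aspnet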


and you will be prompted to select the kind of application you’re interested in. You should see a screen like this:

Choosing the solution template

Use the arrow keys to select "Console Application" and press enter. Give your project a name (in my case: "HelloWorld") and everything should now have been created:

Console solution is generated

Notice how it's a very minimal template: it only consists of a .gitignore file, a main class and the project configuration file. There is no AssemblyInfo, no bin/obj folders, no app.config, etc.

More information about the generator can be found here.

Installing Visual Studio Code

Time to get that new editor up and running!

Go to and download the zip file. Navigate to your Downloads folder and create a folder which will hold the unzipped content. I just left this in Downloads; you might as well put it elsewhere (which you probably should if you use Linux properly).

mkdir VSCode

unzip -d VSCode

cd VSCode

And start the editor with
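
./Code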


Here's one tricky thing: if you now look at Visual Studio Code, there's a good chance you're seeing something like "Cannot start Omnisharp because Mono version >=3.10.0 is required". When you look at your own installed Mono version (mono --version) you'll notice that you have 3.2.8 installed. Likewise, if you now try to execute the HelloWorld app, you will receive TypeLoadException errors.

Luckily this can be easily solved: install the mono-devel package. This will overwrite your installed Mono with version 4.0.1, which was released just yesterday, and everything will work flawlessly.

sudo apt-get install mono-devel

Executing the Console application

There’s just one last thing left to do: create a .NET execution environment and execute our app.

First create a default execution environment:

dnvm upgrade

and execute the app (from the directory where Program.cs is contained):

dnx . run

You should now see Hello World printed out!

Hello World!

How to configure a custom IdentityUser for Entity Framework

I am getting accustomed to the ASP.NET Identity framework and let me just say that I love it. No more boring hassle with user accounts: all the traditional stuff is already there. However, you'll often find yourself wanting to expand on the default IdentityUser class and add your own fields to it. This was my use case here as well, and since I couldn't find any clear instructions on how exactly this is done, I decided to dive into it especially for you! Well, maybe a little bit for me as well.

The example will be straightforward: extend the default user by adding a property that holds his date of birth and a collection of books. For this there are two simple classes, which can look roughly like this (property names are illustrative):
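
using System;
using System.Collections.Generic;
using Microsoft.AspNet.Identity.EntityFramework;

public class Book
{
    public int Id { get; set; }
    public string Title { get; set; }
    public virtual ICollection<ApplicationUser> Users { get; set; }
}

public class ApplicationUser : IdentityUser
{
    public DateTime DateOfBirth { get; set; }
    public virtual ICollection<Book> Books { get; set; }
}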

The Book class is straightforward. The ApplicationUser class isn't very complex either: inherit from IdentityUser to get the default user implementation. Furthermore there is the MyContext class, which contains two tricky aspects. Sketched against EF6:
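
using System.Data.Entity;
using Microsoft.AspNet.Identity.EntityFramework;

public class MyContext : IdentityDbContext<ApplicationUser>
{
    public DbSet<Book> Books { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Crucial: without this call the identity configuration is not applied.
        base.OnModelCreating(modelBuilder);
    }
}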

First of all: notice how we inherit from IdentityDbContext<ApplicationUser>. The specialized DbContext is important because it provides us with all the user-related data from the database, and the ApplicationUser type parameter is important because it defines the type of the DbSet<T> Users property. Before I found out there was a generic variant of the context, I was trying to make it work with the non-generic type while separating user and user information: not pretty.

The second important aspect here is base.OnModelCreating(modelBuilder). If you do not call this, the configuration defined in IdentityDbContext will not be applied. Since this isn't necessary with a plain old DbContext, I figured it worth mentioning since I, for one, typically omit this call.

Finally all there is left is demonstrating how this is used exactly. This too is straightforward and requires no special code:

Notice how I use AsyncEx by Stephen Cleary to create an asynchronous context in my console application. After this you simply create a manager around the store, which you pass your context to, and voilà: your user is now inserted and everything works perfectly.

Resulting database

Notice how the date of birth is in the same table (AspNetUsers) as all the other user-related data. The second table displays the books and the third shows the many-to-many table between users and books.

All things considered it is fairly straightforward as to how it works but there are a few tricky aspects that make you scratch your head if you’re not too familiar with the framework yet.
