Shouldly: Why would you assert any other way?

09 Oct 2016

I like to make my development life as easy as possible - and removing small irritations is a great way of doing this. Having used Shouldly in anger for a long time, I have to say I feel a little hamstrung when going back to just using NUnit’s assertions.

I have been known, on a couple of projects which use only NUnit assertions, to install Shouldly while trying to solve a test failure involving array differences, fix the test, and then remove Shouldly again!

The rest of this post goes through the different assertion models, how they differ from each other, and, eventually, why everyone should be using Shouldly!

The Most Basic

var valueOne = "Something";
var valueTwo = "Something else";

Debug.Assert(valueOne == valueTwo);
Debug.Assert(valueOne == valueTwo, $"{valueOne} should have been {valueTwo}");

This is an assertion at its most basic. It only fails if the condition is false, and you can optionally specify a second parameter with a message.

This has a couple of good points: no external dependencies are required, and it is strongly typed (your condition has to compile). The downsides are that it is not very descriptive, and it can only be used in Debug builds (or with the DEBUG constant defined), meaning a Release mode build cannot be tested with it.

This also suffers from the descriptiveness problem - the output will only say that an assertion failed, rather than anything helpful for figuring out why it failed.
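The Debug-only behaviour comes from Debug.Assert being marked with [Conditional("DEBUG")], which makes the compiler drop the calls entirely when the constant is not defined. A small sketch of the same mechanism (the Guard class is purely illustrative):

using System.Diagnostics;

public static class Guard
{
    // calls to this method are compiled out entirely unless DEBUG is defined
    [Conditional("DEBUG")]
    public static void Ensure(bool condition, string message)
    {
        Debug.Assert(condition, message);
    }
}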

NUnit’s First Attempt

var valueOne = "Something";
var valueTwo = "Something else";

Assert.AreEqual(valueOne, valueTwo);
Assert.AreEqual(valueOne, valueTwo, $"{valueOne} should have been {valueTwo}");

This improves on the Most Basic version by working in Release mode builds, and as it only depends on the test framework, it doesn’t add a dependency you didn’t already have.

There are two things I dislike about this method: it remains as undescriptive as the first method, and it adds parameter ambiguity - which of the two parameters is the expected value, and which is the value under test? You can't tell without checking the method declaration. While this is a small issue, it can cause headaches when you are trying to debug a test which has started failing, only to discover that an assertion written the wrong way around was leading you astray!
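As a quick illustration of that ambiguity (GetRetryCount is a hypothetical method which returns 3), both of the following compile, but only the first matches the (expected, actual) parameter order, so the second reports the values the wrong way around:

var retryCount = GetRetryCount();   // hypothetical - returns 3

Assert.AreEqual(5, retryCount);     // fails with roughly: Expected: 5, But was: 3
Assert.AreEqual(retryCount, 5);     // fails with roughly: Expected: 3, But was: 5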

NUnit’s Second Attempt

var valueOne = "Something";
var valueTwo = "Something else";

Assert.That(valueOne, Is.EqualTo(valueTwo));
Assert.That(valueOne, Is.EqualTo(valueTwo), $"{valueOne} should have been {valueTwo}");

This is an interesting attempt at readability. On the one hand it is very easy to read as a sentence, but it is very wordy, especially if you want a not-equals assertion: Is.Not.EqualTo(valueTwo).
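For comparison, here is the not-equals case in the constraint style next to the Shouldly equivalent (covered properly below); Shouldly's ShouldNotBe keeps the sentence-like reading without the nesting:

Assert.That(valueOne, Is.Not.EqualTo(valueTwo));

valueOne.ShouldNotBe(valueTwo);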

The biggest problem with this, however, is the complete loss of strong typing - both arguments are object. This can trip you up when testing things such as Guids - especially if one of the values has had .ToString() called on it at some point:

var id = Guid.NewGuid();
Assert.That(id.ToString(), Is.EqualTo(id));

Not only will this compile, but when the test fails, unless you are paying close attention to the output it will look like it should have passed, as the only difference is the " on either side of one of the values.

Shouldly’s Version

var valueOne = "Something";
var valueTwo = "Something else";

valueOne.ShouldBe(valueTwo, () => "Custom Message");

Finally we hit upon the Shouldly library. This assertion library not only solves the code-time issues of strong typing, parameter clarity, and wordiness, it also really improves on the descriptiveness problem.
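Picking up the earlier Guid example: because ShouldBe is generic over the value under test, mixing a string and a Guid should be rejected at compile time rather than being left to fail at test time (a sketch of the idea):

var id = Guid.NewGuid();

id.ToString().ShouldBe(id);     // does not compile - a string cannot be compared to a Guid
id.ShouldBe(Guid.NewGuid());    // compiles - both sides are Guids, and any failure message is clear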

Shouldly uses the expression being tested against to create meaningful error messages:

Assert.That(map.IndexOfValue("boo"), Is.EqualTo(2));    // -> Expected 2 but was 1

map.IndexOfValue("boo").ShouldBe(2);                    // -> map.IndexOfValue("boo") should be 2 but was 1

This is even more pronounced when you are comparing collections:

new[] { 1, 2, 3 }.ShouldBe(new[] { 1, 2, 4 });

Produces the following output:

should be [1, 2, 4] but was [1, 2, 3] difference [1, 2, *3*]

And when comparing strings, not only does it tell you they were different, but provides a visualisation of what was different:

    should be
"this is a longer test sentence"
    but was
"this is a long test sentence"
Difference     |                                |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   
               |                               \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  
Index          | ...  9    10   11   12   13   14   15   16   17   18   19   20   21   22   23   24   25   26   27   28   29   
Expected Value | ...  \s   l    o    n    g    e    r    \s   t    e    s    t    \s   s    e    n    t    e    n    c    e    
Actual Value   | ...  \s   l    o    n    g    \s   t    e    s    t    \s   s    e    n    t    e    n    c    e              


So having seen the design time experience and rich output Shouldly gives you, why would you not use it?

code, net, nunit, testing, shouldly, assert


Visualising NuGet Dependencies

12 Sep 2016

My new place of work has a lot of NuGet packages, and I wanted to understand the dependencies between them. To do this I wrote a simple shell script to find all the packages.config files on my machine, and output all the relationships in a format I could visualise.

The format for viewing I use for this is Graphviz’s dot language, and the resulting output can be pasted into WebGraphviz to view.

RESULT_FILE="" # the output file
NAME_MATCH='Microsoft\.' # leave this as a blank string if you want no filtering

echo '' > $RESULT_FILE  # clear out the file
echo 'digraph Dependencies {' >> $RESULT_FILE
echo '  rankdir=LR;' >> $RESULT_FILE # we want a left to right graph, as it's a little easier to read

# find all packages.config files, recursively beneath the path passed into the script
find $1 -iname packages.config | while read line; do

  # find any csproj file next to the packages.config
  project_path="$(dirname $line)/*.csproj"

  # check it exists (e.g. to not error on a /.nuget/packages.config path)
  if [ -f $project_path ]; then

    # find the name of the assembly
    # (our projects are not named with the company prefix, but the assemblies/packages are)
    asm_name=$(grep -oP '<RootNamespace>\K(.*)(?=<)' $project_path)

    # Ignore any tests projects (optional)
    if [[ ${line} != *"Tests"* ]]; then

      # find all lines in the packages.config where the package name has a prefix
      grep -Po "package id=\"\K($NAME_MATCH.*?)(?=\")" $line | while read package; do
        # write it to the result
        echo "  \"$asm_name\" -> \"$package\"" >> $RESULT_FILE



echo '}' >> $RESULT_FILE

To use this, you just need to call it with the path you want to visualise:

$ ./ /d/dev/projects/ledger

A note on the grep usage: I am using a non-capturing look-behind (everything before \K) and a non-capturing look-ahead (the (?=\") part), because with a 'normal' expression the parts of the match I don't care about also get output by grep. In C# I would have written the expression like this:

var packageName = Regex.Match(line, "package id=\"(.*?)\"").Groups[1].Value;

As an example, if I run this over the directory with all of my Ledger code in it, and filter out the test dependencies (e.g. remove Shouldly, NSubstitute, Xunit), you get the following dot file:

digraph Dependencies {
  "Ledger.Acceptance" -> "Newtonsoft.Json"
  "Ledger.Tests" -> "Newtonsoft.Json"
  "Ledger.Tests" -> "RabbitMQ.Client"
  "Ledger.Stores.Postgres" -> "Dapper"
  "Ledger.Stores.Postgres" -> "Ledger"
  "Ledger.Stores.Postgres" -> "Newtonsoft.Json"
  "Ledger.Stores.Postgres" -> "Npgsql"
  "Ledger.Stores.Postgres.Tests" -> "Dapper"
  "Ledger.Stores.Postgres.Tests" -> "Ledger"
  "Ledger.Stores.Postgres.Tests" -> "Ledger.Acceptance"
  "Ledger.Stores.Postgres.Tests" -> "Newtonsoft.Json"
  "Ledger.Stores.Postgres.Tests" -> "Npgsql"
  "Ledger.Stores.Fs" -> "Ledger"
  "Ledger.Stores.Fs" -> "Newtonsoft.Json"
  "Ledger.Stores.Fs.Tests" -> "Ledger"
  "Ledger.Stores.Fs.Tests" -> "Ledger.Acceptance"
  "Ledger.Stores.Fs.Tests" -> "Newtonsoft.Json"
  "Ledger.Stores.Fs.Tests" -> "structuremap"

Which renders into the following graph:

Nuget Graph

In the process of writing this, I did have to go back into the projects to find out why Ledger.Tests was referencing RabbitMQ.Client (an example of appending events to a queue), and why Ledger.Stores.Fs.Tests referenced StructureMap (it looks like I forgot to remove the reference after rewriting how the acceptance tests were set up).

The gist with all the code in it can be found here:

Hope this is useful to others too!

code, net, nuget, graphviz, dependencies


Preventing MicroService Boilerplate

17 Jul 2016

One of the downsides to microservices I have found is that I end up repeating the same blocks of code over and over for each service. Not only that, but the project setup is repetitive, as all the services use the Single Project Service and Console method.

What do we do in every service?

  • Initialise Serilog.
  • Add a Serilog sink to ElasticSearch for Kibana (but only in non-local config.)
  • Hook/Unhook the AppDomain.Current.UnhandledException handler.
  • Register/UnRegister with Consul.
  • Setup StructureMap, if using an IOC Container.
  • Run as a Console if the Environment.UserInteractive flag is true.
  • Run as a Service otherwise

The only task with any real variance each time is the setting up of StructureMap; the rest are almost identical every time.

How to solve all this repetition?

To rectify this, I created a NuGet package which encapsulates all of this logic, and allows us to create a console project with the following startup:

static void Main(string[] args)
{
	// hand control over to the library's ServiceHost here
}

This requires one class implementing the IStartup interface, and there are some optional interfaces which can be implemented too:

public class Startup : IStartup, IDisposable
{
	public Startup()
	{
		Console.WriteLine("starting up");
	}

	public void Execute(ServiceArgs service)
	{
		File.AppendAllLines(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "log.txt"), new[] { "boot!" });

		while (service.CancelRequested == false)
		{
			// do the service's work until the host requests a shutdown
		}
	}

	public void Dispose()
	{
		Console.WriteLine("shutting down");
	}
}

Optionally, the project can implement two interfaces to control Consul and ElasticSearch configuration:

public class Config : ILogConfig, IConsulRegistration
{
	public bool EnableKibana { get; }
	public Uri LoggingEndpoint { get; }

	public CatalogRegistration CreateRegistration()
	{
		return new CatalogRegistration() { Service = new AgentService
		{
			Address = "http://localhost",
			Port = 8005,
			Service = "TestService"
		}};
	}

	public CatalogDeregistration CreateDeregistration()
	{
		return new CatalogDeregistration { ServiceID = "TestService" };
	}
}

By implementing these interfaces, the ServiceHost class can use StructureMap to find the implementations (if any) at run time.

Talking of StructureMap, if we wish to configure the container in the host application, all we need to do is create a class which inherits Registry, and the ServiceHost’s StructureMap configuration will find it.
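A minimal sketch of what such a host-side registry could look like (the type names AppRegistry, IClock, and SystemClock are hypothetical - only the Registry base class and the For/Use syntax come from StructureMap):

public class AppRegistry : Registry
{
	public AppRegistry()
	{
		For<IClock>().Use<SystemClock>();
	}
}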

How do we support other tools?

Well, we could implement some kind of stage configuration step, so your startup might change to look like this:

static void Main(string[] args)
{
	ServiceHost.Stages(new LoggingStage(), new ConsulStage(), new SuperAwesomeThingStage());
}

The reason I haven't done this is that, on the whole, we tend to use the same tools for each job in every service: StructureMap for IoC, Serilog for logging, Consul for discovery. So rather than having to write some boilerplate for every service (e.g. specifying all the stages), I just bake the options into ServiceHost directly.

This means that if you want your own version of this library with different tooling support, you need to write it yourself. As a starting point, I have the code for the ServiceContainer project up on Github.

It is not difficult to create new stages for the pipeline - all the different tasks the ServiceHost can perform are implemented in a pseudo Russian-Doll model - they inherit Stage, which looks like this:

public abstract class Stage : IDisposable
{
	public IContainer Container { get; set; }

	public abstract void Execute();
	public abstract void Dispose();
}

Anything you want your stage to do before the IStartup.Execute() call is made goes in Execute(); similarly, anything to be done afterwards goes in Dispose(). For example, the ConsulStage is implemented like so:

public class ConsulStage : Stage
{
	public override void Execute()
	{
		var registration = Container.TryGetInstance<IConsulRegistration>();

		if (registration != null)
		{
			var client = new ConsulClient();
			client.Catalog.Register(registration.CreateRegistration());
		}
	}

	public override void Dispose()
	{
		var registration = Container.TryGetInstance<IConsulRegistration>();

		if (registration != null)
		{
			var client = new ConsulClient();
			client.Catalog.Deregister(registration.CreateDeregistration());
		}
	}
}

Finally you just need to add the stage to the ServiceWrapper constructor:

public ServiceWrapper(string name, Type entryPoint)
{
	// snip...

	_stages = new Stage[]
	{
		new LoggingStage(name),
		new ConsulStage()
	};
}

Get started!

That’s all there is to it! Hopefully this gives you a good starting point for de-boilerplating your microservices :)

code, net, microservices, consul, structuremap, kibana, boilerplate


Database Integrations for MicroServices

09 Jun 2016

This is a follow up post after seeing Michal Franc’s NDC talk on migrating from Monolithic architectures.

One point raised was that Database Integration points are a terrible idea - and I wholeheartedly agree. However, there are a number of situations where a Database Integration is the best or only way to achieve the end goal. The reason can be technical - say a tool does not support querying an API (looking at you, SSRS) - or cultural - the other team doesn't have the willingness, time, or power to learn how to query an API.

The most common situation is a reporting team, who either cannot query an API (e.g. because they are stuck using SSRS), or don't want to or don't have time to learn how.

There are two ways which can make a Database Integration an altogether less painful prospect, both with a common starting point: A separate login to the Database, with only readonly access to a very small set of tables and views.

Views can be used to create a representation of the service’s data in a manner which makes sense to external systems, for example de-normalising tables, or converting integer based enumerations into their string counterparts.

Tables can be used to expose a transformed version of the service’s data, for example a readmodel from an event stream.

Event Sourcing source data

For example, one of our services uses Event Sourcing. It uses projections to construct readmodels as events are stored (we use the Ledger library, with a SqlServer backend, for this). To provide a Database Integration point, we have a second set of projections which populate a set of tables specifically for external querying.

If the following event was committed to the store:

  "eventType": "phoneNumberAdded",
  "aggregateID": 231231,
  "number": "01230 232323",
  "type": "home"

The readmodel table, which is just two columns: id:int and json:varchar(max), would get updated to look like this:

id      | json
231231  | {
            "id": 231231,
            "name": "Andy Dote",
            "phones": [
              { "type": "mobile", "number": "0712345646" },
              { "type": "home", "number": "01230 232323" }
            ]
          }

The external integration table, which is a denormalised view of the data, would get updated to look like this:

id      | name      | home_phone    | mobile_phone
231231  | Andy Dote | 01230 232 323 | 07123 456 456
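A rough sketch of what that second, integration-facing projection could look like - the IntegrationProjection and PhoneNumberAdded names and the projection shape are illustrative rather than the Ledger library's actual API (as is the integration_people table), and the update uses Dapper's Execute extension:

public class IntegrationProjection
{
	private readonly IDbConnection _connection;

	public IntegrationProjection(IDbConnection connection)
	{
		_connection = connection;
	}

	public void Handle(PhoneNumberAdded e)
	{
		// flatten the event straight into the externally queried table
		_connection.Execute(
			"update integration_people set home_phone = @number where id = @id",
			new { id = e.AggregateID, number = e.Number });
	}
}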

Non-SQL Systems

While I have not needed to implement this yet, there is a plan for how to do it: a simple regular job which will pull the data from the service’s main store, transform it, and insert it into the SQL store.
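A minimal version of that job could be little more than a timer loop; everything in this sketch (IMainStore, IIntegrationStore, the five-minute interval) is illustrative:

public class IntegrationExportJob
{
	private readonly IMainStore _source;
	private readonly IIntegrationStore _destination;

	public IntegrationExportJob(IMainStore source, IIntegrationStore destination)
	{
		_source = source;
		_destination = destination;
	}

	public async Task RunAsync(CancellationToken token)
	{
		while (token.IsCancellationRequested == false)
		{
			// pull from the service's own store, transform, and write to the integration tables
			var people = await _source.GetPeopleAsync();
			await _destination.WritePeopleAsync(people);

			await Task.Delay(TimeSpan.FromMinutes(5), token);
		}
	}
}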

Relational Systems

A relational system can be done in a number of ways:

  • In the same manner as the Non-SQL system: with a periodical job
  • In a similar manner to the Event Sourced system: Updating a second table at the same time as the primary tables
  • Using SQL triggers: on insert, add a row to the integration table etc.

I wouldn't recommend the third option, as you end up with more and more logic living in larger and larger triggers. The important point for all of these methods is that the integration tables are separate from the main tables: you do not want to expose your internal implementation to external consumers.

code, net, microservices, integration, eventsourcing


CQS with Mediatr

19 Mar 2016

This article is some extra thoughts I had on API structure after reading Derek Comartin's post.

Aside from the benefits that Derek mentions (no fat repositories, thin controllers), there are a number of other advantages that this style of architecture brings.

Ease of Testing

By using Commands and Queries, you end up with some very useful seams for writing tests.
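For reference, the GetAllAddressesQuery used in the examples below is little more than a message class; its exact shape here is an assumption on my part, but the IRequest<T> marker interface is MediatR's:

public class GetAllAddressesQuery : IRequest<IEnumerable<Address>>
{
    public GetAllAddressesQuery(IPrincipal user = null)
    {
        User = user;
    }

    public IPrincipal User { get; }
}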

For Controllers

With controllers, you typically use Dependency injection to provide an instance of IMediator:

public class AddressController : ApiController
{
    private readonly IMediator _mediator;

    public AddressController(IMediator mediator)
    {
        _mediator = mediator;
    }

    public IEnumerable<Address> Get()
    {
        return _mediator.Send(new GetAllAddressesQuery(User));
    }
}

You can now test that the controller's actions return what you expect:

public void When_requesting_all_addresses()
{
  var mediator = Substitute.For<IMediator>();
  var controller = new AddressController(mediator);
  controller.User = Substitute.For<IPrincipal>();

  var result = controller.Get();

  mediator
      .Received()
      .Send(Arg.Is<GetAllAddressesQuery>(q => q.User == controller.User));
}

This is also useful when doing integration tests, as you can use Microsoft.Owin.Testing's TestServer to check that all the serialization, content negotiation etc. works correctly, while still using a substituted mediator so you have known values to test with:

public async Task Addresses_get_should_return_an_empty_json_array()
{
    var mediator = Substitute.For<IMediator>();

    var server = TestServer.Create(app =>
    {
        var api = new Startup(mediator);
        api.Configuration(app);            // assuming the usual OWIN Startup.Configuration method
    });

    var response = await server
        .CreateRequest("/api/addresses")   // the route here is assumed for the example
        .AddHeader("content-type", "application/json")
        .GetAsync();

    var json = await response.Content.ReadAsStringAsync();

    json.ShouldBe("[]");
}


For Handlers

Handlers are now isolated from the front end of your application, which means testing is a simple matter of creating an instance, passing in a message, and checking the result. For example, the GetAllAddressesQuery handler could be implemented like so:

public class GetAllAddressesQueryHandler : IRequestHandler<GetAllAddressesQuery, IEnumerable<Address>>
{
    public IEnumerable<Address> Handle(GetAllAddressesQuery message)
    {
        if (message.User == null)
            return Enumerable.Empty<Address>();

        return new[] {
            new Address { Line1 = "34 Home Road", PostCode = "BY2 9AX" }
        };
    }
}

And a test might look like this:

public void When_no_user_is_specified()
{
    var handler = new GetAllAddressesQueryHandler();
    var result = handler.Handle(new GetAllAddressesQuery());

    result.ShouldBeEmpty();
}

Multiple Front Ends

The next advantage of using Commands and Queries is that you can support multiple front ends without code duplication. This ties in very nicely with a Hexagonal architecture. For example, one of my current projects has a set of commands and queries which are used by a WebApi, a WebSocket connector, and a RabbitMQ adaptor.

This sample also makes use of RabbitHarness, which provides a small interface for sending to, listening to, and querying queues and exchanges.

public RabbitMqConnector(IMediator mediator, IRabbitConnector connector) {
    _mediator = mediator;
    _connector = connector;

    _connector.ListenTo(new QueueDefinition { Name = "AddressQueries" }, OnMessage);
}

private bool OnMessage(IBasicProperties props, GetAllAddressesQuery message)
{
    //in this case, the message sent to RabbitMQ matches the query structure
    var addresses = _mediator.Send(message);

    // reply to the queue the caller specified; SendTo stands in here for
    // whichever send method RabbitHarness actually exposes
    _connector.SendTo(
        new QueueDefinition { Name = props.ReplyTo },
        replyProps => replyProps.CorrelationID = props.CorrelationID,
        addresses);

    return true;
}

Vertical Slicing

This is a soft advantage of Commands and Queries I have found - you can have many more developers working in parallel on a project, adding commands and queries, before you start treading on each other's toes... and the only painful part is all the *.csproj merges you need to do! Your mileage may vary on this one!


Downsides

In a large project, you can end up with a lot of extra classes, which can be daunting at first - one of my current projects has around 60 IRequest and IRequestHandler implementations. As long as you follow a good naming convention, or sort them into namespaces, it is not that much of a problem.


Overall I like this pattern a lot - especially as it makes transitioning towards EventSourcing and/or full CQRS much easier.

How about you? What are your thoughts and experiences on this?

code, net, cqs, cqrs, mediatr