MediatR and Magic

07 Jan 2017

Having recently watched Greg Young’s excellent talk on 8 Lines of Code, I started thinking about how this applies to the mediator pattern, and specifically the MediatR implementation.

I have written about the advantages of CQRS with MediatR before, but having used it for a long time now, there are some parts which cause friction on a regular basis.

The problems

Discoverability

The biggest issue first. You have a controller with the following constructor:

public AddressController(IMediator mediator)
{
    _mediator = mediator;
}

What messages does it emit? What handlers are used by it? No idea without grepping for _mediator.

Where is the handler for X?

So you have a controller with a method which sends a GetAllAddressesQuery:

public class AddressController : ApiController
{
    public IEnumerable<Address> Get()
    {
        return _mediator.Send(new GetAllAddressesQuery(User));
    }
}

The fastest way to get to the handler definition is to hit Ctrl+T and type in GetAllAddressesQueryHandler. This becomes more problematic on larger codebases, where you can end up with many handlers with similar names.

What calls {command|query}Handler?

Given the following handler, what uses it?

public class GetAllAddressesQueryHandler : IRequestHandler<GetAllAddressesQuery, IEnumerable<Address>>
{
    public IEnumerable<Address> Handle(GetAllAddressesQuery message)
    {
        //...
    }
}

Here you can use Find Usages on the GetAllAddressesQuery type parameter to find what sends it, so this one isn’t so bad. The main problem is that I am often doing Find Usages on the handler itself, not the message.

Solutions

Discoverability

The team I am on at work felt this problem a lot before I joined, and had decided to roll their own mediation pipeline. It works much the same as MediatR, but rather than injecting an IMediator interface into the constructor, you inject interface(s) representing the handler(s) being used:

public AddressController(IGetAllAddressesQueryHandler getHandler, IAddAddressHandler addHandler)
{
    _getHandler = getHandler;
    _addHandler = addHandler;
}

The trade-offs made by this method are:

  • The controllers are now more tightly coupled to the handlers (Handlers are mostly used by 1 controller anyway)
  • We can’t easily do multicast messages (We almost never need to do this)
  • More types are required (the interface) for your handler (so what?)

On the whole, I think this is a pretty good trade-off to be made, we get all the discoverability we wanted, and our controllers and handlers are still testable.
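As a sketch, one of these handler interfaces and its implementation might look like the following. The types here are illustrative stand-ins, not the team's actual internal library:

```csharp
using System;
using System.Collections.Generic;

public class Address
{
    public string Line1 { get; set; }
}

public class GetAllAddressesQuery { }

// one narrow interface per handler keeps the controller's dependencies explicit
public interface IGetAllAddressesQueryHandler
{
    IEnumerable<Address> Handle(GetAllAddressesQuery message);
}

public class GetAllAddressesQueryHandler : IGetAllAddressesQueryHandler
{
    public IEnumerable<Address> Handle(GetAllAddressesQuery message)
    {
        // a real handler would query a data store; hard-coded here for brevity
        return new[] { new Address { Line1 = "1 Test Street" } };
    }
}
```

The controller then takes IGetAllAddressesQueryHandler in its constructor, so Find Usages on the interface answers both “what does this controller call?” and “what calls this handler?”.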

What calls/Where is {command|query}Handler?

This is also solved by the switch to our internal library, but we also augment the change by grouping everything into functionality groups:

Frontend
  Address
    AddressController.cs
    GetAllAddressesQuery.cs
    GetAllAddressesQueryHandler.cs
    IGetAllAddressesQueryHandler.cs
  Contact
    ContactController.cs
    ...
  Startup.cs
  project.json

I happen to prefer this structure to a folder for each role (e.g. controllers, messages, handlers), so this is not a hard change to make for me.

Magic

As Greg noted in his video, the second you take in a 3rd party library, it’s code you own (or are responsible for). The changes we have made have really just traded some 3rd party magic for some internal magic. How the handler pipeline gets constructed can be a mystery still (unless you go digging through the library), but it’s a mystery we control.

The important part to note is that we felt pain/friction with how we were working, and decided to change what trade-offs we were making.

What trade-offs are you making? Is it worth changing the deal?

code, net, cqs, cqrs, mediatr

---

Git Aliases

06 Jan 2017

Git is great, but creating some git aliases is a great way to make your usage even more efficient.

To add any of these you can either copy and paste into the [alias] section of your .gitconfig file or run git config --global alias.NAME 'COMMAND' replacing NAME with the alias to use, and COMMAND with what to run.
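For example, adding the first alias below from the command line, then asking git what it expands to:

```shell
# equivalent to adding "s = status" under [alias] in ~/.gitconfig
git config --global alias.s status

# read it back to check it took
git config --global alias.s
# prints: status
```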

So without further ado, here are the ones I have created and use on a very regular basis.

Constant usage

  • git s - an alias for git status. Have to save those 5 keypresses!

    s = status
    
  • git cm "some commit message" - shorthand for commit with a message

    cm = commit -m
    
  • git dc - diff files staged for commit

    dc = diff --cached
    
  • git scrub - deletes everything not tracked by git (git clean -dxf) except the packages and node_modules directories

    scrub = clean -dxf --exclude=packages --exclude=node_modules
    

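The effect of scrub is easiest to see in a scratch repository (the clean command is run directly here, rather than through the alias):

```shell
cd "$(mktemp -d)" && git init -q demo && cd demo
mkdir -p node_modules/leftpad bin
touch node_modules/leftpad/index.js bin/app.dll untracked.txt

# delete everything untracked or ignored, except the excluded directories
git clean -dxf --exclude=packages --exclude=node_modules

ls
# prints: node_modules
```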
Context switching, rebasing on dirty HEAD

I rebase my changes onto the current branch often, but rebasing requires a clean working tree. The following two aliases are used together, something like this: git save && git pull --rebase && git undo

  • git save - adds and commits everything in the repository, with the commit message SAVEPOINT

    save = !git add -A && git commit -m 'SAVEPOINT'
    
  • git undo - undoes the last commit, leaving everything as it was before committing. Mostly used to undo a git save call

    undo = reset HEAD~1 --mixed
    

I also use these if I need to save my work to work on a bug fix on a different branch.
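Put together, the save/undo cycle looks like this in a scratch repository (the identity config lines are only there so the commits work anywhere):

```shell
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "initial"

# the two aliases, added locally for the demo (use --global for real use)
git config alias.save '!git add -A && git commit -m "SAVEPOINT"'
git config alias.undo 'reset HEAD~1 --mixed'

echo "work in progress" > feature.txt

git save    # feature.txt is now committed with message SAVEPOINT
git undo    # the commit is gone; feature.txt is back to being untracked
```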

What have I done?

Often I want to see the commits I have pending, either against the local master, or against a remote tracking branch. These both give an output like this:

Git Pending

  • git pending - shows the commits on the current branch compared to the origin/master branch

    pending = log origin/master..HEAD --pretty=oneline --abbrev-commit --format='%Cgreen%cr:%Creset %C(auto)%h%Creset %s'
    
  • git pendingup - shows the commits on the current branch compared to its tracking branch

    pendingup = "!git log origin/\"$(git rev-parse --abbrev-ref HEAD)\"..HEAD --pretty=oneline --abbrev-commit --format='%Cgreen%cr:%Creset %C(auto)%h%Creset %s'"
    

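The moving part in pendingup is the subshell: git rev-parse --abbrev-ref HEAD resolves to the current branch name, so the alias always compares against origin/&lt;current-branch&gt;:

```shell
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "initial"
git checkout -qb feature/aliases

# the name pendingup substitutes into origin/<branch>..HEAD
git rev-parse --abbrev-ref HEAD
# prints: feature/aliases
```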
More?

I have some others not documented here, but they are in my config repo on GitHub.

code, git, environment, bash

---

Strong Type All The Configurations

06 Dec 2016

As anyone I work with can attest, I have been prattling on about strong typing everything for quite a while. One of the places I feel people don’t utilise strong typing enough is in application configuration. This manifests in a number of problems in a codebase.

The Problems

The first problem is when nothing at all is done about it, and you end up with code spattered with this:

var someUrl = new Uri(ConfigurationManager.AppSettings["RemoteService"]);

This itself causes a few problems:

  • Repeated: You have magic strings throughout your codebase
  • Consistency: Was it RemoteService or RemoteServiceUri? And was it in ConnectionStrings or AppSettings?
  • Visibility: Can you tell which classes depend on which (if any) configuration values?
  • Typing: Was it actually a URL? Or was it a DNS entry?
  • Late errors: You will only find out once that particular piece of code runs
  • Tight Coupling: Tests won’t help either, as they’ll be reading your test’s app.config instead…

Solution: Version 1

The first solution involves abstracting the ConfigurationManager behind a general interface, which can be injected into classes requiring configuration values. The interface is usually along the following lines:

public interface ISettings
{
    string GetString(string key);
    Uri GetUri(string key);
    // GetInt, GetShort, etc.
}

And having an implementation which uses the ConfigurationManager directly:

public class Settings : ISettings
{
    public string GetString(string key) => ConfigurationManager.AppSettings[key];
    public Uri GetUri(string key) => new Uri(ConfigurationManager.AppSettings[key]);
}

This solves one of the problems of direct usage of the ConfigurationManager, namely Tight Coupling. By using an interface we can now use NSubstitute or similar mocking library to disconnect tests from app.config and web.config.

It doesn’t really solve the Typing issue however, as the casting is only done when fetching the configuration value, so casting errors still only happen when that code executes. It doesn’t really solve the Visibility issue either - you can now tell whether a class requires configuration values, but you cannot tell which values it requires from outside.

The other issues, such as Repetition, Late errors and Consistency, are not addressed by this method at all.

Solution: Version 2

My preferred method of solving all of these problems is to replace direct usage of ConfigurationManager with an interface & class pair, but with the abstraction being application specific, rather than general. For example, an application might have this as the interface:

public interface IConfiguration
{
    string ApplicationName { get; }
    Uri RemoteHost { get; }
    int TimeoutSeconds { get; }
}

This would then be implemented by a concrete class:

public class Configuration : IConfiguration
{
    public string ApplicationName { get; }
    public Uri RemoteHost { get; }
    public int TimeoutSeconds { get; }

    public Configuration()
    {
        ApplicationName = ConfigurationManager.AppSettings[nameof(ApplicationName)];
        RemoteHost = new Uri(ConfigurationManager.AppSettings[nameof(RemoteHost)]);
        TimeoutSeconds = int.Parse(ConfigurationManager.AppSettings[nameof(TimeoutSeconds)]);
    }
}

This method solves all of the first listed problems:

Repeated and Consistency are solved, as the only repetition is the usage of configuration properties themselves. Visibility is solved as you can now either use “Find Usages” on a property, or you can split your configuration interface to have a specific set of properties for each class which is going to need configuration.

Typing and Late errors are solved as all properties are populated on the first creation of the class, and exceptions are thrown immediately if there are any type errors.

Tight Coupling is also solved, as you can fake the entire IConfiguration interface for testing with, or just the properties required for a given test.
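For example, a hand-rolled stub needs no mocking library at all. This is a sketch against the IConfiguration shown above (repeated here so the snippet stands alone); a test only sets the properties it cares about:

```csharp
using System;

// the application-specific interface from above
public interface IConfiguration
{
    string ApplicationName { get; }
    Uri RemoteHost { get; }
    int TimeoutSeconds { get; }
}

// a test double: settable properties, no mocking framework required
public class StubConfiguration : IConfiguration
{
    public string ApplicationName { get; set; }
    public Uri RemoteHost { get; set; }
    public int TimeoutSeconds { get; set; }
}
```

A test then just writes `new StubConfiguration { TimeoutSeconds = 1 }` and passes it to the class under test.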

The only downside is the amount of boilerplate needed in the constructor, and having to write the same code in every application.

Solution: Version 3

The third solution works exactly like the second, but uses the Stronk NuGet library to populate the configuration object. Stronk takes all the heavy lifting out of configuration reading, and works for most cases with zero extra configuration required.

public interface IConfiguration
{
    string ApplicationName { get; }
    Uri RemoteHost { get; }
    int TimeoutSeconds { get; }
}

public class Configuration : IConfiguration
{
    public string ApplicationName { get; }
    public Uri RemoteHost { get; }
    public int TimeoutSeconds { get; }

    public Configuration()
    {
        this.FromAppConfig(); //this.FromWebConfig() works too
    }
}

Stronk supports a lot of customisation. For example, if you wanted to be able to handle populating properties of type MailAddress, you can add it like so:

public Configuration()
{
    var mailConverter = new LambdaValueConverter<MailAddress>(val => new MailAddress(val));
    var options = new StronkOptions();
    options.Add(mailConverter);

    this.FromAppConfig(options);
}

You can also replace (or supplement):

  • How it detects which properties to populate
  • How to populate a property
  • How to pick a value from the configuration source for a given property
  • How to convert a value for a property
  • Where configuration is read from

A few features to come soon:

  • Additional types supported “out of the box” (such as TimeSpan and DateTime)
  • Exception policy controlling:
    • What happens on not being able to find a value in the configuration source
    • What happens on not being able to find a converter
    • What happens on a converter throwing an exception

I hope you find it useful. Stronk’s Source is available on Github, and contributions are welcome :)

code, net, strongtyping, configuration, stronk

---

Shouldly: Why would you assert any other way?

09 Oct 2016

I like to make my development life as easy as possible - and removing small irritations is a great way of doing this. Having used Shouldly in anger for a long time, I have to say I feel a little hamstrung when going back to just using NUnit’s assertions.

On a couple of projects which use only NUnit assertions, I have been known, when trying to solve a test failure involving array differences, to install Shouldly, fix the test, and then remove Shouldly again!

The rest of this post goes through the different assertion models, and how they differ from each other and, eventually, why everyone should be using Shouldly!

The Most Basic

var valueOne = "Something";
var valueTwo = "Something else";

Debug.Assert(valueOne == valueTwo);
Debug.Assert(valueOne == valueTwo, $"{valueOne} should have been {valueTwo}");

This is an assertion at its most basic. It will only fire if the condition is false, and optionally you can specify a second parameter with a message.

This has a couple of good points: no external dependencies are required, and it is strongly typed (your condition has to compile). The downsides are that it is not very descriptive, and that it only runs in Debug builds (or with the DEBUG constant defined), meaning a Release mode build cannot be tested with it.

On the descriptiveness problem: the output will only contain a message saying an assertion failed, rather than anything helpful in figuring out why it failed.

NUnit’s First Attempt

var valueOne = "Something";
var valueTwo = "Something else";

Assert.AreEqual(valueOne, valueTwo);
Assert.AreEqual(valueOne, valueTwo, $"{valueOne} should have been {valueTwo}");

This improves on the Most Basic version by working in Release mode builds, and as it only depends on the test framework, it doesn’t add a dependency you didn’t already have.

There are two things I dislike about this method: it remains as undescriptive as the first method, and it adds the problem of parameter ambiguity: Which of the two parameters is the expected value, and which is the value under test? You can’t tell without checking the method declaration. While this is a small issue, it can cause headaches when you are trying to debug a test which has started failing, only to discover the assertion being the wrong way around was leading you astray!

NUnit’s Second Attempt

var valueOne = "Something";
var valueTwo = "Something else";

Assert.That(valueOne, Is.EqualTo(valueTwo));
Assert.That(valueOne, Is.EqualTo(valueTwo), $"{valueOne} should have been {valueTwo}");

This is an interesting attempt at readability. It is very easy to read as a sentence, but it is also very wordy, especially if you want a not-equals: Is.Not.EqualTo(valueTwo).

The biggest problem with this, however, is the complete loss of strong typing - both arguments are object. This can trip you up when testing things such as Guids - especially if one of the values gets .ToString() called on it at some point:

var id = Guid.NewGuid();
Assert.That(id.ToString(), Is.EqualTo(id));

Not only will this compile, but when the test fails, unless you are paying close attention to the output, it will look like it should’ve passed, as the only difference is the " on either side of one of the values.

Shouldly’s Version

var valueOne = "Something";
var valueTwo = "Something else";

valueOne.ShouldBe(valueTwo);
valueOne.ShouldBe(valueTwo, () => "Custom Message");

Finally we hit upon the Shouldly library. This assertion library not only solves the code-time issues of strong typing, parameter clarity, and wordiness, it also greatly improves the descriptiveness of failures.

Shouldly uses the expression being tested against to create meaningful error messages:

//nunit
Assert.That(map.IndexOfValue("boo"), Is.EqualTo(2));    // -> Expected 2 but was 1

//shouldly
map.IndexOfValue("boo").ShouldBe(2);                    // -> map.IndexOfValue("boo") should be 2 but was 1

This is even more pronounced when you are comparing collections:

new[] { 1, 2, 3 }.ShouldBe(new[] { 1, 2, 4 });

Produces the following output

should be
    [1, 2, 4]
but was
    [1, 2, 3]
difference
    [1, 2, *3*]

And when comparing strings, not only does it tell you they were different, but provides a visualisation of what was different:

input
    should be
"this is a longer test sentence"
    but was
"this is a long test sentence"
    difference
Difference     |                                |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |   
               |                               \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  \|/  
Index          | ...  9    10   11   12   13   14   15   16   17   18   19   20   21   22   23   24   25   26   27   28   29   
Expected Value | ...  \s   l    o    n    g    e    r    \s   t    e    s    t    \s   s    e    n    t    e    n    c    e    
Actual Value   | ...  \s   l    o    n    g    \s   t    e    s    t    \s   s    e    n    t    e    n    c    e              

Finishing

So, having seen the design-time experience and the rich output Shouldly gives you, why would you not use it?

code, net, nunit, testing, shouldly, assert

---

Visualising NuGet Dependencies

12 Sep 2016

My new place of work has a lot of NuGet packages, and I wanted to understand the dependencies between them. To do this I wrote a simple shell script to find all the packages.config files on my machine, and output all the relationships in a format I could visualise.

The format for viewing I use for this is Graphviz’s dot language, and the resulting output can be pasted into WebGraphviz to view.

RESULT_FILE="graph.dot" # the output file
NAME_MATCH='Microsoft\.' # leave this as a blank string if you want no filtering

echo '' > $RESULT_FILE  # clear out the file
echo 'digraph Dependencies {' >> $RESULT_FILE
echo '  rankdir=LR;' >> $RESULT_FILE # we want a left to right graph, as it's a little easier to read

# find all packages.config files, recursively beneath the path passed into the script
find $1 -iname packages.config | while read line; do

  # find any csproj file next to the packages.config
  project_path="$(dirname $line)/*.csproj"

  # check it exists (e.g. to not error on a /.nuget/packages.config path)
  if [ -f $project_path ]; then

    # find the name of the assembly
    # (our projects are not named with the company prefix, but the assemblies/packages are)
    asm_name=$(grep -oP '<RootNamespace>\K(.*)(?=<)' $project_path)

    # Ignore any tests projects (optional)
    if [[ ${line} != *"Tests"* ]]; then

      # find all lines in the packages.config where the package name has a prefix
      grep -Po "package id=\"\K($NAME_MATCH.*?)(?=\")" $line | while read package; do
        # write it to the result
        echo "  \"$asm_name\" -> \"$package\"" >> $RESULT_FILE
      done

    fi
  fi

done

echo '}' >> $RESULT_FILE

To use this, you just need to call it with the path you want to visualise:

$ ./graph.sh /d/dev/projects/ledger

Note on the grep usage: I am using a non-capturing look-behind (everything before \K) and a non-capturing look-ahead (the (?=\") part), because with a ‘normal’ expression, the parts of the match I don’t care about also get output by grep. In C# I would have written the expression like this:

var packageName = Regex.Match(line, "package id=\"(.*?)\"").Groups[1].Value;
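To see what the look-arounds buy, here is the same kind of package line run through grep both ways (this assumes GNU grep, as -P is a GNU extension, and the sample line is made up for illustration):

```shell
line='<package id="Microsoft.Owin" version="3.0.1" />'

# a 'normal' expression prints the whole match, including the scaffolding
echo "$line" | grep -oP 'package id="Microsoft\..*?"'
# prints: package id="Microsoft.Owin"

# \K discards everything matched so far; (?=\") requires but excludes the closing quote
echo "$line" | grep -oP 'package id="\K(Microsoft\..*?)(?=\")'
# prints: Microsoft.Owin
```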

As an example, if I run this over my directory with all of the Ledger code in it, and filter out test dependencies (e.g. remove Shouldly, NSubstitute, Xunit), you get the following dot file:


digraph Dependencies {
  rankdir=LR;
  "Ledger.Acceptance" -> "Newtonsoft.Json"
  "Ledger.Tests" -> "Newtonsoft.Json"
  "Ledger.Tests" -> "RabbitMQ.Client"
  "Ledger.Stores.Postgres" -> "Dapper"
  "Ledger.Stores.Postgres" -> "Ledger"
  "Ledger.Stores.Postgres" -> "Newtonsoft.Json"
  "Ledger.Stores.Postgres" -> "Npgsql"
  "Ledger.Stores.Postgres.Tests" -> "Dapper"
  "Ledger.Stores.Postgres.Tests" -> "Ledger"
  "Ledger.Stores.Postgres.Tests" -> "Ledger.Acceptance"
  "Ledger.Stores.Postgres.Tests" -> "Newtonsoft.Json"
  "Ledger.Stores.Postgres.Tests" -> "Npgsql"
  "Ledger.Stores.Fs" -> "Ledger"
  "Ledger.Stores.Fs" -> "Newtonsoft.Json"
  "Ledger.Stores.Fs.Tests" -> "Ledger"
  "Ledger.Stores.Fs.Tests" -> "Ledger.Acceptance"
  "Ledger.Stores.Fs.Tests" -> "Newtonsoft.Json"
  "Ledger.Stores.Fs.Tests" -> "structuremap"
}

Which renders into the following graph:

Nuget Graph

In the process of writing this, I did have to go back into the projects and find out why Ledger.Tests was referencing RabbitMQ.Client (an example of appending events to a queue) and why Ledger.Stores.Fs.Tests referenced StructureMap (it looks like I forgot to remove the reference after rewriting how the acceptance tests were set up).

The gist with all the code in can be found here: graph.sh.

Hope this is useful to others too!

code, net, nuget, graphviz, dependencies

---