CQS with Mediatr

19 Mar 2016

This article contains some extra thoughts I had on API structure after reading Derek Comartin's post.

Aside from the benefits that Derek mentions (no fat repositories, thin controllers), there are a number of other advantages that this style of architecture brings.

Ease of Testing

By using Commands and Queries, you end up with some very useful seams for writing tests.

For Controllers

With controllers, you typically use dependency injection to provide an instance of IMediator:

public class AddressController : ApiController
{
    private readonly IMediator _mediator;

    public AddressController(IMediator mediator)
    {
        _mediator = mediator;
    }

    public IEnumerable<Address> Get()
    {
        return _mediator.Send(new GetAllAddressesQuery(User));
    }
}

You can now test that the controller's actions return what you expect:

[Fact]
public void When_requesting_all_addresses()
{
  var mediator = Substitute.For<IMediator>();
  var controller = new AddressController(mediator);
  controller.User = Substitute.For<IPrincipal>();

  var result = controller.Get();

  mediator
      .Received(1)
      .Send(Arg.Is<GetAllAddressesQuery>(q => q.User == controller.User));
}

This is also useful when doing integration tests, as you can use Microsoft.Owin.Testing.TestServer to check that all the serialization, content negotiation etc. work correctly, while still using a substituted mediator so you have known values to test with:

[Fact]
public async Task Addresses_get_should_return_an_empty_json_array()
{
    var mediator = Substitute.For<IMediator>();
    mediator.Send(Arg.Any<GetAllAddressesQuery>()).Returns(Enumerable.Empty<Address>());

    var server = TestServer.Create(app =>
    {
        var api = new Startup(mediator);
        api.Configuration(app);
    });

    var response = await server
        .CreateRequest("/api/address")
        .AddHeader("content-type", "application/json")
        .GetAsync();

    var json = await response.Content.ReadAsStringAsync();

    json.ShouldBe("[]");
}

For Handlers

Handlers are now isolated from the front end of your application, which means testing is a simple matter of creating an instance, passing in a message, and checking the result. For example, the GetAllAddressesQuery handler could be implemented like so:

public class GetAllAddressesQueryHandler : IRequestHandler<GetAllAddressesQuery, IEnumerable<Address>>
{
    public IEnumerable<Address> Handle(GetAllAddressesQuery message)
    {
        if (message.User == null)
            return Enumerable.Empty<Address>();

        return new[] {
            new Address { Line1 = "34 Home Road", PostCode = "BY2 9AX" }
        };
    }
}

And a test might look like this:

[Fact]
public void When_no_user_is_specified()
{
    var handler = new GetAllAddressesQueryHandler();
    var result = handler.Handle(new GetAllAddressesQuery());

    result.ShouldBeEmpty();
}
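
The GetAllAddressesQuery itself is not shown above, but based on how it is used it is just a small message class implementing Mediatr's IRequest interface. A minimal sketch (the optional constructor parameter is an assumption, chosen so it covers both the controller usage and the handler test above) could be:

public class GetAllAddressesQuery : IRequest<IEnumerable<Address>>
{
    // user is optional so the handler test can construct the query without one
    public GetAllAddressesQuery(IPrincipal user = null)
    {
        User = user;
    }

    public IPrincipal User { get; private set; }
}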

Multiple Front Ends

The next advantage of using Commands and Queries is that you can support multiple frontends without code duplication. This ties in very nicely with a Hexagonal architecture. For example, one of my current projects has a set of commands and queries, which are used by a WebApi, a WebSocket connector, and a RabbitMQ adaptor.

This sample also makes use of RabbitHarness, which provides a small interface for easy sending, listening and querying of queues and exchanges.

public class RabbitMqConnector
{
    private readonly IMediator _mediator;
    private readonly IRabbitConnector _connector;

    public RabbitMqConnector(IMediator mediator, IRabbitConnector connector)
    {
        _mediator = mediator;
        _connector = connector;

        _connector.ListenTo(new QueueDefinition { Name = "AddressQueries" }, OnMessage);
    }

    private bool OnMessage(IBasicProperties props, GetAllAddressesQuery message)
    {
        //in this case, the message sent to RabbitMQ matches the query structure
        var addresses = _mediator.Send(message);

        _connector.SendTo(
            new QueueDefinition { Name = props.ReplyTo },
            replyProps => replyProps.CorrelationId = props.CorrelationId,
            addresses
        );

        return true;
    }
}

Vertical Slicing

This is a soft advantage of Commands and Queries I have found - you can have many more developers working in parallel on a project, adding commands and queries etc., before you start treading on each other's toes... and the only painful part is all the *.csproj merges you need to do! Your mileage may vary on this one!

Disadvantages

In a large project, you can end up with a lot of extra classes, which can be daunting at first - one of my current projects has around 60 IRequest and IRequestHandler implementations. As long as you follow a good naming convention, or sort them into namespaces, it is not that much of a problem.

Overall

Overall I like this pattern a lot - especially as it makes transitioning towards EventSourcing and/or full CQRS much easier.

How about you? What are your thoughts and experiences on this?

code, net, cqs, cqrs, mediatr

---

RabbitMQ integration tests in XUnit

18 Mar 2016

Quite a number of my projects involve talking to RabbitMQ, and to help check things work as expected, I often have a number of integration tests which talk to a local RabbitMQ instance.

While this is fine for tests being run locally, it does cause problems on the build servers - we don't want to install RabbitMQ on them, and we don't typically want the build to be dependent on RabbitMQ.

To solve this I created a replacement FactAttribute which can check if RabbitMQ is available, and skip tests if it is not.

This attribute works with a single host, and only checks that the host is actually reachable on its first use - the result is cached for the rest of the test run.

public class RequiresRabbitFactAttribute : FactAttribute
{
  private static bool? _isAvailable;

  public RequiresRabbitFactAttribute(string host)
  {
    if (_isAvailable.HasValue == false)
      _isAvailable = CheckHost(host);

    if (_isAvailable == false)
      Skip = $"RabbitMQ is not available on {host}.";
  }

  private static bool CheckHost(string host)
  {
    var factory = new ConnectionFactory
    {
      HostName = host,
      RequestedConnectionTimeout = 1000
    };

    try
    {
      using (var connection = factory.CreateConnection())
      {
        return connection.IsOpen;
      }
    }
    catch (Exception)
    {
      return false;
    }
  }
}

I was planning on using a dictionary, keyed by host to store the availability, but realized that I always use the same host throughout a test suite.

The reason for passing the host name in via the ctor rather than using a constant is that this usually resides within a generic "rabbitmq helpers" type assembly, and is used in multiple projects.
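
For example, a test in one of those projects might end up looking something like this (the host name and queue name here are just placeholders):

public class QueueTests
{
  [RequiresRabbitFact("localhost")]
  public void When_declaring_a_queue()
  {
    var factory = new ConnectionFactory { HostName = "localhost" };

    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
      // declare a throw-away queue and assert against it as normal
      var queue = channel.QueueDeclare("test-queue", durable: false, exclusive: true, autoDelete: true, arguments: null);

      queue.QueueName.ShouldBe("test-queue");
    }
  }
}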

code, net, rabbitmq, xunit

---

Generating AssemblyInfo files with Gulp

19 Nov 2015

When changing a project's build script over to Gulpjs, I ran into a problem with one step - creating an AssemblyInfo.cs file.

My projects have their version number in the package.json file; at build time I read that, pull in some information from the build server, and write it all to an AssemblyVersion.cs file. This file is not tracked by git, and I don't want it showing up as a modification if you run the build script locally.

The problem is that the gulp-dotnet-assembly-info package doesn't support generation of files, only updating. To get around this I used the gulp-rename package to read a template file, and generate the non-tracked AssemblyVersion.cs file.

Steps

First, create an AssemblyVersion.base file, and save it somewhere in your repository. I usually put it next to the gulpfile, or in the project's Properties directory, depending on whether the project has multiple assemblies or not. This file can be added and tracked by git - it won't get changed.

using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

[assembly: AssemblyVersion("0.0.0")]
[assembly: AssemblyFileVersion("0.0.0")]
[assembly: AssemblyDescription("Build: 0, Commit Sha: 0")]

Next, install the two gulp modules and require them in your gulpfile:

npm install gulp-rename --save
npm install gulp-dotnet-assembly-info --save

var rename = require('gulp-rename');
var assemblyInfo = require('gulp-dotnet-assembly-info');

In the gulp file, read the package.json file and the environment variables. I do this once at the beginning of my gulpfile and use the config all over the place.

var fs = require('fs');
var project = JSON.parse(fs.readFileSync("./package.json"));

var config = {
  name: project.name,
  version: project.version,
  commit: process.env.APPVEYOR_REPO_COMMIT || "0",
  buildNumber: process.env.APPVEYOR_BUILD_VERSION || "0",
}

Then add a task to create a new AssemblyVersion.cs file. Change the src parameter to match where you saved the AssemblyVersion.base file.

gulp.task('version', function() {
  return gulp
    .src(config.name + '/Properties/AssemblyVersion.base')
    .pipe(rename("AssemblyVersion.cs"))
    .pipe(assemblyInfo({
      version: config.version,
      fileVersion: config.version,
      description: "Build: " +  config.buildNumber + ", Sha: " + config.commit
    }))
    .pipe(gulp.dest('./' + config.name + '/Properties'));
});
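
For reference, after the task runs, the generated AssemblyVersion.cs ends up looking something like this (the version, build number and sha here are made up):

using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

[assembly: AssemblyVersion("1.2.3")]
[assembly: AssemblyFileVersion("1.2.3")]
[assembly: AssemblyDescription("Build: 45, Sha: 9f2c1d3")]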

Don't forget to reference the AssemblyVersion.cs file in your csproj!

You can see a full gulpfile using this technique here: Magistrate gulpfile.

code, net, gulp

---

Posting PlainText to Asp WebApi

21 Sep 2015

Recently I have been writing a WebApi project which needs to accept plaintext via the body of a PUT request, so I did the logical thing of using the FromBodyAttribute:

public HttpStatusCode PutKv([FromBody]string content, string keyGreedy)
{
  return HttpStatusCode.OK;
}

Which didn't work, with the useful error message of "Unsupported media type."

It turns out that to bind a simple type (such as a string) with the FromBody attribute, you have to prefix the body of your request with an =. As I am emulating another API's interface, this is not an option, so I set about figuring out how to override this requirement.

In the end I discovered that providing a new MediaTypeFormatter which handles plaintext is the answer:

public class PlainTextMediaTypeFormatter : MediaTypeFormatter
{
  public PlainTextMediaTypeFormatter()
  {
    SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/plain"));
  }

  public override Task<object> ReadFromStreamAsync(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger)
  {
    var source = new TaskCompletionSource<object>();

    try
    {
      using (var memoryStream = new MemoryStream())
      {
        readStream.CopyTo(memoryStream);
        var text = Encoding.UTF8.GetString(memoryStream.ToArray());
        source.SetResult(text);
      }
    }
    catch (Exception e)
    {
      source.SetException(e);
    }

    return source.Task;
  }

  public override Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, System.Net.TransportContext transportContext, System.Threading.CancellationToken cancellationToken)
  {
    var bytes = Encoding.UTF8.GetBytes(value.ToString());
    return writeStream.WriteAsync(bytes, 0, bytes.Length, cancellationToken);
  }

  public override bool CanReadType(Type type)
  {
    return type == typeof(string);
  }

  public override bool CanWriteType(Type type)
  {
    return type == typeof(string);
  }
}

This can then be added to the HttpConfiguration's Formatters collection:

public static class WebApiConfig
{
  public static void Register(HttpConfiguration http)
  {
    http.Formatters.Add(new PlainTextMediaTypeFormatter());
  }
}
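
With the formatter registered, the PutKv action can receive the raw body directly. As a rough sketch of the difference from the client's point of view (the URL and value here are made up):

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class PlainTextClientExample
{
  public static async Task SendAsync()
  {
    var client = new HttpClient();

    // without the formatter, [FromBody] on a simple type needs a form-encoded body prefixed with '='
    var formEncoded = new StringContent("=some value", Encoding.UTF8, "application/x-www-form-urlencoded");
    await client.PutAsync("http://localhost/api/kv/some/key", formEncoded);

    // with PlainTextMediaTypeFormatter added, the body can just be sent as plain text
    var plainText = new StringContent("some value", Encoding.UTF8, "text/plain");
    await client.PutAsync("http://localhost/api/kv/some/key", plainText);
  }
}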

It really seems to me like something which should be supplied out of the box with WebApi, but at least it wasn't as complicated to implement as I was expecting :)

code, net, webapi

---

Running pre-compiled microservices in Docker with Mono

15 Sep 2015

Last time we went through creating a Dockerfile for a microservice, with the service being compiled on creation of the container image, using xbuild.

However, we might not want to compile the application when creating the container image, and instead use an existing build output (e.g. one created by a build server).

Our original Dockerfile was this:

FROM mono:3.10-onbuild
RUN apt-get update && apt-get install mono-4.0-service -y

CMD [ "mono-service",  "./MicroServiceDemo.exe", "--no-daemon" ]
EXPOSE 12345

We only need to make a few modifications to use a pre-compiled application:

FROM mono:3.10.0
RUN apt-get update && apt-get install mono-4.0-service -y

RUN mkdir -p /usr/src/app
COPY . /usr/src/app
WORKDIR /usr/src/app

CMD [ "mono-service",  "./MicroServiceDemo.exe", "--no-daemon" ]
EXPOSE 12345

Aside from changing the base image to mono:3.10.0, the only changes made are to add the following lines:

RUN mkdir -p /usr/src/app
COPY . /usr/src/app
WORKDIR /usr/src/app

These lines create a new directory for our application, copy the contents of the current directory (i.e. the path you specify when you type docker build -t servicedemo .) into it, and make that directory our working directory.

You can now build the image and run a container with the same commands as last time:

docker build -t servicedemo .
docker run -d -p 12345:12345 --name demo servicedemo

There is a demo project for all of this on my github: DockerMonoDemo.

design, code, microservices, docker, mono

---