Composite Decorators with StructureMap

04 Oct 2017

While I was developing my Crispin project, I ended up needing to create a bunch of implementations of a single interface, and then use all those implementations at once (for metrics logging).

The interface looks like so:

public interface IStatisticsWriter
{
    Task WriteCount(string format, params object[] parameters);
}

And we have a few implementations already:

  • LoggingStatisticsWriter - writes to an ILogger instance
  • StatsdStatisticsWriter - pushes metrics to StatsD
  • InternalStatisticsWriter - aggregates metrics for exposing via Crispin’s api

To use all of these together, I created a fourth implementation called CompositeStatisticsWriter (a name I made up, which apparently matches the Gang of Four definition of a composite!)

public class CompositeStatisticsWriter : IStatisticsWriter
{
    private readonly IStatisticsWriter[] _writers;

    public CompositeStatisticsWriter(IEnumerable<IStatisticsWriter> writers)
    {
        _writers = writers.ToArray();
    }

    public async Task WriteCount(string format, params object[] parameters)
    {
        await Task.WhenAll(_writers
            .Select(writer => writer.WriteCount(format, parameters))
            .ToArray());
    }
}

The problem with doing this is that StructureMap throws an error about a bi-directional dependency:

StructureMap.Building.StructureMapBuildException : Bi-directional dependency relationship detected!
Check the StructureMap stacktrace below:
1.) Instance of Crispin.Infrastructure.Statistics.IStatisticsWriter (Crispin.Infrastructure.Statistics.CompositeStatisticsWriter)
2.) All registered children for IEnumerable<IStatisticsWriter>
3.) Instance of IEnumerable<IStatisticsWriter>
4.) new CompositeStatisticsWriter(*Default of IEnumerable<IStatisticsWriter>*)
5.) Crispin.Infrastructure.Statistics.CompositeStatisticsWriter
6.) Instance of Crispin.Infrastructure.Statistics.IStatisticsWriter (Crispin.Infrastructure.Statistics.CompositeStatisticsWriter)
7.) Container.GetInstance<Crispin.Infrastructure.Statistics.IStatisticsWriter>()

After attempting to solve this myself in a few different ways (you can even watch the stream of my attempts), I asked in the StructureMap gitter chat room, and received this answer:

This has come up a couple times, and yeah, you’ll either need a custom convention or a policy that adds the other ITest’s to the instance for CompositeTest as inline dependencies so it doesn’t try to make Composite a dependency of itself – Jeremy D. Miller

Finally, Babu Annamalai provided a simple implementation when I got stuck (again).

The result is the creation of a custom convention for registering the composite, which provides all the implementations I want it to wrap:

public class CompositeDecorator<TComposite, TDependents> : IRegistrationConvention
    where TComposite : TDependents
{
    public void ScanTypes(TypeSet types, Registry registry)
    {
        var dependents = types
            .FindTypes(TypeClassification.Concretes)
            .Where(t => t.CanBeCastTo<TDependents>() && t.HasConstructors())
            .Where(t => t != typeof(TComposite))
            .ToList();

        registry
            .For<TDependents>()
            .Use<TComposite>()
            .EnumerableOf<TDependents>()
            .Contains(x => dependents.ForEach(t => x.Type(t)));
    }
}

To use this, the StructureMap configuration changes from this:

public CrispinRestRegistry()
{
    Scan(a =>
    {
        a.AssemblyContainingType<Toggle>();
        a.WithDefaultConventions();
        a.AddAllTypesOf<IStatisticsWriter>();
    });

    var store = BuildStorage();

    For<IStorage>().Use(store);
    For<IStatisticsWriter>().Use<CompositeStatisticsWriter>();
}

To this version:

public CrispinRestRegistry()
{
    Scan(a =>
    {
        a.AssemblyContainingType<Toggle>();
        a.WithDefaultConventions();
        a.Convention<CompositeDecorator<CompositeStatisticsWriter, IStatisticsWriter>>();
    });

    var store = BuildStorage();
    For<IStorage>().Use(store);
}

And now everything works, and I have a Pull Request open on StructureMap’s repo with an update to the documentation about this.

Hopefully this helps someone else too!

code, structuremap, di, ioc

---

Integration Testing with Dotnet Core, Docker and RabbitMQ

02 Oct 2017

When building libraries, it’s a good idea to have not only a large suite of Unit Tests, but also a suite of Integration Tests.

For one of my libraries (RabbitHarness) I have a set of tests which check it behaves as expected against a real instance of RabbitMQ. Ideally these tests will always be run, but sometimes RabbitMQ just isn’t available, such as when running on AppVeyor builds, or if I haven’t started my local RabbitMQ Docker container.

Skipping tests if RabbitMQ is not available

First off, I prevent the tests from running if RabbitMQ is not available by using a custom XUnit FactAttribute:

public class RequiresRabbitFactAttribute : FactAttribute
{
	private static readonly Lazy<bool> IsAvailable = new Lazy<bool>(() =>
	{
		var factory = new ConnectionFactory { HostName = "localhost", RequestedConnectionTimeout = 1000 };

		try
		{
			using (var connection = factory.CreateConnection())
				return connection.IsOpen;
		}
		catch (Exception)
		{
			return false;
		}
	});

	public override string Skip
	{
		get { return IsAvailable.Value ? "" : "RabbitMQ is not available";  }
		set { /* nothing */ }
	}
}

This attribute will try connecting to a RabbitMQ instance on localhost once per test run, and will cause any test marked with it to be skipped if RabbitMQ is not available.
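
Using the attribute is then just a matter of swapping it in where you would use Fact. The test below is an illustrative sketch, not one of RabbitHarness’s actual tests:

```csharp
public class ConnectionTests
{
	[RequiresRabbitFact]
	public void When_connecting_to_the_broker()
	{
		// this test only runs when RabbitMQ is reachable on localhost;
		// otherwise it shows up as skipped in the test output
	}
}
```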

Build Script & Docker

I decided the build script should start a RabbitMQ container and use that for the tests, but I didn’t want to re-use my standard RabbitMQ instance, which I use for all kinds of things and may well be broken at any given time.

As my build script is just a bash script, I can check if the docker command is available, and then start a container if it is (relying on the assumption that if docker is available, I can start a container).

if [ -x "$(command -v docker)" ]; then
  CONTAINER=$(docker run -d --rm -p 5672:5672 rabbitmq:3.6.11-alpine)
  echo "Started RabbitMQ container: $CONTAINER"
fi

If docker is available, we start a new container. I use rabbitmq:3.6.11-alpine as it is a tiny image with no frills, and start it with the -d and --rm flags, which start the container in detached mode (i.e. the docker run command returns instantly) and delete the container when it is stopped, taking care of clean up for us! I only bother binding the main data connection port (5672), as that is all we are going to be using. Finally, the container’s ID, which is returned by the docker run command, is stored in the CONTAINER variable.

I recommend putting this step as the very first part of your build script, as it gives the container time to start RabbitMQ and become ready for connections while the rest of your build is running. Otherwise I found I needed to put a sleep 5 command in afterwards to pause the script for a short time.
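
A more reliable alternative to a fixed sleep is to poll the port until it accepts connections. This is a sketch: wait_for_port is my own helper (it uses bash’s /dev/tcp redirection so it doesn’t depend on netcat), not part of the RabbitHarness build script:

```shell
# Poll a TCP port until it accepts connections, or give up after N attempts.
wait_for_port() {
  local host=$1 port=$2 retries=${3:-30}

  for _ in $(seq 1 "$retries"); do
    # the probe runs in a subshell, so the file descriptor closes itself
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done

  echo "gave up waiting for $host:$port" >&2
  return 1
}

# e.g. wait_for_port localhost 5672
```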

The script then continues on with the normal build process:

dotnet restore "$NAME.sln"
dotnet build "$NAME.sln" --configuration $MODE

find . -iname "*.Tests.csproj" -type f -exec dotnet test "{}" --configuration $MODE \;
dotnet pack ./src/$NAME --configuration $MODE --output ../../.build

Once this is all done, I have another check that docker exists, and stop the container we started earlier, by using the container ID in CONTAINER:

if [ -x "$(command -v docker)" ]; then
  docker stop $CONTAINER
fi

And that’s it! You can see the full build script for RabbitHarness here.

The only problem with this script is that if you try to start a RabbitMQ container while you already have one running, the command will fail. The build should succeed anyway, as the running instance of RabbitMQ will work for the tests, and the docker stop command will just output that it can’t find a container with a blank ID.
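
One small guard avoids the blank-ID stop: only run docker stop when CONTAINER is non-empty. The extra -n test is my addition; the rest matches the script above:

```shell
# only stop the container if we actually started one earlier
if [ -x "$(command -v docker)" ] && [ -n "$CONTAINER" ]; then
  docker stop "$CONTAINER"
fi
```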

I think I will be using this technique more to help provide isolation for builds - the Microsoft/mssql-server-linux containers might be very useful for some of our work codebases (which do work against the Linux instances of MSSQL, even if they weren’t designed to!)

code, dotnetcore, rabbitmq, docker, testing

---

Implementing Custom Aspnet Core ModelBinders

22 Sep 2017

This post is a summary of a stream I did last night where I implemented all of this. If you want to watch me grumble my way through it, it’s available on YouTube here.

In my Crispin project, I wanted the ability to support loading Toggles by both name and ID, for all operations. As I use a mediator to send messages from my controllers to the handlers in the domain, this meant that I had to either:

  • create separate request types for loading by name and loading by id
  • have both an ID and a Name property on each request type

I didn’t like the sound of either of these as both involve more typing than I want to do, and the second variant has the added downside of causing a lot of if statements in the handlers, as you have to work out which is set before loading. Not to mention the duplication of the load toggle logic in every handler.

The solution I came up with was to use some inheritance, a static factory, some method hiding, and a custom IModelBinder.

ToggleLocator

I started off by having an abstract base class called ToggleLocator. To start with, it just has two static methods for creating an instance of ToggleLocator:

public abstract class ToggleLocator
{
	public static ToggleLocator Create(Guid toggleID) => new ToggleLocatorByID(toggleID);
	public static ToggleLocator Create(string toggleName) => new ToggleLocatorByName(toggleName);
}

As this is going to be used in both Query handlers and Command handlers, I need to be able to load the Toggle (the EventSourced AggregateRoot) and the ToggleView (the projected current state of the AggregateRoot). So we add two abstract methods to ToggleLocator:

internal abstract ToggleView LocateView(IStorageSession session);
internal abstract Toggle LocateAggregate(IStorageSession session);

Note that not only are these two methods abstract, they are also internal - we don’t want anything outside the domain to know about how a toggle is loaded. I considered using a privately implemented interface to do this method hiding, but didn’t see the point, as I can accomplish the same with the internal methods.

We can now write two implementations of ToggleLocator. First up is ToggleLocatorByID, which is very straightforward to implement; we use the ID to load the AggregateRoot directly, and the AllToggles view can be queried by ID to fetch the view version too.

public class ToggleLocatorByID : ToggleLocator
{
	private readonly ToggleID _toggleID;

	public ToggleLocatorByID(ToggleID toggleID)
	{
		_toggleID = toggleID;
	}

	internal override ToggleView LocateView(IStorageSession session) => session
		.LoadProjection<AllToggles>()
		.Toggles
		.SingleOrDefault(view => view.ID == _toggleID);

	internal override Toggle LocateAggregate(IStorageSession session) => session
		.LoadAggregate<Toggle>(_toggleID);
}

The more interesting class to implement is ToggleLocatorByName, as it needs to load an AggregateRoot by name, something which is not directly supported. To do this, we fetch the ToggleView first, and then use its ID property to load the Toggle:

public class ToggleLocatorByName : ToggleLocator
{
	private readonly string _toggleName;

	public ToggleLocatorByName(string toggleName)
	{
		_toggleName = toggleName;
	}

	internal override ToggleView LocateView(IStorageSession session) => session
		.LoadProjection<AllToggles>()
		.Toggles
		.SingleOrDefault(t => t.Name.Equals(_toggleName, StringComparison.OrdinalIgnoreCase));

	internal override Toggle LocateAggregate(IStorageSession session)
	{
		var view = LocateView(session);

		return view != null
			? session.LoadAggregate<Toggle>(view.ID)
			: null;
	}
}

All this means that the handlers have no conditionals for loading; they just call the relevant Locate method:

private Task<UpdateToggleTagsResponse> ModifyTags(ToggleLocator locator, Action<Toggle> modify)
{
	using (var session = _storage.BeginSession())
	{
		var toggle = locator.LocateAggregate(session);
		//or
		var view  = locator.LocateView(session);
		//...
	}
}

And in the controllers, we have separate action methods for each route:

[Route("name/{toggleName}/tags/{tagName}")]
[HttpPut]
public async Task<IActionResult> PutTag(string toggleName, string tagName)
{
	var request = new AddToggleTagRequest(ToggleLocator.Create(toggleName), tagName);
	var response = await _mediator.Send(request);

	return new JsonResult(response.Tags);
}

[Route("id/{toggleID}/tags/{tagName}")]
[HttpPut]
public async Task<IActionResult> PutTag(Guid toggleID, string tagName)
{
	var request = new AddToggleTagRequest(ToggleLocator.Create(ToggleID.Parse(toggleID)), tagName);
	var response = await _mediator.Send(request);

	return new JsonResult(response.Tags);
}

But that is still more duplication than I would like, so let’s see if we can resolve this with a custom IModelBinder.

Custom IModelBinder for ToggleLocator

To make a custom model binder, we need to implement two interfaces: IModelBinderProvider and IModelBinder. I am not sure why IModelBinderProvider exists to be perfectly honest, but you need it, and as it is doing nothing particularly interesting, I decided to implement both interfaces in the one class, and just return this from IModelBinderProvider.GetBinder:

public class ToggleLocatorBinder : IModelBinderProvider
{
	public IModelBinder GetBinder(ModelBinderProviderContext context)
	{
		if (context.Metadata.ModelType == typeof(ToggleLocator))
			return this;

		return null;
	}
}

We can then implement the second interface, IModelBinder. Here we check (again) that the parameter is a ToggleLocator, then fetch the value which came from the route (or querystring, thanks to the ValueProvider property).

All I need to do here is try to parse the value as a Guid. If it parses successfully, we create a ToggleLocatorByID instance; otherwise we create a ToggleLocatorByName instance.

public class ToggleLocatorBinder : IModelBinderProvider, IModelBinder
{
	public Task BindModelAsync(ModelBindingContext bindingContext)
	{
		if (bindingContext.ModelType != typeof(ToggleLocator))
			return Task.CompletedTask;

		var value = bindingContext.ValueProvider.GetValue(bindingContext.FieldName);
		var guid = Guid.Empty;

		var locator = Guid.TryParse(value.FirstValue, out guid)
			? ToggleLocator.Create(ToggleID.Parse(guid))
			: ToggleLocator.Create(value.FirstValue);

		bindingContext.Result = ModelBindingResult.Success(locator);

		return Task.CompletedTask;
	}
}

We add this to our MVC registration code at the beginning of the ModelBinderProviders collection, as MVC will use the first binder which supports the target type, and there is a binder somewhere in that collection which will handle anything which inherits from object…

services.AddMvc(options =>
{
	options.ModelBinderProviders.Insert(0, new ToggleLocatorBinder());
});

Now we can reduce our action methods down to one which handles both routes:

[Route("id/{id}/tags/{tagName}")]
[Route("name/{id}/tags/{tagName}")]
[HttpPut]
public async Task<IActionResult> PutTag(ToggleLocator id, string tagName)
{
	var request = new AddToggleTagRequest(id, tagName);
	var response = await _mediator.Send(request);

	return new JsonResult(response.Tags);
}

Much better, no duplication, and no (obvious) if statements!

design, code, aspnetcore

---

Testing Containers or Test Behaviour, Not Implementation

17 Sep 2017

The trouble with testing containers is that usually the test ends up very tightly coupled to the implementation.

Let’s see an example. If we start off with an interface and implementation of a “cache”, which in this case is just going to store a single string value.

public interface ICache
{
    string Value { get; set; }
}

public class Cache : ICache
{
    public string Value { get; set; }
}

We then setup our container (StructureMap in this case) to return the same instance of the cache whenever an ICache is requested:

var container = new Container(_ =>
{
    _.For<ICache>().Use<Cache>().Singleton();
});

The following test is fairly typical of how this behaviour gets verified - it just checks that the container returns the same instance both times:

var first = container.GetInstance<ICache>();
var second = container.GetInstance<ICache>();

first.ShouldBe(second);

But this is a very brittle test, as it assumes that ICache will always be a singleton. However, in the future we might add a decorator, or switch the cache to a totally different style of implementation which isn’t singleton based.

For example, if we were to include a decorator class, which just logs reads and writes to the console:

public class LoggingCache : ICache
{
    private readonly Cache _backingCache;

    public LoggingCache(Cache backingCache)
    {
        _backingCache = backingCache;
    }

    public string Value
    {
        get
        {
            Console.WriteLine("Value fetched");
            return _backingCache.Value;
        }
        set
        {
            Console.Write($"Value changed from {_backingCache.Value} to {value}");
            _backingCache.Value = value;
        }
    }
}

Which will change our container registration:

var container = new Container(_ => {
    _.ForSingletonOf<Cache>();
    _.For<ICache>().Use<LoggingCache>();
});

The test will now fail, or need changing to match the new implementation. This shows two things:

  • Tests are tightly coupled to the implementation
  • Tests are testing the implementation, not the intent.

Testing intent, not implementation

Instead of checking whether we get the same class instances back from the container, it would make more sense to check that the classes behave as expected. For my “super stupid cache” example, this could take the following form:

var first = container.GetInstance<ICache>();
var second = container.GetInstance<ICache>();

first.Value = "testing";
second.Value.ShouldBe("testing");

Not only does this test validate the behaviour of the classes, but it is far less brittle - we can change what the container returns entirely for ICache, as long as it behaves the same.

But what do you think? How do you go about testing behaviour?

design, code, structuremap, testing

---

Repositories Revisited (and why CQRS is better)

09 Sep 2017

TLDR: I still don’t like Repositories!

Recently I had a discussion with a commenter on my The problems with, and solutions to Repositories post, and felt it was worth expanding on how I don’t use repositories.

My applications tend to use the mediator pattern to keep things decoupled (using the Mediatr library), and this means I end up with “handler” classes which process messages; they load something from storage, call domain methods, and then write the results back to storage, possibly returning some or all of the data.

For example you could implement a handler to update the tags on a toggle class like so:

public class UpdateToggleTagsHandler : IAsyncRequestHandler<UpdateToggleTagsRequest, UpdateToggleTagsResponse>
{
    private readonly GetToggleQuery _getToggle;
    private readonly SaveToggleCommand _saveToggle;

    public UpdateToggleTagsHandler(GetToggleQuery getToggle, SaveToggleCommand saveToggle)
    {
        _getToggle = getToggle;
        _saveToggle = saveToggle;
    }

    public async Task<UpdateToggleTagsResponse> Handle(UpdateToggleTagsRequest message)
    {
        var toggle = await _getToggle.Execute(message.ToggleID);

        toggle.AddTag(message.Tag);

        await _saveToggle.Execute(toggle);

        return new UpdateToggleTagsResponse
        {
            Tags = toggle.Tags.ToArray()
        };
    }
}

Note how we use constructor injection to get a single command and a single query, and that the business logic is contained within the Toggle class itself, not the Handler.

By depending on commands and queries rather than using a repository, we can see at a glance what the UpdateToggleTagsHandler requires in the way of data, rather than having to pick through the code and figure out which of 20 methods on a repository is actually being called.

The actual domain classes (in this case, the Toggle class) know nothing of storage concerns. As I use EventSourcing a lot, the domain classes just need a few methods to facilitate storage: applying events, fetching pending events, and clearing pending events. For non EventSourced classes, I tend to use the Memento pattern: each class implements two methods, one to load from a plain object, one to write to the same plain object.
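
As a rough sketch of what that Memento pattern looks like in practice (the class and method names here are illustrative, not Crispin’s actual API):

```csharp
// a plain object which gets serialised to storage
public class ToggleMemento
{
	public Guid ID { get; set; }
	public string Name { get; set; }
}

public class Toggle
{
	private Guid _id;
	private string _name;

	// write the current state onto the plain object
	internal void WriteTo(ToggleMemento memento)
	{
		memento.ID = _id;
		memento.Name = _name;
	}

	// restore state from a previously stored plain object
	internal void LoadFrom(ToggleMemento memento)
	{
		_id = memento.ID;
		_name = memento.Name;
	}
}
```

The domain class stays free of storage concerns; only the plain memento object ever touches the persistence layer.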

If your handler starts needing many commands or many queries passed in, it’s a pretty good indication that your design has a weakness which will probably need refactoring. This is harder to notice when using repositories as you might still only have a single constructor parameter, but be calling tens of methods on it.

Hopefully this provides a bit more reasoning behind my dislike of repositories, and how I try to implement alternatives.

design, code, cqrs, architecture

---