Serilog LogContext with StructureMap and SimpleInjector

28 Jul 2017

This article has been updated after feedback from .Net Junkie (Godfather of SimpleInjector). I now have a working SimpleInjector implementation of this, and am very appreciative of him for taking the time to help me :)

Serilog is one of the libraries I use on a regular basis, and while it is great at logging, it encourages a pattern in our codebase that I am less happy about. Take the following snippet for example:

public class Something
{
    private static readonly ILogger Log = Serilog.Log.ForContext(typeof(Something));
}

There are two things I don’t like about this. The first is the static field access: we have tests which assert on log content for disallowed information, or to include a correlation id, etc. Having a static field means that if tests run in parallel, we end up with flaky tests due to multiple log messages being written. The second thing I don’t like is less about the line itself, and more about its repetition throughout the codebase. Nearly every class which does logging has the same line, with only the type parameter changed.

I set out to see if I could remedy both problems at once.

Fixing the Static Field

The first fix is to inject the logger in via a constructor argument, which will allow tests to use their own version of the logger:

public class Something
{
    private readonly ILogger _log;

    public Something(ILogger logger)
    {
        _log = logger.ForContext(typeof(Something));
    }
}

That was easy! Now on to the hard part; removing the repeated .ForContext call.

Fixing the ForContext Repetition

Most (if not all) of the applications I build use a dependency injection container to build objects. In my opinion there are only two containers worth considering in the .NET space: StructureMap and SimpleInjector. If you like convention-based registration, use StructureMap. If you want a safety net that prevents and detects common misconfigurations, use SimpleInjector.


We can use the same tests to verify the behaviour of both the StructureMap and SimpleInjector implementations. We have a couple of test classes, and an interface to allow for more generic testing:

private interface ILogOwner
{
    ILogger Logger { get; }
}

private class Something : ILogOwner
{
    public ILogger Logger { get; }

    public Something(ILogger logger)
    {
        Logger = logger;
    }
}

private class Everything : ILogOwner
{
    public ILogger Logger { get; }

    public Everything(ILogger logger)
    {
        Logger = logger;
    }
}

And then a single parameterised test method for verification:

public class Tests
{
    private readonly Container _container;

    public Tests()
    {
        Log.Logger = new LoggerConfiguration().CreateLogger();

        // _container = new ...
    }

    [Theory]
    [InlineData(typeof(Something))]
    [InlineData(typeof(Everything))]
    public void Types_get_their_own_context(Type type)
    {
        var instance = (ILogOwner)_container.GetInstance(type);
        var context = GetContextFromLogger(instance);

        Assert.Equal(type.FullName, context);
    }

    private static string GetContextFromLogger(ILogOwner owner)
    {
        var logEvent = CreateLogEvent();
        owner.Logger.Write(logEvent);

        return logEvent.Properties["SourceContext"].ToString().Trim('"');
    }

    private static LogEvent CreateLogEvent() => new LogEvent(
        DateTimeOffset.Now,
        LogEventLevel.Information,
        null,
        new MessageTemplate("", Enumerable.Empty<MessageTemplateToken>()),
        Enumerable.Empty<LogEventProperty>());
}


The StructureMap initialisation just requires a single line change to use the construction context when creating a logger:

_container = new Container(_ =>
{
    _.Scan(a =>
    {
        // assembly scanning configuration elided
    });

    // original:
    // _.For<ILogger>().Use(context => Log.Logger);

    // contextual
    _.For<ILogger>().Use(context => Log.ForContext(context.ParentType));
});


SimpleInjector does a lot of verification of your container configuration, and as such deals mostly with types rather than instances, or with types which have multiple mappings, as we are doing here. This makes it slightly harder to support the behaviour we had with StructureMap, but not impossible. A huge thanks to .Net Junkie for assisting with this!

First we need to create an implementation of IDependencyInjectionBehavior, which will handle our ILogger type requests, and pass all other types requests to the standard implementation:

class SerilogContextualLoggerInjectionBehavior : IDependencyInjectionBehavior
{
    private readonly IDependencyInjectionBehavior _original;
    private readonly Container _container;

    public SerilogContextualLoggerInjectionBehavior(ContainerOptions options)
    {
        _original = options.DependencyInjectionBehavior;
        _container = options.Container;
    }

    public void Verify(InjectionConsumerInfo consumer) => _original.Verify(consumer);

    public InstanceProducer GetInstanceProducer(InjectionConsumerInfo i, bool t) =>
        i.Target.TargetType == typeof(ILogger)
            ? GetLoggerInstanceProducer(i.ImplementationType)
            : _original.GetInstanceProducer(i, t);

    private InstanceProducer<ILogger> GetLoggerInstanceProducer(Type type) =>
        Lifestyle.Transient.CreateProducer(() => Log.ForContext(type), _container);
}

This can then be set in our container setup:

_container = new Container();
_container.Options.DependencyInjectionBehavior = new SerilogContextualLoggerInjectionBehavior(_container.Options);


And now our tests pass!


Thanks to this container usage, I no longer have .ForContext(typeof(Something)) calls scattered throughout my codebases.

Hopefully this shows how taking away even some of the little tasks makes life easier: I no longer have to remember to call .ForContext in each class, and don’t need tests validating it is done in each class (I have one test in my container configuration tests which validates this behaviour instead).

net, code, structuremap, simpleinjector, di, ioc


Getting Things Done

15 Jul 2017

I have been trying to actually be productive in my evenings and weekends, but I find I often end up not getting as much done as I feel I could. I end up browsing imgur, reading slashdot, reddit, twitter, etc., rather than reading books, writing, or doing anything else.

This first point doesn’t fit in anywhere else; it’s a tip I saw somewhere about keeping a house clean (I think):

If it takes less than 2 minutes to do, do it immediately

This has helped me a lot in keeping my work areas tidier (e.g. taking empty tea cups back to the sink), but I find I am applying it while working too. Notice a spelling mistake? Fix it and open a pull request. Notice packages are out of date on a project? Update them and open a pull request. Notice a method could be refactored to be clearer? Refactor and… you get the picture. Lots of little improvements add up. Katrina Owen does a great talk about Therapeutic Refactoring, and a lot of what she says applies to fixing small issues.

So here are some of the things which I have found help me to get things done.

A Good Environment

Being comfortable is super important. A slightly uncomfortable chair will niggle away at the back of your mind, and cause you to fidget. Try a standing desk if you find yourself getting restless sitting down all day.

If you can control the temperature of the room you are in (without causing office drama/war over it), do so! There is nothing like the constant discomfort of being too hot to distract you. Oh, and if you want the room hotter, but someone else wants it colder, let them have the coolness: you can put on more layers to stay warm, but they’re probably not allowed to take any more off by that point!

Lighting is another important part, and not just in making sure you are not straining your eyes, or coping with screen glare. One of my previous work locations had inset lights in the ceiling, but no diffusers underneath them, so in the corner of my eye all day was a bright halogen-type light. It was amazing how some days the light drove me crazy, and other days I barely noticed it.

A Good Virtual Environment

How often are you part way through a task, and you switch to reddit/twitter/slashdot/whatever to have a quick look? I never want to shut all my standard browsing tabs, but when I am working, I don’t want to be distracted by them either.

Luckily, Windows 10 finally added Virtual Desktop support, which I use to create separate areas for different tasks. For example, the laptop I am typing on at the moment has three Virtual Desktops:

  1. General/day to day: chrome, spotify, etc
  2. Blog writing: atom, git bash, chrome (with only a tab of my local blog instance open)
  3. Current development project, so a couple of git bash windows, atom, rider, chrome with AWS tabs open

By doing this, when I get briefly distracted and tab to chrome… there are only task-related tabs open, and I just keep going with the current task.


Something to listen to which won’t distract is also important. Game and film soundtracks are amazing for this, as they are designed to be immersive, but not to distract you from what is going on. Generally there are no words to them, so you don’t end up singing along either. Personally, I like using:


I find timeboxing tasks helps me a great deal. The method I like is called Pomodoro, and involves doing 25-minute tasks with a 5-minute break after each, with every fourth task followed by a longer break. I tend to use 20-minute timers for this, and allow myself to keep working a little if I wish to finish a paragraph or similar.

Setting tasks up for success helps a great deal too. I find getting started on writing something very difficult; this blog post, for example, has been rattling around in my head for at least a week. To help write it, I started by putting a list of bullet-point ideas into my drafts folder. When I sit down to write the post, I can start by expanding a bullet point or two, and by the time that is done, I am in the writing zone.

And on

Hopefully all these little techniques will become habit soon, and I will find more along the way I am sure.



Terraform, Kinesis Streams, Lambda and IAM problems

12 Jul 2017

I hit a problem recently with Terraform when trying to hook up a Lambda trigger to a Kinesis stream. Both the lambda and the stream were created successfully by Terraform, but the trigger would just stay stuck on “creating…” for at least 5 minutes, before I got bored of waiting and killed the process. Several attempts all had the same issue.

The code looked something along the lines of this:

data "archive_file" "consumer_lambda" {
  type = "zip"
  source_dir = "./js/consumer"
  output_path = "./build/"
}

resource "aws_lambda_function" "kinesis_consumer" {
  filename = "${data.archive_file.consumer_lambda.output_path}"
  function_name = "kinesis_consumer"
  role = "${aws_iam_role.consumer_role.arn}"
  handler = "index.handler"
  runtime = "nodejs6.10"
  source_code_hash = "${base64sha256(file("${data.archive_file.consumer_lambda.output_path}"))}"
  timeout = 300 # 5 mins
}

resource "aws_kinesis_stream" "replay_stream" {
  name = "replay_stream"
  shard_count = 1
}

resource "aws_lambda_event_source_mapping" "kinesis_replay_lambda" {
  event_source_arn = "${aws_kinesis_stream.replay_stream.arn}"
  function_name = "${aws_lambda_function.kinesis_consumer.arn}"
  starting_position = "TRIM_HORIZON"
}

resource "aws_iam_role" "consumer_role" {
  name = "consumer_role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": [""]
      },
      "Effect": "Allow"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "consumer_role_policy" {
  name = "consumer_role_policy"
  role = "${}"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1493060054000",
      "Effect": "Allow",
      "Action": ["lambda:InvokeAsync", "lambda:InvokeFunction"],
      "Resource": ["arn:aws:lambda:*:*:*"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject*", "s3:PutObject*"],
      "Resource": ["arn:aws:s3:::*"]
    }
  ]
}
EOF
}
I decided to try creating the trigger manually in AWS, which gave me the following error:

There was an error creating the trigger: Cannot access stream arn:aws:kinesis:eu-west-1:586732038447:stream/test. Please ensure the role can perform the GetRecords, GetShardIterator, DescribeStream, and ListStreams Actions on your stream in IAM.

All I had to do to fix this was to change my consumer_role_policy to include the relevant permissions:

    {
      "Effect": "Allow",
      "Action": [
        "kinesis:GetRecords",
        "kinesis:GetShardIterator",
        "kinesis:DescribeStream",
        "kinesis:ListStreams"
      ],
      "Resource": "arn:aws:kinesis:*:*:*"
    }


Two takeaways from this:

  • Terraform could do with better errors - preferably in nice red text telling me I am doing things wrong!
  • AWS told me exactly what was needed - good error messages meant no need to spend hours googling which permissions were required.
code, aws, terraform, s3


S3 Multi-File upload with Terraform

23 Apr 2017

Hosting a static website with S3 is really easy, especially from terraform:

First off, we want a public-readable S3 bucket policy, but we want to apply this only to one specific bucket. To achieve that we can use Terraform’s template_file data block to merge in a value:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::${bucket_name}/*"
      ]
    }
  ]
}

As you can see the interpolation syntax is pretty much the same as how you use variables in terraform itself. Next we define a template_file to do the transformation. As the bucket name is going to be used many times, we extract that into a variable block also:

variable "bucket" {
  default = "examplebucket"
}

data "template_file" "s3_public_policy" {
  template = "${file("policies/s3-public.json")}"
  vars {
    bucket_name = "${var.bucket}"
  }
}
Next we want to create the S3 bucket and set it to be a static website, which we can do using the website sub block. For added usefulness, we will also define an output to show the website url on the command line:

resource "aws_s3_bucket" "static_site" {
  bucket = "${var.bucket}"
  acl = "public-read"
  policy = "${data.template_file.s3_public_policy.rendered}"

  website {
    index_document = "index.html"
  }
}

output "url" {
  value = "${aws_s3_bucket.static_site.bucket}.s3-website-${var.region}"

Single File Upload

If you just want one file in the website (say the index.html file), then you can add the following block. Just make sure the key property matches the index_document name in the aws_s3_bucket block.

resource "aws_s3_bucket_object" "index" {
  bucket = "${aws_s3_bucket.static_site.bucket}"
  key = "index.html"
  source = "src/index.html"
  content_type = "text/html"
  etag = "${md5(file("src/index.html"))}"
}
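As a side note, the `etag` attribute works because S3’s ETag for a simple (non-multipart) upload is the MD5 hex digest of the object, which is what Terraform’s `md5(file(...))` produces. A shell sketch of the same computation (the file path and content are illustrative):

```shell
# md5 hex digest of a file's contents, mirroring md5(file(...))
printf 'hello' > /tmp/index.html          # stand-in file
md5sum /tmp/index.html | cut -d' ' -f1    # prints 5d41402abc4b2a76b9719d911017c592
```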

Multi File Upload

Most websites need more than one file to be useful, and while we could write out an aws_s3_bucket_object block for every file, that seems like a lot of effort. Other options include manually uploading the files to S3, or using the aws cli to do it. While both methods work, they’re error prone - you need to specify the content_type for each file for them to load properly, and you can’t change this property once a file is uploaded.

To get around this, I add one more variable to my main terraform file, and generate a second file with all the aws_s3_bucket_object blocks in I need.

The added variable is a lookup for mime types:

variable "mime_types" {
  default = {
    htm = "text/html"
    html = "text/html"
    css = "text/css"
    js = "application/javascript"
    map = "application/javascript"
    json = "application/json"

I then create a shell script which will write a new file containing a aws_s3_bucket_object block for each file in the src directory:

#! /bin/sh

SRC=src            # directory containing the website files
TF_FILE=files.tf   # generated terraform file (name is illustrative)
COUNT=0

echo '' > $TF_FILE

find $SRC -iname '*.*' | while read path; do

    cat >> $TF_FILE << EOM

resource "aws_s3_bucket_object" "file_$COUNT" {
  bucket = "\${aws_s3_bucket.static_site.bucket}"
  key = "${path#$SRC}"
  source = "$path"
  content_type = "\${lookup(var.mime_types, "${path##*.}")}"
  etag = "\${md5(file("$path"))}"
}
EOM

    COUNT=$(expr $COUNT + 1)
done

Now when I want to publish a static site, I just have to make sure I run ./ once before my terraform plan and terraform apply calls.
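The script relies on two POSIX parameter expansions worth calling out: `${path#$SRC}` removes the source directory prefix to form the S3 key, and `${path##*.}` keeps only the file extension for the mime type lookup. A quick sketch with an illustrative path:

```shell
SRC=src
path="src/css/site.css"

echo "${path#$SRC}"   # shortest leading match of $SRC removed: /css/site.css
echo "${path##*.}"    # longest prefix up to the last dot removed: css
```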


This technique has one major drawback: it doesn’t work well with updating an existing S3 bucket. It won’t remove files which are no longer in the terraform files, and can’t detect file moves.

However, if you’re happy with a call to terraform destroy before applying, this will work fine. I use it for a number of test sites which I don’t tend to leave online very long, and for scripted aws infrastructure that I give out to other people so they can run their own copy.

code, aws, terraform, s3


Don't write Frameworks, write Libraries

16 Apr 2017

Programmers have a fascination with writing frameworks. There are many problems with writing frameworks:


Frameworks are opinionated, and follow their author’s opinions on how things should be done, such as application structure, configuration, and methodology. The problem is that not everyone will agree with the author or their framework’s opinions. Even if they really like part of how the framework works, they might not like another part, or might not be able to rewrite their application to take advantage of it.


The level of configuration available in a framework is almost never right. Not only are there too few or too many configuration options, but how the configuration is done can cause issues too. Some developers love conventions, others prefer explicit configuration.


Frameworks run the risk of not solving the right problem, or of missing the problem entirely due to how long the framework took to implement. This is compounded by the point at which a framework gets started, which is often well before the general case is even recognised. Writing a framework before writing your project is almost certain to produce a framework which either isn’t suitable for the project, or isn’t suitable for any other projects.

What about a library or two?

If you want a higher chance at success, reduce your scope and write a library.

A library is usually a small unit of functionality which does one thing and does it well (sound like microservices or Bounded Contexts much?). This gives it a higher chance of success, as the library’s opinions affect a smaller portion of people’s applications, and it won’t dictate their entire app structure. People can opt in to using the libraries they like, rather than taking on all the baggage which comes with a framework.

But I really want to write a framework

Resist, if you can! Perhaps a framework will evolve from your software, perhaps not. What I have found to be a better path is to create libraries which work on their own, but also work well with each other. This can be more difficult, but it gives you the ability to release libraries as they are completed, rather than waiting for an entire framework to be “done”.

Some examples

These are some libraries I have written which solve small problems in an isolated manner:

  • Stronk - A library to populate strongly typed configuration objects.
  • FileSystem - Provides a FileSystem abstraction, with decorators and an InMemory FileSystem implementation.
  • Finite - A finite state machine library.
  • Conifer - Strongly typed, convention-based routing for WebAPI, with route lookup abilities.

So why not write some libraries?

architecture, code