Edge.js for Embedded Webuis

04 Aug 2014

At work we have a number of Windows services, each of which has a lot of stats it could expose. Currently the only way to interrogate them is through the logfiles and from any notifications we receive.

I have been toying with the idea of hosting a website in-process which would give a simple dashboard ui and access to a live view of the log file. The idea first struck me when I was experimenting with FubuMvc, as they have an EmbeddedFubuMvcServer, which is very easy to use:

FubuMvcPackageFacility.PhysicalRootPath = @"Backend\";

using (var server = EmbeddedFubuMvcServer.For<EmbeddedBackend>(FubuMvcPackageFacility.PhysicalRootPath))
{

    Console.WriteLine("Some long running process, with a web-backend on :5500");

    var p = server.Services.GetInstance<IProcessor>();

    var t = new Task(p.Start);
    t.Start();

    Console.ReadKey();
}

But while I like this, FubuMvc embedded seems like overkill.

Wouldn't it be nice if we could host an expressjs app inside our process? They are very lightweight, and getting one set up involves almost no code (especially if you use the express command-line tool).

Enter Edge.js

The Edge.js project provides an in-process bridge between the .net and node.js worlds, and allows for communication between the two...

Steps:

  • Create a new application (eg: ServiceWithEdge)

  • Create a subdirectory for the webui in your application's root (eg, next to the csproj file)

    • ServiceWithEdge\ServiceWithEdge\webui
  • If you don't have express-generator installed, get it:

    • npm install -g express-generator
  • cd to your webui directory, and create an express application:

    • express - there are some options if you want, see the guide
  • In visual studio, include the webui directory

    • Mark all files as content and copy if newer
  • Add a new js file in your webui root:

var options;

exports.set = function (m) {
    options = m;
};

exports.getModel = function (modelName, action) {

    options.getModel(modelName, function (error, result) {

        if (error) throw error;

        action(result);
    });

};
  • Add the edge.js package:

    • PM> install-package edge.js
  • The following function will run the webui, and inject a callback for getting models from .net

private static void RunWebui(ModelStore store)
{
    var func = Edge.Func(@"
        var app = require('../webui/app');
        var com = require('../webui/communicator');

        app.set('port', process.env.PORT || 3000);

        var server = app.listen(app.get('port'));

        return function(options, callback) {
            com.set(options);
        };
    ");

    var getModel = (Func<object, Task<object>>)(async (message) =>
    {
        return store.GetModel((string)message);
    });


    Task.Run(() => func(new
    {
        getModel
    }));
}
  • The last step to getting this to work is running npm install in the webui directory of the build output folder. I use a rake file to build everything, so it's just an extra task (see the entire Rakefile here):
task :npm do |t|

    Dir.chdir "#{project_name}/bin/debug/webui" do
        system 'npm', 'install'
    end

end
  • Any route needing data from .net just needs to require the communicator file and call `getModel`:
var com = require('../communicator');

router.get('/', function (req, res) {

    com.getModel("index", function(value) {

        res.render('index', {
            title: 'Express',
            result: value.Iterations
        });

    });

});

All the code is available on github.

How I am aiming to use it

I am planning on constructing a nuget package to do all of this, so that all a developer needs to do is add the package, and configure which statistics they wish to show up on the web ui.

design, code, net, typing, sql, database, orm

---

Configuring Dapper to work with custom types

22 Jul 2014

In the last post we looked at using custom ID types to help abstract the column type from the domain.

This works well until you start trying to load and save entities using an ORM, as the ORM has no way to know how to map a column to a custom type. ORMs provide extension points to allow you to create these mappings. As I tend to favour using Dapper, we will go through setting it up to work with our custom ID types.

We need to be able to get the raw value out of the id type, but without exposing this to the outside world. To do this we add an internal interface:

internal interface IValueID
{
    object Value();
}

Then update our id struct with a private implementation of the interface, and also mark the only constructor as internal:

public struct PersonID : IValueID
{
    private readonly Guid _id;

    internal PersonID(Guid id)
    {
        _id = id;
    }

    object IValueID.Value()
    {
        return _id;
    }
}

We now can define a class which Dapper can use to do the mapping from uuid to id:

public class PersonIDHandler : SqlMapper.TypeHandler<PersonID>
{
    public override void SetValue(IDbDataParameter parameter, PersonID value)
    {
        parameter.Value = ((IValueID)value).Value();
    }

    public override PersonID Parse(object value)
    {
        return new PersonID((Guid)value);
    }
}

We then need to register the handler with Dapper once on start up of our application:

SqlMapper.AddTypeHandler(new PersonIDHandler());

Now when Dapper loads an object with a property type of PersonID it will invoke the Parse method on PersonIDHandler, and populate the resulting object correctly. It will also work when getting a value from the PersonID property, invoking the SetValue method on PersonIDHandler.

Extension

While the PersonIDHandler works, I really don't want to be creating essentially the same class over and over again for each ID type. We can fix this by using a generic id handler class, and some reflection magic.

We start off by creating a generic class for id handling:

public class CustomHandler<T> : SqlMapper.TypeHandler<T>
{
    private readonly Func<Object, T> _createInstance;

    public CustomHandler()
    {
        var ctor = typeof(T)
            .GetConstructors()
            .Single(c => c.GetParameters().Count() == 1);

        var paramType = ctor
            .GetParameters()
            .First()
            .ParameterType;

        _createInstance = (value) => (T)ctor.Invoke(new[] { Convert.ChangeType(value, paramType) });
    }

    public override void SetValue(IDbDataParameter parameter, T value)
    {
        parameter.Value = ((IValueID)value).Value();
    }

    public override T Parse(object value)
    {
        return _createInstance(value);
    }
}

The constructor of this class just finds a single constructor on our ID type with one argument, and creates a Func which will create an instance of the id passing in the value. We put all this constructor discovery logic into the CustomHandler's constructor as this information only needs to be calculated once, and can then be used for every Parse call.
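The constructor-discovery logic can be exercised in isolation, without Dapper. Below is a sketch using a hypothetical WrappedGuid type standing in for a real ID struct (note the constructor here is public so that GetConstructors can find it):

```csharp
using System;
using System.Linq;

// A hypothetical value type standing in for an ID struct like PersonID.
public struct WrappedGuid
{
    public readonly Guid Value;

    public WrappedGuid(Guid value)
    {
        Value = value;
    }
}

public static class CtorDemo
{
    // Find T's single one-argument constructor and build a factory Func,
    // converting the raw value to the constructor's parameter type first -
    // the same discovery logic CustomHandler<T> performs once, up front.
    public static Func<object, T> MakeFactory<T>()
    {
        var ctor = typeof(T)
            .GetConstructors()
            .Single(c => c.GetParameters().Length == 1);

        var paramType = ctor.GetParameters().First().ParameterType;

        return value => (T)ctor.Invoke(new[] { Convert.ChangeType(value, paramType) });
    }
}
```

Building the Func once in the constructor means each subsequent Parse call is just a single delegate invocation plus a constructor call, rather than a fresh reflection lookup.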

We then need to write something to build an instance of this for each ID type in our system. As all of our IDs need to implement IValueID to work, we can scan for all types in the assembly implementing this interface, and then operate on those.

public class InitialiseDapper : IApplicationStart
{
    public void Initialise()
    {
        var interfaceType = typeof(IValueID);

        var idTypes = interfaceType
            .Assembly
            .GetTypes()
            .Where(t => t.IsInterface == false)
            .Where(t => t.IsAbstract == false)
            .Where(t => t.GetInterfaces().Contains(interfaceType));

        var handler = typeof(CustomHandler<>);

        foreach (var idType in idTypes)
        {
            var ctor = handler
                .MakeGenericType(new[] { idType })
                .GetConstructor(Type.EmptyTypes);

            var instance = (SqlMapper.ITypeHandler)ctor.Invoke(new object[] { });

            SqlMapper.AddTypeHandler(idType, instance);
        }
    }
}

This class first scans the assembly containing IValueID for all types implementing IValueID which are not abstract, and not interfaces themselves. It then goes through each of these types, and builds a new instance of CustomHandler for each type, and registers it with Dapper.

You might notice this is in a class which implements IApplicationStart - In most of my larger projects, I tend to have an interface like this, which defines a single void Initialise(); method. Implementations of the interface get looked for on startup of the application, and their Initialise method called once each.
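A minimal sketch of that convention might look like the following; the IApplicationStart name comes from the post, but the scanning helper is an assumption of mine:

```csharp
using System;
using System.Linq;
using System.Reflection;

public interface IApplicationStart
{
    void Initialise();
}

// Hypothetical startup helper: find every concrete IApplicationStart
// in an assembly and call Initialise once on each.
public static class ApplicationStarter
{
    public static void RunAll(Assembly assembly)
    {
        var starters = assembly
            .GetTypes()
            .Where(t => !t.IsAbstract && !t.IsInterface)
            .Where(t => typeof(IApplicationStart).IsAssignableFrom(t))
            .Select(t => (IApplicationStart)Activator.CreateInstance(t));

        foreach (var starter in starters)
            starter.Initialise();
    }
}

// Example implementation, so the scan has something to find.
public class SampleStart : IApplicationStart
{
    public static bool Ran;

    public void Initialise()
    {
        Ran = true;
    }
}
```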

design, code, net, typing, sql, database, orm

---

Strong Type your entity IDs.

17 Jul 2014

The Database is just an Implementation Detail

A quote from Martin Fowler given during his Architecture talk stated that the Database in your application should just be an implementation detail. I agree with this wholeheartedly, and find that it's really not that difficult to achieve if you think about your architecture carefully.

Having said that, I still see parts of the database implementation leaking out into the domain, mainly in the form of IDs. This might not seem like much of a leak, but it does cause a few problems, especially on larger systems.

The first problem occurs when you have a function taking in an ID of some form, and the parameter name is not really forthcoming on what object's ID it's expecting. This is especially problematic if your ID columns are int based, rather than uuids, as passing any int to the function will return data - just not necessarily the data you were expecting.

The second problem is that it ties you to using the same ID type as the database is using. If the database is just an implementation detail, then it definitely should not be dictating what types your domain should be using.

For example, take the following two classes:

public class Account
{
    public int ID { get; }
    //...
}

public class User
{
    public int ID { get; }
    public IEnumerable<Account> Accounts { get; }
}

The two classes on their own are not unreasonable, but the use of an int for the ID is problematic. Given the following method:

public DateTime GetLastActiveDate(int userID)
{
    return ...
}

Both of the following calls are valid, and neither the code nor the compiler will tell you which one is correct (if any!):

var date1 = GetLastActiveDate(_user.ID);
var date2 = GetLastActiveDate(_user.Accounts.First().ID);

Using the Type System to prevent bad arguments

We can fix this problem by using the Type System to force the correct ID type to be passed in.

First we need to abstract the notion of an ID to be separate from what type its value is. To do this we create some structs, one for each ID in our system:

public struct UserID
{
    private readonly int _value;

    public UserID(int value)
    {
        _value = value;
    }

    public override int GetHashCode()
    {
        return _value;
    }

    public override bool Equals(object obj)
    {
        return obj is UserID && GetHashCode() == obj.GetHashCode();
    }
}

public struct AccountID
{
    private readonly int _value;

    public AccountID(int value)
    {
        _value = value;
    }

    public override int GetHashCode()
    {
        return _value;
    }

    public override bool Equals(object obj)
    {
        return obj is AccountID && GetHashCode() == obj.GetHashCode();
    }
}

Both of our structs store their values immutably so that they cannot be changed after creation, and we override GetHashCode and Equals so that separate instances can be compared for equality properly. Note also that there is no inheritance between the two structs - we do not want the ability for a method to expect a UserID and find someone passing in an AccountID because it inherits.

We can now update our objects to use these IDs:

public class Account
{
    public AccountID ID { get; }
    //...
}

public class User
{
    public UserID ID { get; }
    public IEnumerable<Account> Accounts { get; }
}

And update any method which expects an ID to take the specific type:

public DateTime GetLastActiveDate(UserID userID)
{
    return ...
}

This means that when someone writes this:

var date = GetLastActiveDate(_user.Accounts.First().ID);

The compiler will complain with an error: cannot convert from 'AccountID' to 'UserID'.

Abstracting column type

Doing this work to use custom types instead of native types for our IDs gives us another benefit: we can hide what type the database is using from the domain, meaning we could change our table's key to be a uuid, and the only place we would need to change in code would be the relevant ID class.
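For example, switching UserID's backing value from int to Guid is contained entirely within the struct; method signatures like GetLastActiveDate(UserID userID) do not change. A sketch:

```csharp
using System;

// The backing field changes from int to Guid; callers are unaffected
// because they only ever see the UserID type itself.
public struct UserID
{
    private readonly Guid _value;

    public UserID(Guid value)
    {
        _value = value;
    }

    public override int GetHashCode()
    {
        return _value.GetHashCode();
    }

    public override bool Equals(object obj)
    {
        return obj is UserID && GetHashCode() == obj.GetHashCode();
    }
}
```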

Extra functionality

One more benefit that comes from this approach is that our IDs are now first class citizens in the type world, and we can imbue them with extra functionality.

A system I use has a table with both a uuid column for the primary key, and an int based refnum column for displaying to users, something like this:

person:
    id          : uuid
    forename    : varchar(50)
    surname     : varchar(50)
    dateofbirth : date
    refnum      : int

As we have a PersonID type, we can make that hold both values, and override the ToString method so that when called it displays the user friendly ID:

public struct PersonID
{
    private readonly Guid _id;
    private readonly int _refnum;

    public PersonID(Guid id, int refnum)
    {
        _id = id;
        _refnum = refnum;
    }

    public override int GetHashCode()
    {
        return _id.GetHashCode();
    }

    public override bool Equals(object obj)
    {
        return obj is PersonID && GetHashCode() == obj.GetHashCode();
    }

    public override string ToString()
    {
        return _refnum.ToString();
    }
}

This means that if in the future we decided to convert to using the refnum as the primary key, and drop the uuid column, again all we would need to do would be to update the PersonID type, and the rest of our code base would be unaffected.

design, code, net, typing, sql, database, orm

---

Specific Interfaces

08 Jun 2014

While writing my CruiseCli project, I needed to do some data storage, and so used my standard method of filesystem access, the IFileSystem. This is an interface and implementation which I tend to copy from project to project, and use as is. The interface looks like the following:

public interface IFileSystem
{
    bool FileExists(string path);
    void WriteFile(string path, Stream contents);
    void AppendFile(string path, Stream contents);
    Stream ReadFile(string path);
    void DeleteFile(string path);

    bool DirectoryExists(string path);
    void CreateDirectory(string path);
    IEnumerable<string> ListDirectory(string path);
    void DeleteDirectory(string path);
}

And the standard implementation looks like the following:

public class FileSystem : IFileSystem
{
    public bool FileExists(string path)
    {
        return File.Exists(path);
    }

    public void WriteFile(string path, Stream contents)
    {
        using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write))
        {
            contents.CopyTo(fs);
        }
    }

    public Stream ReadFile(string path)
    {
        return new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
    }
    //snip...
}

This (I think) is a very good solution to file system access as I can easily mock the interface and add expectations and stub values to it for testing.

However, on the CruiseCli project, I realised I didn't need most of what the interface provided, so I chopped all the bits off I didn't want, and added a property for a base directory I was using all the time:

public interface IFileSystem
{
    string HomePath { get; }

    void WriteFile(string path, Stream contents);
    Stream ReadFile(string path);
    bool FileExists(string path);
}

Which was better than the original, as I have far fewer methods to worry about, and thus it is more specific to my use case.

But I got thinking later in the project: "what are my use cases?", "what do I actually want to do with the filesystem?" The answer was simple: read a config file, and write to the same config file. Nothing else.

So why not make the interface even more specific in this case:

public interface IConfiguration
{
    void Write(Stream contents);
    Stream Read();
}

Even simpler, and I now have the benefit of not caring what the filepaths are outside of the implementing class.

This means that in my integration tests, I can write an in-memory IConfiguration with far less hassle, and not need to worry about fun things like character encoding and case sensitivity on filepaths!
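As a sketch of what that in-memory test double might look like (this class is hypothetical, not from CruiseCli):

```csharp
using System.IO;
using System.Linq;

public interface IConfiguration
{
    void Write(Stream contents);
    Stream Read();
}

// Hypothetical test double: stores the config bytes in memory,
// so tests never touch the real filesystem.
public class InMemoryConfiguration : IConfiguration
{
    private byte[] _contents = new byte[0];

    public void Write(Stream contents)
    {
        using (var buffer = new MemoryStream())
        {
            contents.CopyTo(buffer);
            _contents = buffer.ToArray();
        }
    }

    public Stream Read()
    {
        return new MemoryStream(_contents, writable: false);
    }
}
```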

In a more complicated system, I would probably keep this new IConfiguration interface for accessing the config file, and make the concrete version depend on the more general IFileSystem:

public class Configuration : IConfiguration
{
    private const string FileName = ".cruiseconfig";
    private readonly IFileSystem _fileSystem;

    public Configuration(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public void Write(Stream contents)
    {
        _fileSystem.WriteFile(Path.Combine(_fileSystem.HomePath, FileName), contents);
    }

    public Stream Read()
    {
        return _fileSystem.ReadFile(Path.Combine(_fileSystem.HomePath, FileName));
    }
}

For a small system this would probably be overkill, but for a much larger project, this could help provide a better separation of responsibilities.

design, code, net

---

Using StructureMap Registries for better separation

19 May 2014

When it comes to configuring StructureMap, it supports the use of Registries. Registries support everything that the standard configure method does (new Container(c => { /* */ });).

There are two main reasons that I use registries rather than doing all my configuration in the Container's lambda: separation of concerns (one registry per area of code) and easier testing (which we will go into shortly).

The only downside I can see to using registries is that it can scatter your configuration across your codebase - but if you have ReSharper, doing a 'Find Implementations' on Registry will find them all for you, so it really isn't much of a downside.

Separation of Concerns

Taking NuCache as an example, in our app start we have ConfigureContainer.cs:

public static void Register(HttpConfiguration config)
{
    var container = new Container(c => c.Scan(a =>
    {
        a.TheCallingAssembly();
        a.WithDefaultConventions();
        a.LookForRegistries();
    }));

    config.DependencyResolver = new StructureMapDependencyResolver(container);
}

This snippet of code gets called as part of the AppStart, and tells StructureMap to use the default conventions (eg: IFileSystem => FileSystem), and to process any registries it finds. The app then has multiple Registries with the actual configuration in (usually one per namespace, although not all namespaces have a registry).

For example, we have these two registries:

public class InfrastructureRegistry : Registry
{
    public InfrastructureRegistry()
    {
        For<IPackageCache>()
            .Use<FileSystemPackageCache>()
            .OnCreation(c => c.Initialise())
            .Singleton();
    }
}

public class ProxyBehaviourRegistry : Registry
{
    public ProxyBehaviourRegistry ()
    {
        Scan(a =>
        {
            a.TheCallingAssembly();
            a.AddAllTypesOf<IProxyBehaviour>();
        });
    }
}

The InfrastructureRegistry just specifies how to resolve an IPackageCache, as it requires some extra initialisation and needs to be treated as a singleton.

The ProxyBehaviourRegistry tells StructureMap to add all implementations of IProxyBehaviour, so that when we construct a ProxyBehaviourSet, which has a constructor parameter of IEnumerable<IProxyBehaviour>, all the implementations are passed in for us.

Easier Testing

We can use the Registry feature of StructureMap to allow us to test parts of code as they would be in production. This mostly applies to acceptance-style testing: for example, when I am testing the XmlRewriter, I want it to behave exactly as it would in production, with the same IXElementTransforms passed in.

To do this, we can use the RewriterRegistry:

var container = new Container(new RewriterRegistry());
var rewriter = container.GetInstance<XmlRewriter>();

Here we create a new container with the RewriterRegistry passed directly into the constructor. This gives us access to a container completely configured for using the XmlRewriter. We can then fake the inputs and outputs to the method under test, keeping the whole system in a known production-like state.

using (var inputStream = GetType().Assembly.GetManifestResourceStream("NuCache.Tests.Packages.xml"))
using (var outputStream = new MemoryStream())
{
    rewriter.Rewrite(targetUri, inputStream, outputStream);
    outputStream.Position = 0;

    _result = XDocument.Load(outputStream);
    _namespace = _result.Root.Name.Namespace;
}

Hopefully this shows how useful and powerful a feature StructureMap's Registries are.

design, code, net, structuremap, separation, testing

---