Good Design in Warcraft Addons/Lua

23 Nov 2014

Lack of Encapsulation in Addons

I first noticed a lack of good design in addon code when I started trying to tweak existing addons to be slightly different.

One of the standout examples was a Threat Meter (you know which one I mean). It works well, but I felt like writing my own, to make it really fit into my UI with as little overhead as possible. Not knowing how to even begin writing a Threat Meter, I downloaded a copy, and opened its source directory... to discover that the entire addon is one 3500+ line file, with 16 Ace.* dependencies.

When I had finished my Threat Meter, I had two files (170 lines and 130 lines), and one dependency (Dark.Core, which all my addons use). I learnt a lot while reading the source for the original threat meter - it is very customisable, is externally skinnable, and has some very good optimisations in it. But it also has a lot of unused variables (which are named very similarly to used ones), and much of its code could be separated out, making it easier for newer project members to modify.

These observations repeat across a great many addons. The three main problems I see are:

  • Pollution of the global namespace
  • All code in one file
  • No separation of concerns

All of this makes it harder for new developers to pick up and learn how to maintain and write addons. These are all fairly straightforward problems to solve, so let's address them!

Pollution of the Global Namespace

A lot of addons you find declare many variables as global so they can access them anywhere within their addon. For example, this is pretty standard:

MyAddonEventHandler = function(self, event, ...)

    if event == "PLAYER_ENTERING_WORLD" then
        --do something useful
    end
end

MyAddonEvents = CreateFrame("Frame", "MyAddonEventFrame")

MyAddonEvents:RegisterEvent("PLAYER_ENTERING_WORLD")
MyAddonEvents:SetScript("OnEvent", MyAddonEventHandler)

This is an example of polluting the global namespace: the entire UI now has access to MyAddonEvents, MyAddonEventFrame, and MyAddonEventHandler. It is trivial to rewrite this so that nothing is exposed to the global namespace:

local events = CreateFrame("Frame")
local handler = function(self, event, ...)
    
    if event == "PLAYER_ENTERING_WORLD" then
        --do something useful
    end

end

events:RegisterEvent("PLAYER_ENTERING_WORLD")
events:SetScript("OnEvent", handler)

This version exposes nothing to the global namespace, and performs exactly the same function. You can even get rid of the handler variable entirely and pass the function directly into SetScript, as shown below.
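That variant collapses the whole thing to:

local events = CreateFrame("Frame")

events:RegisterEvent("PLAYER_ENTERING_WORLD")
events:SetScript("OnEvent", function(self, event, ...)

    if event == "PLAYER_ENTERING_WORLD" then
        --do something useful
    end

end)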

However, writing your code like this means you can't access any of it from another file (either a lua file, or shudder a frameXml file). Using namespaces, we can get around this limitation without polluting the global namespace.

Splitting into Separate Files

So, how do we access local variables from other files? Warcraft addons come with a feature where every lua file is provided with two arguments: addon and ns. The first of these is the addon's name as a string, and the second is an empty table. I almost never use the addon parameter, but the ns (or "namespace") parameter is key to everything.

You can access these two variables by writing this as the first line of your lua file:

local addon, ns = ...

print("Hello from, " .. addon)

By using the ns, we can put our own variables into it to access from other files. For example, we have an event system in one file:

eventSystem.lua

local addon, ns = ...

local events = CreateFrame("Frame")
local handlers = {}

events:SetScript("OnEvent", function(self, event, ...)
    
    local eventHandlers = handlers[event] or {}

    for i, handler in ipairs(eventHandlers) do 
        handler(event, ...)
    end

end)

ns.register = function(event, handler)
    
    handlers[event] = handlers[event] or {}
    table.insert(handlers[event], handler)

    events:RegisterEvent(event)

end

Note how the register function is defined on the ns. This means that any other file in our addon can do this to handle an event:

goldPrinter.lua

local addon, ns = ...

ns.register("PLAYER_MONEY", function() 
    
    local gold = floor(money / (COPPER_PER_SILVER * SILVER_PER_GOLD))
    local silver = floor((money - (gold * COPPER_PER_SILVER * SILVER_PER_GOLD)) / COPPER_PER_SILVER)
    local copper = mod(money, COPPER_PER_SILVER)

    local moneyString = ""
    local separator = ""

    if ( gold > 0 ) then
        moneyString = format(GOLD_AMOUNT_TEXTURE, gold, 0, 0)
        separator = " "
    end
    if ( silver > 0 ) then
        moneyString = moneyString .. separator .. format(SILVER_AMOUNT_TEXTURE, silver, 0, 0)
        separator = " "
    end
    if ( copper > 0 or moneyString == "" ) then
        moneyString = moneyString .. separator .. format(COPPER_AMOUNT_TEXTURE, copper, 0, 0)
    end

    print("You now have " .. moneyString)

end)

A pretty trivial example, but we have managed to write a two-file addon without putting anything in the global namespace.

We have also managed to separate our concerns - the goldPrinter does not care what raises the events, and the eventSystem knows nothing about gold printing, just how to delegate events. There is an efficiency gain here too: anything else in our addon that needs events uses the same eventSystem, meaning we only need to create one frame for the entire addon to receive events.
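For example, another file in the same addon (the file name here is just illustrative) can listen for combat starting without creating any extra frames:

combatWarning.lua

local addon, ns = ...

ns.register("PLAYER_REGEN_DISABLED", function()
    print("Entering combat!")
end)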

Structure

Now that we can separate things into individual files, we gain a slightly different problem: how to organise those files. I have found over time that I end up with roughly the same structure each time, and others might benefit from it too.

All my addons start with four files:

  • AddonName.toc
  • initialise.lua
  • config.lua
  • run.lua

The toc file, other than the usual header information, is laid out in the order the files will run. For example, this is the file segment of my bags addon's toc file:

initialise.lua
config.lua

models\classifier.lua
models\classifiers\equipmentSet.lua
models\itemModel.lua
models\model.lua

groups\group.lua
groups\bagGroup.lua
groups\bagContainer.lua

views\item.lua
views\goldDisplay.lua
views\currencyDisplay.lua
views\bankBagBar.lua

sets\container.lua
sets\bag.lua
sets\bank.lua

run.lua

The initialise.lua file is the first thing to run. All this tends to do is set up any sub-namespaces on ns, and copy external dependencies into ns.lib:

local addon, ns = ...

ns.models = {}
ns.groups = {}
ns.views = {}
ns.sets = {}

local core = Dark.core

ns.lib = {
    fonts = core.fonts,
    events = core.events,
    slash = core.slash,
}

By copying in the dependencies, we not only save a global lookup each time we need, say, the event system, but we also gain an abstraction point. If we want to replace the event system, as long as the replacement has the right function names, we can just assign the new one to the lib: ns.lib.events = replacementEvents:new()
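Consuming code then goes through ns.lib rather than any global. Assuming the library's event system exposes a register function similar to the one shown earlier, a file might do:

local addon, ns = ...

local events = ns.lib.events

events.register("PLAYER_ENTERING_WORLD", function()
    -- set up the ui
end)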

The sub-namespaces correspond to folders in the addon (much the same practice used by C# developers), so for example the classifier.lua file might have this in it:

local addon, ns = ...

local classifier = {
    new = function() end,
    update = function() end,
    classify = function(item) end,
}

ns.models.classifier = classifier

The config file should be fairly simple, with not much more than a couple of tables in it:

local addon, ns = ...

ns.config = {
    buttonSize = 24,
    spacing = 4,
    screenPadding = 10,
    currencies = {
        823, -- apexis
        824,  -- garrison resources
    }
}

And finally, the run.lua file is what makes your addon come to life:

local addon, ns = ...

local sets = ns.sets

local pack = sets.bag:new()
local bank = sets.bank:new()

local ui = ns.controllers.uiIntegration.new(pack.frame, bank.frame)
ui.hook()

--expose 
DarkBags = {
    addClassifier = ns.classifiers.add
}

If you need to expose something to the entire UI or other addons, that's fine - but make sure you only expose what you want to. In the example above, the DarkBags global has only one method, addClassifier, because that is all I want other addons to be able to do.
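Another addon can then add its own classifier without touching any of DarkBags' internals. Assuming addClassifier accepts a classifier table shaped like the one sketched earlier, usage might look like:

-- in a different addon entirely
DarkBags.addClassifier({
    new = function() end,
    update = function() end,
    classify = function(item)
        -- decide which group the item belongs in
    end,
})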

Wrapping Up

I hope this helps other people with their addons - I know I wish that I had gotten to this structure and style a lot sooner than I did.

There will be a few more posts incoming covering encapsulation, objects and inheritance in more detail, so stay tuned.

design, code, lua, warcraft

---

Edge.js for Embedded Webuis

04 Aug 2014

Where I work we have a number of Windows services, each of which has a lot of stats it could expose. Currently they can only be interrogated through their log files and whatever notifications we receive.

I have been toying with the idea of hosting a website in-process which would give a simple dashboard ui and access to a live view of the log file. The idea first struck me when I was experimenting with FubuMvc, as it has an EmbeddedFubuMvcServer, which is very easy to use:

FubuMvcPackageFacility.PhysicalRootPath = @"Backend\";

using (var server = EmbeddedFubuMvcServer.For<EmbeddedBackend>(FubuMvcPackageFacility.PhysicalRootPath))
{

    Console.WriteLine("Some long running process, with a web-backend on :5500");

    var p = server.Services.GetInstance<IProcessor>();

    var t = new Task(p.Start);
    t.Start();

    Console.ReadKey();
}

But while I like this, FubuMvc embedded seems like overkill.

Wouldn't it be nice if we could host an expressjs app inside our process? They are very lightweight, and getting one set up involves almost no coding (especially if you use the express command line tool).

Enter Edge.js

The Edge.js project provides an in-process bridge between the .net and nodejs worlds, and allows for communication between the two...
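Before wiring up a whole express app, a minimal example shows the shape of the bridge (this assumes the edge.js package is already installed):

using EdgeJs;

// Edge.Func compiles a node function and hands back a Func<object, Task<object>>
var greet = Edge.Func(@"
    return function (data, callback) {
        callback(null, 'node says hello to ' + data);
    }
");

Console.WriteLine(greet(".net").Result); // node says hello to .net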

Steps:

  • Create a new application (eg: ServiceWithEdge)

  • Create a subdirectory for the webui in your applications root (eg, next to the csproj file)

    • ServiceWithEdge\ServiceWithEdge\webui
  • If you don't have express-generator installed, get it:

    • npm install -g express-generator
  • cd to your webui directory, and create an express application:

    • express - there are some options if you want, see the guide
  • In visual studio, include the webui directory

    • Mark all files as content and copy if newer
  • Add a new js file in your webui root:

var options;

exports.set = function (m) {
    options = m;
};

exports.getModel = function (modelName, action) {

    options.getModel(modelName, function (error, result) {

        if (error) throw error;

        action(result);
    });

};
  • Add the Edge.js package:

    • PM> Install-Package Edge.js
  • The following function will run the webui, and inject a callback for getting models from .net

private static void RunWebui(ModelStore store)
{
    var func = Edge.Func(@"
        var app = require('../webui/app');
        var com = require('../webui/communicator');

        app.set('port', process.env.PORT || 3000);

        var server = app.listen(app.get('port'));

        return function(options, callback) {
            com.set(options);
        };
    ");

    var getModel = (Func<object, Task<object>>)(async (message) =>
    {
        return store.GetModel((string)message);
    });


    Task.Run(() => func(new
    {
        getModel
    }));
}
  • The last step to getting this to work is running npm install in the webui directory of the build output folder. I use a rake file to build everything, so it's just an extra task (see the entire Rakefile here):

task :npm do |t|

    Dir.chdir "#{project_name}/bin/debug/webui" do
        system 'npm', 'install'
    end

end
Any route needing data from .net just needs to require the communicator file and call `getModel`:

var com = require('../communicator');

router.get('/', function (req, res) {

    com.getModel("index", function(value) {

        res.render('index', {
            title: 'Express',
            result: value.Iterations
        });

    });

});

All the code is available on github.

How I am aiming to use it

I am planning on constructing a nuget package to do all of this, so that all a developer needs to do is add the package, and configure which statistics they wish to show up on the web ui.
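I haven't built the package yet, so this is purely hypothetical, but the shape I have in mind is something like:

// every name here is invented - a sketch of the intended package API
var webui = EmbeddedWebui.Start(config =>
{
    config.Port = 3000;
    config.AddStatistic("iterations", () => processor.Iterations);
});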

design, code, net

---

Configuring Dapper to work with custom types

22 Jul 2014

In the last post we looked at using custom ID types to help abstract the column type from the domain.

This works well until you start trying to load and save entities using an ORM, as the ORM has no way of knowing how to map a column to a custom type. ORMs provide extension points to allow you to create these mappings, and as I tend to favour Dapper, we will go through setting it up to work with our custom ID types.

We need to be able to get the raw value out of the id type, but without exposing this to the outside world. To do this we create an internal interface:

internal interface IValueID
{
    object Value();
}

Then we update our id struct with an explicit implementation of the interface, and also mark the only constructor as internal:

public struct PersonID : IValueID
{
    private readonly Guid _id;

    internal PersonID(Guid id)
    {
        _id = id;
    }

    object IValueID.Value()
    {
        return _id;
    }
}

We can now define a class which Dapper can use to map from uuid to id:

public class PersonIDHandler : SqlMapper.TypeHandler<PersonID>
{
    public override void SetValue(IDbDataParameter parameter, PersonID value)
    {
        parameter.Value = ((IValueID)value).Value();
    }

    public override PersonID Parse(object value)
    {
        return new PersonID((Guid)value);
    }
}

We then need to register the handler with Dapper, once, on start up of our application:

SqlMapper.AddTypeHandler(new PersonIDHandler());

Now when Dapper loads an object with a property type of PersonID it will invoke the Parse method on PersonIDHandler, and populate the resulting object correctly. It will also work when getting a value from the PersonID property, invoking the SetValue method on PersonIDHandler.
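For example, a quick sketch of both directions (assuming a people table, a Person class with an ID property of type PersonID, and an open connection - the names here are illustrative):

// loading: Dapper sees the PersonID property and calls PersonIDHandler.Parse
var person = connection.Query<Person>(
    "select * from people where id = @id",
    new { id = existingID }) // existingID is a PersonID; SetValue unwraps it for the parameter
    .Single();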

Extension

While the PersonIDHandler works, I really don't want to be creating essentially the same class over and over again for each ID type. We can fix this by using a generic handler class and some reflection magic.

We start off by creating a generic class for id handling:

public class CustomHandler<T> : SqlMapper.TypeHandler<T>
{
    private readonly Func<Object, T> _createInstance;

    public CustomHandler()
    {
        var ctor = typeof(T)
            .GetConstructors()
            .Single(c => c.GetParameters().Count() == 1);

        var paramType = ctor
            .GetParameters()
            .First()
            .ParameterType;

        _createInstance = (value) => (T)ctor.Invoke(new[] { Convert.ChangeType(value, paramType) });
    }

    public override void SetValue(IDbDataParameter parameter, T value)
    {
        parameter.Value = ((IValueID)value).Value();
    }

    public override T Parse(object value)
    {
        return _createInstance(value);
    }
}

The constructor of this class finds the single one-argument constructor on our ID type, and creates a Func which will build an instance of the ID from a raw value. We put all this constructor-discovery logic into the CustomHandler's constructor as it only needs to run once, and the result can then be reused for every Parse call.

We then need to write something to build an instance of this for each ID type in our system. As all of our IDs need to implement IValueID to work, we can scan for all types in the assembly implementing this interface, and then operate on those.

public class InitialiseDapper : IApplicationStart
{
    public void Initialise()
    {
        var interfaceType = typeof(IValueID);

        var idTypes = interfaceType
            .Assembly
            .GetTypes()
            .Where(t => t.IsInterface == false)
            .Where(t => t.IsAbstract == false)
            .Where(t => t.GetInterfaces().Contains(interfaceType));

        var handler = typeof(CustomHandler<>);

        foreach (var idType in idTypes)
        {
            var ctor = handler
                .MakeGenericType(new[] { idType })
                .GetConstructor(Type.EmptyTypes);

            var instance = (SqlMapper.ITypeHandler)ctor.Invoke(new object[] { });

            SqlMapper.AddTypeHandler(idType, instance);
        }
    }
}

This class first scans the assembly containing IValueID for all types implementing it which are neither abstract nor interfaces themselves. It then goes through each of these types, builds a new instance of CustomHandler for it, and registers it with Dapper.

You might notice this is in a class which implements IApplicationStart. In most of my larger projects I tend to have an interface like this, which defines a single void Initialise(); method. Implementations of the interface are looked for on startup of the application, and their Initialise methods called once each.

design, code, net, typing, sql, database, orm

---

Strong Type your entity IDs.

17 Jul 2014

The Database is just an Implementation Detail

In his Architecture talk, Martin Fowler said that the database in your application should be just an implementation detail. I agree with this wholeheartedly, and find that it's really not that difficult to achieve if you think about your architecture carefully.

Having said that, I still see parts of the database implementation leaking out into the domain, mainly in the form of IDs. This might not seem like much of a leak, but it does cause a few problems, especially on larger systems.

The first problem occurs when you have a function taking in an ID of some form, and the parameter name is not really forthcoming about which object's ID it is expecting. This is especially problematic if your ID columns are int-based rather than uuids, as passing any int to the function will return data - just not necessarily the data you were expecting.

The second problem is that it ties you to using the same ID type as the database is using. If the database is just an implementation detail, then it definitely should not be dictating what types your domain should be using.

For example, take the following two classes:

public class Account
{
    public int ID { get; }
    //...
}

public class User
{
    public int ID { get; }
    public IEnumerable<Account> Accounts { get; }
}

The two classes on their own are not unreasonable, but the use of an int for the ID is problematic. Given the following method:

public DateTime GetLastActiveDate(int userID)
{
    return ...
}

Both of the following calls are valid, and neither the code nor the compiler will tell you which one is correct (if any!):

var date1 = GetLastActiveDate(_user.ID);
var date2 = GetLastActiveDate(_user.Accounts.First().ID);

Using the Type System to prevent bad arguments

We can fix this problem by using the Type System to force the correct ID type to be passed in.

First we need to abstract the notion of an ID to be separate from what type its value is. To do this we create some structs, one for each ID in our system:

public struct UserID
{
    private readonly int _value;

    public UserID(int value)
    {
        _value = value;
    }

    public override int GetHashCode()
    {
        return _value;
    }

    public override bool Equals(object obj)
    {
        return obj is UserID && GetHashCode() == obj.GetHashCode();
    }
}

public struct AccountID
{
    private readonly int _value;

    public AccountID(int value)
    {
        _value = value;
    }

    public override int GetHashCode()
    {
        return _value;
    }

    public override bool Equals(object obj)
    {
        return obj is AccountID && GetHashCode() == obj.GetHashCode();
    }
}

Both of our structs store their values immutably so that they cannot be changed after creation, and we override GetHashCode and Equals so that separate instances can be compared for equality properly. Note also that there is no inheritance between the two structs - we do not want the ability for a method to expect a UserID and find someone passing in an AccountID because it inherits.
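One possible refinement (not part of the pattern above): implementing IEquatable<T> and the == / != operators avoids boxing during comparisons and lets call sites compare IDs directly. A sketch for UserID:

public struct UserID : IEquatable<UserID>
{
    private readonly int _value;

    public UserID(int value)
    {
        _value = value;
    }

    public bool Equals(UserID other)
    {
        return _value == other._value;
    }

    public override bool Equals(object obj)
    {
        return obj is UserID && Equals((UserID)obj);
    }

    public override int GetHashCode()
    {
        return _value;
    }

    public static bool operator ==(UserID left, UserID right)
    {
        return left.Equals(right);
    }

    public static bool operator !=(UserID left, UserID right)
    {
        return !left.Equals(right);
    }
}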

We can now update our objects to use these IDs:

public class Account
{
    public AccountID ID { get; }
    //...
}

public class User
{
    public UserID ID { get; }
    public IEnumerable<Account> Accounts { get; }
}

And we update any method which expects an ID to take the specific type:

public DateTime GetLastActiveDate(UserID userID)
{
    return ...
}

This means that when someone writes this:

var date = GetLastActiveDate(_user.Accounts.First().ID);

The compiler will complain with an error: cannot convert from 'AccountID' to 'UserID'.

Abstracting column type

Using custom types instead of native types for our IDs gives us another benefit: we can hide what type the database is using from the domain, meaning we could change our table's key to be a uuid, and the only place we would need to change in code would be the relevant ID class.

Extra functionality

One more benefit that comes from this approach is that our IDs are now first class citizens in the type world, and we can imbue them with extra functionality.

A system I use has a table with both a uuid column for the primary key, and an int-based refnum column for displaying to users, something like this:

person:
    id          : uuid
    forename    : varchar(50)
    surname     : varchar(50)
    dateofbirth : date
    refnum      : int

As we have a PersonID type, we can make it hold both values, and override the ToString method so that when called it displays the user-friendly ID:

public struct PersonID
{
    private readonly Guid _id;
    private readonly int _refnum;

    public PersonID(Guid id, int refnum)
    {
        _id = id;
        _refnum = refnum;
    }

    public override int GetHashCode()
    {
        return _id.GetHashCode();
    }

    public override bool Equals(object obj)
    {
        return obj is PersonID && GetHashCode() == obj.GetHashCode();
    }

    public override string ToString()
    {
        return _refnum.ToString();
    }
}

This means that if in the future we decided to convert to using the refnum as the primary key, and drop the uuid column, again all we would need to do would be to update the PersonID type, and the rest of our code base would be unaffected.
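The ToString override also makes user-facing output trivial; string concatenation calls it for us, so given a person instance:

// prints the user friendly refnum rather than the uuid
Console.WriteLine("Viewing person " + person.ID);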

design, code, net, typing, sql, database, orm

---

Specific Interfaces

08 Jun 2014

While writing my CruiseCli project, I needed to do some data storage, and so used my standard method of filesystem access, the IFileSystem. This is an interface and implementation which I tend to copy from project to project, and use as is. The interface looks like the following:

public interface IFileSystem
{
    bool FileExists(string path);
    void WriteFile(string path, Stream contents);
    void AppendFile(string path, Stream contents);
    Stream ReadFile(string path);
    void DeleteFile(string path);

    bool DirectoryExists(string path);
    void CreateDirectory(string path);
    IEnumerable<string> ListDirectory(string path);
    void DeleteDirectory(string path);
}

And the standard implementation looks like the following:

public class FileSystem : IFileSystem
{
    public bool FileExists(string path)
    {
        return File.Exists(path);
    }

    public void WriteFile(string path, Stream contents)
    {
        using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write))
        {
            contents.CopyTo(fs);
        }
    }

    public Stream ReadFile(string path)
    {
        return new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
    }
    //snip...
}

This (I think) is a very good solution to file system access as I can easily mock the interface and add expectations and stub values to it for testing.
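For example, with NSubstitute (any mocking library offers an equivalent; the file name here is just illustrative), a test can stub the filesystem in a couple of lines:

var fileSystem = Substitute.For<IFileSystem>();

fileSystem.FileExists(".cruiseconfig").Returns(true);
fileSystem.ReadFile(".cruiseconfig").Returns(new MemoryStream(Encoding.UTF8.GetBytes("some config")));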

However, on the CruiseCli project I realised I didn't need most of what the interface provided, so I chopped off all the bits I didn't want, and added a property for the base directory I was using all the time:

public interface IFileSystem
{
    string HomePath { get; }

    void WriteFile(string path, Stream contents);
    Stream ReadFile(string path);
    bool FileExists(string path);
}

This was better than the original, as I have far fewer methods to worry about, and it is more specific to my use case.

But I got thinking later in the project: "what are my use cases?", "what do I actually want to do with the filesystem?" The answer was simple: read a config file, and write to the same config file. Nothing else.

So why not make the interface even more specific in this case:

public interface IConfiguration
{
    void Write(Stream contents);
    Stream Read();
}

Even simpler, and I now have the benefit of not caring what the filepaths are outside of the implementing class.

This means that in my integration tests, I can write an in-memory IConfiguration with far less hassle, and not need to worry about fun things like character encoding and case sensitivity on filepaths!
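A minimal in-memory implementation - just a sketch of what that test double might look like:

public class InMemoryConfiguration : IConfiguration
{
    private byte[] _contents = new byte[0];

    public void Write(Stream contents)
    {
        // snapshot the stream so later reads see the written content
        using (var ms = new MemoryStream())
        {
            contents.CopyTo(ms);
            _contents = ms.ToArray();
        }
    }

    public Stream Read()
    {
        return new MemoryStream(_contents);
    }
}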

In a more complicated system, I would probably keep this new IConfiguration interface for accessing the config file, and make the concrete version depend on the more general IFileSystem:

public class Configuration : IConfiguration
{
    private const string FileName = ".cruiseconfig";
    private readonly IFileSystem _fileSystem;

    public Configuration(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public void Write(Stream contents)
    {
        _fileSystem.WriteFile(Path.Combine(_fileSystem.HomePath, FileName), contents);
    }

    public Stream Read()
    {
        return _fileSystem.ReadFile(Path.Combine(_fileSystem.HomePath, FileName));
    }
}

For a small system this would probably be overkill, but for a much larger project, this could help provide a better separation of responsibilities.

design, code, net

---