Locking Vault Down with Policies

23 Jun 2018

The final part of my Vault miniseries focuses on permissioning, which is provided by Vault’s Policies.

As everything in Vault is represented as a path, the policies DSL (Domain Specific Language) just needs to apply permissions to paths to lock things down. For example, to allow all operations on the cubbyhole secret engine, we would define this policy:

path "cubbyhole/*" {
    capabilities = ["create", "read", "update", "delete", "list"]
}

Vault comes with a default policy which allows token operations (such as looking up its own token info, and renewing and revoking itself), and cubbyhole access.
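If you want to see exactly what that covers, you can read the policy back:

vault policy read default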

Let’s combine the last two posts (Managing Postgres Connection Strings with Vault and Secure Communication with Vault) and create a Policy which will allow the use of generated database credentials. If you want more details on the how/why of the set up phase, see those two posts.

Setup

First, we’ll create two containers which will get removed on exit - a Postgres one and a Vault one. Vault is being started in dev mode, so we don’t need to worry about initialising and unsealing it.

docker run --rm -d -p 5432:5432 -e 'POSTGRES_PASSWORD=postgres' postgres:alpine
docker run --rm -d -p 8200:8200 --cap-add=IPC_LOCK -e VAULT_DEV_ROOT_TOKEN_ID=vault vault

Next, we’ll create our Postgres user account which Vault will use to create temporary credentials:

psql --username postgres --dbname postgres
psql> create role VaultAdmin with Login password 'vault' CreateRole;
psql> grant connect on database postgres to vaultadmin;

Let’s also configure the environment to talk to Vault as an administrator, and enable the two Vault plugins we’ll need:

export VAULT_ADDR="http://localhost:8200"
export VAULT_TOKEN="vault"

vault auth enable approle
vault secrets enable database

We’ll also set up our database secret engine, and configure database role creation:

vault write database/config/postgres_demo \
    plugin_name=postgresql-database-plugin \
    allowed_roles="reader" \
    connection_url="postgresql://{{username}}:{{password}}@10.0.75.1:5432/postgres?sslmode=disable" \
    username="VaultAdmin" \
    password="vault"

vault write database/roles/reader \
    db_name=postgres_demo \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
        GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
    default_ttl="10m" \
    max_ttl="1h"

Creating a Policy

First, we need to create the policy. This can be supplied inline on the command line, but reading it from a file means it can be source-controlled, and you get something readable too!

While the filename doesn’t need to match the policy name, it helps make it a bit clearer if it does match, so we’ll call this file postgres-connector.hcl.

# vault read database/creds/reader
path "database/creds/reader" {
    capabilities = ["read"]
}

We can then register this policy into Vault. The write documentation indicates that you need to prefix the file path with @, but that doesn’t work for me:

vault policy write postgres-connector postgres-connector.hcl
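To check the policy registered correctly, you can list the policies and read ours back:

vault policy list
vault policy read postgres-connector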

Setup AppRoles

As before, we’ll create a demo_app role for our application to use to get a token. However, this time we’ll specify the policies field, passing in both the default policy and our custom postgres-connector policy.

vault write auth/approle/role/demo_app \
    policies="postgres-connector,default"

When we generate our client token using the secret_id and role_id, we’ll get a token which can create database credentials, as well as access the cubbyhole.

The final admin task is to generate and save the secret_id and role_id:

vault write -f -field=secret_id auth/approle/role/demo_app/secret-id
vault read -field=role_id auth/approle/role/demo_app/role-id

Creating a Token and Accessing the Database

Opening a new command line window, we need to generate our client token. Take the two IDs output from the admin window, and use them in the following code block:

export VAULT_ADDR="http://localhost:8200"
SECRET_ID="" # from the 'admin' window!
ROLE_ID="" # from the 'admin' window!

export VAULT_TOKEN=$(curl -X POST --data "{ \"role_id\":\"$ROLE_ID\", \"secret_id\":\"$SECRET_ID\" }" $VAULT_ADDR/v1/auth/approle/login | jq -r .auth.client_token)
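If you want to confirm the token has the right policies attached, vault token lookup will show them; the output should include a policies line listing both default and postgres-connector:

vault token lookup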

Now that we have a client token, we can generate database credentials:

vault read database/creds/reader
# Key                Value
# ---                -----
# lease_id           database/creds/reader/dc2ae2b6-c709-0e2f-49a6-36b45aa84490
# lease_duration     10m
# lease_renewable    true
# password           A1a-1kAiN0gqU07BE39N
# username           v-approle-reader-incldNFPhixc1Kj25Rar-1529764057

Which can also be renewed:

vault lease renew database/creds/reader/dc2ae2b6-c709-0e2f-49a6-36b45aa84490
# Key                Value
# ---                -----
# lease_id           database/creds/reader/dc2ae2b6-c709-0e2f-49a6-36b45aa84490
# lease_duration     10m
# lease_renewable    true

However, if we try to write to the database roles, we get an error:

vault write database/roles/what db_name=postgres_demo
# Error writing data to database/roles/what: Error making API request.
#
# URL: PUT http://localhost:8200/v1/database/roles/what
# Code: 403. Errors:
#
# * permission denied

Summary

It is a good idea to have separate fine-grained policies, which can then be grouped up against separate AppRoles, allowing each AppRole to have just the permissions it needs. For example, you could have the following Policies:

  • postgres-connection
  • postgres-admin
  • rabbitmq-connection
  • kafka-consumer

You would then have several AppRoles defined, each using different Policies (see the sketch after this list):

  • App1: rabbitmq-connection, postgres-connection
  • App2: kafka-consumer, rabbitmq-connection
  • App3: postgres-admin
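Setting this up is just a matter of writing each AppRole with its list of policies, the same way we did for demo_app (the role and policy names here are illustrative):

vault write auth/approle/role/app1 policies="rabbitmq-connection,postgres-connection"
vault write auth/approle/role/app2 policies="kafka-consumer,rabbitmq-connection"
vault write auth/approle/role/app3 policies="postgres-admin"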

This helps encourage you to have a separate AppRole for each of your applications!

Finally, the Vault website has a guide on how to do this too…which I only found after writing this! At least what I wrote matches up with their guide pretty well, other than that I also use AppRole authentication (and so should you!)

vault, security, microservices

---

Secure Communication with Vault

22 Jun 2018

I think Vault by Hashicorp is a great product - I particularly love how you can do dynamic secret generation (e.g. for database connections). But how do you validate that the application requesting the secret is allowed to perform that action? How do you know it’s not someone or something impersonating your application?

While musing on this at an airport the other day, my colleague Patrik sent me a link to a StackOverflow post about this very question.

The summary is this:

  1. Use an AppRole rather than a plain token
  2. Bake the RoleID into your application
  3. Provide a SecretID from the environment
  4. Combine both to get a token from Vault on startup
  5. Periodically renew said token.

Or, in picture form:

[Diagram: vault token flow]

So let’s see how we can go about doing this.

0. Setup Vault

This time we will use Vault in dev mode, which means that it starts unsealed, and we can specify the root token as something simple. On the downside, there is no persistence; restarting the container gives you a blank slate. If you would prefer to use Vault with persistent storage, see Section 2 of the previous post:

docker run \
    -d --rm \
    --name vault_demo \
    --cap-add=IPC_LOCK \
    -e VAULT_DEV_ROOT_TOKEN_ID=vault \
    -p 8200:8200 \
    vault

As in the previous article, we’ll export the VAULT_TOKEN and VAULT_ADDR variables so we can use the Vault CLI:

export VAULT_ADDR="http://localhost:8200"
export VAULT_TOKEN="vault"

For our last setup step, we need to enable the AppRole auth method:

vault auth enable approle

1. Create A Role

There are many parameters you can specify when creating a role, but for our demo_app role we are going to skip most of them, providing just token_ttl and token_max_ttl.

vault write auth/approle/role/demo_app \
    token_ttl=20m \
    token_max_ttl=1h

2. Request A Secret ID

Vault has two modes of working, called Push and Pull. Push mode is when you generate the secret_id yourself and store it against the role; Pull mode is when you ask Vault to generate the secret_id against the role and return it to you. I favour the Pull model, as it is one less thing to worry about (how to generate a secure secret_id).
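For contrast, a Push-mode version would look something like this, where you supply a secret_id you generated yourself (the value below is just an example):

vault write auth/approle/role/demo_app/custom-secret-id secret_id="a-securely-generated-value"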

We have to specify -force (shorthand -f) as we are writing a secret which has no key-value pairs. As we are using the CLI, I have also specified -field=secret_id, which makes the command output only the secret_id’s value, rather than the whole object.

export SECRET_ID=$(vault write -f -field=secret_id auth/approle/role/demo_app/secret-id)

echo $SECRET_ID
#> 119439b3-4eec-5e5b-ce85-c1d00f046234

3. Write Secret ID to Environment

This step would be done by another process, such as Terraform when provisioning your environment, or Spinnaker when deploying your containers.

As we are just using the CLI, we can pretend that $SECRET_ID represents the value stored in the environment.
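For example, if the application were deployed as a container, the provisioning step might inject it like so (the image name is hypothetical):

docker run -e SECRET_ID=$SECRET_ID my-org/demo-app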

4. Fetch Role ID

Next, assuming the role of the developer writing an app, we need to fetch the role_id for our demo_app role. As with fetching the secret_id, we specify -field=role_id so we only get that part of the response printed:

export ROLE_ID=$(vault read -field=role_id auth/approle/role/demo_app/role-id)

echo $ROLE_ID
#> 723d66af-3ddd-91c0-7b35-1ee51a30c5b8

5. Embed Role ID in Code

We’re on the CLI, and have saved the role_id into the $ROLE_ID variable, so nothing more to do here!

Let’s create a simple C# Console app to demo this with:

dotnet new console --name VaultDemo
dotnet new sln --name VaultDemo
dotnet sln add VaultDemo/VaultDemo.csproj
dotnet add VaultDemo/VaultDemo.csproj package VaultSharp

We also installed the VaultSharp NuGet package, which takes care of doing the client token fetching for you - but we will go through what this is doing internally later!

class Program
{
  // Step 5: the role_id is baked into the application as a constant
  private const string RoleID = "723d66af-3ddd-91c0-7b35-1ee51a30c5b8";

  static async Task Main(string[] args)
  {
    // Combine the embedded role_id with the secret_id provided by the environment
    var auth = new AppRoleAuthenticationInfo(
      RoleID,
      Environment.GetEnvironmentVariable("SECRET_ID")
    );

    var client = VaultClientFactory.CreateVaultClient(
      new Uri("http://localhost:8200"),
      auth
    );

    // Write a value into the token's cubbyhole, then read it back
    await client.CubbyholeWriteSecretAsync("test/path", new Dictionary<string, object>
    {
      { "Name", "I'm a secret Name!" }
    });

    var secrets = await client.CubbyholeReadSecretAsync("test/path");
    Console.WriteLine(secrets.Data["Name"]);
  }
}

6. Deploy!

As we’re running locally, nothing to do here, but if you want, imagine that you created a docker container or baked an AMI and deployed it to the cloud or something!

7. Run / On Start

As we’ve already saved the SECRET_ID into an environment variable, we can just run the application:

dotnet run --project VaultDemo/VaultDemo.csproj
#> I'm a secret Name!

So what did the application do?

When run, the application used both the role_id from the constant and the secret_id environment variable to call Vault’s Login method. An equivalent curl command would be this:

curl -X POST \
    --data '{ "role_id":"723d66af-3ddd-91c0-7b35-1ee51a30c5b8", "secret_id":"119439b3-4eec-5e5b-ce85-c1d00f046234" }' \
    http://localhost:8200/v1/auth/approle/login

This will spit out a single line of json, but if you have jq in your path, you can prettify the output by appending | jq .:

{
  "request_id": "37c0e057-6fab-1873-3ec0-affaace26e76",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": null,
  "wrap_info": null,
  "warnings": null,
  "auth": {
    "client_token": "c14f5806-aff2-61b6-42c2-8920c8049b6c",
    "accessor": "aef3d4f4-d279-bcda-8d9c-2a3de6344975",
    "policies": [
      "default"
    ],
    "metadata": {
      "role_name": "demo_app"
    },
    "lease_duration": 1200,
    "renewable": true,
    "entity_id": "34b1094b-28d4-1fb0-b8f6-73ad28d80332"
  }
}

The line we care about is client_token in the auth section. The value is used to authenticate subsequent requests to Vault.

For instance, in the C# app we used the Cubbyhole backend to store a Name. The equivalent curl commands would be:

export VAULT_TOKEN="c14f5806-aff2-61b6-42c2-8920c8049b6c"

# vault write cubbyhole/test/path name="Another manual secret"
curl -X POST \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --data '{ "Name": "Another manual secret" }' \
    http://localhost:8200/v1/cubbyhole/test/path

# vault read cubbyhole/test/path
curl -X GET \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    http://localhost:8200/v1/cubbyhole/test/path

So why use the client library if it’s just HTTP calls? Simple - by using VaultSharp (or equivalent) we get token auto renewal handled for us, along with working APIs; no more guessing and head-scratching while trying to work out the proper HTTP call to make!

What Next?

Read up on what you can do with Roles - such as limiting token and secret lifetimes, usage counts, etc.
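As a sketch of the kind of tightening that is possible, a more locked-down demo_app role might look something like this (the extra parameter values are illustrative; check the AppRole documentation for your Vault version):

vault write auth/approle/role/demo_app \
    token_ttl=20m \
    token_max_ttl=1h \
    secret_id_ttl=60m \
    secret_id_num_uses=5 \
    token_num_uses=10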

Next article will probably cover Vault’s Policies.

vault, security, microservices

---

Fixing Docker volume paths on Git Bash on Windows

18 Jun 2018

My normal development laptop runs Windows, but like a lot of developers, I make huge use of Docker, which I run under Hyper-V. I also heavily use the Git Bash terminal on Windows.

Usually, everything works as expected, but I was recently trying to run an ELK (Elasticsearch, Logstash, Kibana) container, and needed to pass in an extra configuration file for Logstash. This caused me a lot of trouble, as nothing was working as expected.

The command I was running is as follows:

docker run \
    -d --rm \
    --name elk_temp \
    -p 5044:5044 \
    -p 5601:5601 \
    -p 9200:9200 \
    -v logstash/app.conf:/etc/logstash/conf.d/app.conf \
    sebp/elk

But this had the interesting effect of mounting app.conf in the container as an (empty) directory, rather than doing the useful thing of mounting it as a file. Hmm. I realised the issue was Git Bash transforming the path to Windows style, but all the workarounds I tried failed:

# single quotes
docker run ... -v 'logstash/app.conf:/etc/logstash/conf.d/app.conf'
# absolute path
docker run ... -v /d/dev/temp/logstash/app.conf:/etc/logstash/conf.d/app.conf
# absolute path with // prefix
docker run ... -v //d/dev/temp/logstash/app.conf:/etc/logstash/conf.d/app.conf

In the end, I found a way to switch off the path conversion in MSYS (which Git Bash is based on):

MSYS_NO_PATHCONV=1 docker run \
    -d --rm \
    --name elk_temp \
    -p 5044:5044 \
    -p 5601:5601 \
    -p 9200:9200 \
    -v logstash/app.conf:/etc/logstash/conf.d/app.conf \
    sebp/elk

And voilà, the paths get passed through correctly, and I can go back to hacking away at Logstash!
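As an aside, if you hit this often, exporting the variable once should apply it to every subsequent command in the session, rather than prefixing each one:

export MSYS_NO_PATHCONV=1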

git, docker, bash, windows

---

Managing Postgres Connection Strings with Vault

17 Jun 2018

One of the points I made in my recent NDC talk on 12 Factor microservices was that you shouldn’t store sensitive data, such as API keys, usernames, and passwords, in environment variables.

Don’t Store Sensitive Data in the Environment

My reasoning is that when you access Environment Variables on Heroku’s platform, you are actually accessing some (probably) secure key-value store, rather than actual environment variables.

While you can use something like Consul’s key-value store for this, it’s not much better as it still stores all the values in plaintext, and has no auditing or logging.

Enter Vault

Vault is a secure secret management application, which can not only store static values, but also generate credentials on the fly and automatically expire them after use or after a time period. We’re going to look at setting up Vault to generate Postgres connection strings.

What you’ll need

  1. Docker, as we’ll be running both Vault and Postgres in containers
  2. A SQL client (for a GUI, I recommend DBeaver; for a CLI, the psql tool included in the Postgres download is fine)
  3. The Vault executable

What we’ll do

  1. Setup Postgres and create a SQL user for Vault to use
  2. Setup Vault
  3. Setup Vault’s database functionality
  4. Fetch and renew credentials from Vault.

1. Setup a Postgres container

When running on my local machine, I like to use the Alpine variant of the official Postgres container, as it’s pretty small, and does everything I’ve needed so far.

We’ll run a copy of the image, configure it to listen on the default port, and use the super secure password of postgres:

docker run \
  -d \
  --name postgres_demo \
  -p 5432:5432 \
  -e 'POSTGRES_PASSWORD=postgres' \
  postgres:alpine

Next up, we need to create a user for Vault to use when generating credentials. You can execute this SQL in any SQL editor which can connect to Postgres, or use the psql command line interface:

psql --username postgres --dbname postgres   # it will prompt for password
psql> create role VaultAdmin with Login password 'vault' CreateRole;
psql> grant connect on database postgres to vaultadmin;

You can verify this has worked by running another instance of psql as the new user:

psql --username VaultAdmin --dbname postgres   # it will prompt for password

2. Setting up the Vault container

The official Vault container image will by default run in dev mode, which means it will start up unsealed and will use whatever token you specify for authentication. However, it won’t persist any information across container restarts, which is a bit irritating, so instead we will run it in server mode, and configure file storage to give us (semi) persistent storage.

The configuration, when written out and appropriately formatted, looks as follows:

backend "file" {
    path = "/vault/file"
}
listener "tcp" {
    address = "0.0.0.0:8200"
    tls_disable = 1
}
ui = true

We are binding the listener to all interfaces on the container, disabling TLS (don’t do this in production environments!) and enabling the UI. To pass this through to the container, we can set the VAULT_LOCAL_CONFIG environment variable:

docker run \
    -d \
    --name vault_demo \
    --cap-add=IPC_LOCK \
    -p 8200:8200 \
    -e 'VAULT_LOCAL_CONFIG=backend "file" { path = "/vault/file" } listener "tcp" { address = "0.0.0.0:8200" tls_disable = 1 } ui = true' \
    vault server

When we use the Vault CLI to interact with a Vault server, it wants to use TLS by default, but as we are running without TLS, we need to override this. Luckily, it’s just a case of setting the VAULT_ADDR environment variable:

export VAULT_ADDR="http://localhost:8200"

You can run vault status to check you can communicate with the container successfully.

Before we can start configuring secret engines in Vault, it needs initialising. By default, the init command will generate five key shares, of which you will need any three to unseal Vault. The reason for Key Shares is so that you can distribute the keys to different people so that no one person has access to unseal Vault on their own. While this is great for production, for experimenting locally, one key is enough.

vault operator init -key-shares=1 -key-threshold=1

The output will amongst other things give you two lines, one with the Unseal Key, and one with the Initial Root Token:

Unseal Key 1: sk+C4xJihsMaa+DCBHHgoGVozz+dMC4Kd/ijX8oMcrQ=
Initial Root Token: addaaeed-d387-5eab-128d-60d6e92b0757

We’ll need the Unseal key to unseal Vault so we can configure it and generate secrets, and the Root Token so we can authenticate with Vault itself.

vault operator unseal "sk+C4xJihsMaa+DCBHHgoGVozz+dMC4Kd/ijX8oMcrQ="

To make life a bit easier, we can also set an environment variable with our token so that we don’t have to specify it on all the subsequent requests:

export VAULT_TOKEN="addaaeed-d387-5eab-128d-60d6e92b0757"

3. Configure Vault’s Database Secret Engine

First off, we need to enable the database secret engine. This engine supports many different databases, such as Postgres, MSSQL, MySQL, MongoDB and Cassandra, amongst others.

vault secrets enable database

Next, we need to configure how Vault will connect to the database. You will need to substitute the IP address in the connection string for your Docker host IP (in my case, the network is called DockerNAT, and my machine’s IP is 10.0.75.1; yours will probably be different.)

vault write database/config/postgres_demo \
    plugin_name=postgresql-database-plugin \
    allowed_roles="*" \
    connection_url="postgresql://{{username}}:{{password}}@10.0.75.1:5432/postgres?sslmode=disable" \
    username="VaultAdmin" \
    password="vault"

To explain more of the command: We can limit what roles can be granted by this database backend by specifying a CSV of roles (which we will define next). In our case, however, we are using the allow anything wildcard (*).
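If you wanted to be stricter, you could list just the role names you plan to define instead of the wildcard, keeping the other parameters the same (elided here as ...):

vault write database/config/postgres_demo ... allowed_roles="reader"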

Next, we need to define a role which our applications can request. In this case, I am creating a role which only allows reading of data, so it’s named reader. We also specify the default_ttl which controls how long the user is valid for, and the max_ttl which specifies for how long we can renew a user’s lease.

vault write database/roles/reader \
    db_name=postgres_demo \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
        GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
    default_ttl="10m" \
    max_ttl="1h"

vault read database/creds/reader
> Key                Value
> ---                -----
> lease_id           database/creds/reader/15cf95eb-a2eb-c5ba-5111-8c0c48ae30a6
> lease_duration     10m
> lease_renewable    true
> password           A1a-3gkMQpmoh3gbj2aM
> username           v-root-reader-tgl6FSXHZaC5LZOK4q0u-1529138525

We can now use the username and password to connect to Postgres, but only for 10 minutes, after which the user will be deleted. (Note: Vault sets the expiry of the user in Postgres, but will also remove the user when the lease expires.)
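You can see this expiry from the Postgres side too, by querying the pg_roles catalog for Vault’s generated users:

psql --username postgres --dbname postgres -c "select rolname, rolvaliduntil from pg_roles where rolname like 'v-%';"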

Verify the user can connect using PSQL again:

psql --username v-root-reader-tgl6FSXHZaC5LZOK4q0u-1529138525 --dbname postgres

If we want to keep using our credentials, we can run the renew command, passing in the lease_id, which will extend the current lease by the value of default_ttl. You can provide the -increment value to request a different duration extension in seconds, but you cannot go beyond the max_ttl.

vault lease renew database/creds/reader/15cf95eb-a2eb-c5ba-5111-8c0c48ae30a6
# or
vault lease renew database/creds/reader/15cf95eb-a2eb-c5ba-5111-8c0c48ae30a6 -increment 360

Done!

There are a lot more options and things you can do with Vault, but hopefully, this will give you an idea of how to start out.

vault, security, microservices, postgres

---

Writing Conference Talks

15 May 2018

I saw an interesting question on Twitter today:

Hey, people who talk at things: How long does it take you to put a new talk together?

I need like 50 hours over at least a couple of months to make something I don’t hate. I’m trying to get that down (maybe by not doing pictures?) but wondering what’s normal for everyone else.

Source

I don’t know how long it takes me to write a talk - it is usually spread over many weeks/months, worked on as and when I have inspiration. The actual process is something like this:

  1. Think it through

    The start of this is usually with an idea for a subject I like a lot, such as Strong Typing, Feature Toggles, or Trunk Based Development. Where I live I walk everywhere (around 15k to 20k steps per day), which gives me a lot of time to think about things.

  2. Giant markdown file of bullet points which I might want to cover

    I write down all the points that I want to talk about into one markdown file, which I add to over time. I use the github checkbox markdown format (* [ ] some point or other) so I can tick things off later.

  3. Rough order of points at the bottom

    At the bottom of this notes file, I start writing an order of things, just to get a sense of flow. Once this order gets comfortable enough, I stop updating it and start using the real slides file.

  4. Start slides writing sections as I feel like it

    I start with the title slide, and finding a suitable large image for it. This takes way longer than you might imagine! For the rest of the slides, I use a mix of titles, text and hand drawn images.

    I use OneNote and Gimp to do the hand drawn parts, and usually the Google Cloud Platform Icons, as they’re the best looking (sorry Amazon!)

    Attribute all the images as you go. Much easier than trying to do it later.

  5. Re-order it all!

    I talk bits of the presentation through in my head, and shuffle bits around as I see fit. This happens a lot as I write the slides.

  6. Talk it through to a wall

    My wall gets talked to a lot: I talk the presentation through out loud, making note of stumbling points and how long the talk takes, adding speaker notes if needed.

  7. Tweaks and re-ordering

    I usually end up making last minute tweaks and order switches as I figure out how to make something flow better. I am still not happy with some of the transitions in my best talks!

I write all my talks using RevealJS, mostly because I can write my slides as a markdown file and have it rendered in the browser, and partly because I’ve always used it.

To get things like the Speaker Notes view working, you need to be running from a webserver (rather than just from an html file on your filesystem.) For this I use NWS, which is a static webserver for your current working directory (e.g. cd /d/dev/presentations && nws).

Currently, I am trying to work out if I can use Jekyll or Hugo to generate the repository for me, as all the presentations share the same structure, other than the images, slides file, and a customise.css file. Still not sure on how best to achieve what I am after though.

You can see the source for all my talks in my Presentations Repository on github. The actual slides can be seen on my website here, and the videos (if available), I link to here.

productivity, talks, writing

---