Observability Without Honeycomb

15 Mar 2020

Before I start on this, I want to make it clear that if you can buy Honeycomb, you should. Outlined below is how I started to add observability to an existing codebase which already had the ELK stack available, and was unable to use Honeycomb. My hope, in this case, is that I can demonstrate how much value observability gives, and also show how much more value you would get with an excellent tool, such as Honeycomb.

With that said, what is observability, how is it different to logging (and metrics), and why should you care?

If you already know, or would rather skip to the implementation, jump to Implementing with ElasticSearch.

What is it?

The term Observability comes from systems engineering, and refers to being able to determine the behaviour of the entire system from its outputs. In our case, this means the events we emit while processing requests. If we look at (my) hierarchy of monitoring, you can see that it starts with logging, with the next steps up being to output structured logs, then to centralise those logs (no more SSHing into random hosts), and finally to move to events:

logs < structured logs < centralised structured logs < events (observability)

The Problem with Logs and Metrics

With logs, you are writing out many lines as your process runs, which has a few problems, the primary being that you are often looking for data which is absent.

How many times have you been looking through many lines of logs, before realising “oh, the line about cache invalidation is missing, which means…”. It is much harder to notice data which is absent than data which is present, but with an unexpected value.

The second problem is the size of the logs saved. Logs, especially structured ones, contain a lot of useful information, such as request ids, session ids, paths, versions, host data, and anything else interesting. The majority of these fields are repeated for every log entry in the system, and that means they need to be stored and queryable at some point. Often, this is solved by deleting historical data, or sampling at write time, both of which cause data loss, and you are back to trying to notice data which isn’t there.

Metrics exhibit the data loss problem by design. Metrics are deliberately aggregated client-side and then shipped to storage. The numbers you get from metrics can be useful, but when you look at where they come from, it becomes evident that they are just a projection of logs themselves. How many codebases have you read where every logger.Info("...", props); line is followed (or preceded) by stats.increment("some_counter")?

So What is an Event?

An Event is a structured piece of data, with as much information about the current request in it as possible. The difference is that you emit one event per request per service, if you are doing microservices. You create an event at the beginning of handling a request and send it somewhere at the end of the request (whether successful or unsuccessful).

For things like background tasks, you again emit one event per execution; in a well-structured monolith, one event per request per component.

This doesn’t sound like much of a difference, until you start writing your code to add interesting properties to the event, rather than log lines. We want to store as much high cardinality data as possible (so anything unique, or nearly unique), the more of it, the better, as it lets us slice and dice our events by anything at a later time (e.g. by requestid, userid, endpoint paths, url parameters, http method, etc.)

Looking at the caching example mentioned above, before we had this:

func handleRequest(request *Request) {

    now := time.Now()

    if cache[request.UserID] == nil || cache[request.UserID].IsStale(now) {
        logger.Write("Cache miss for user", request.UserID))
        stats.Increment("cache_misses")
        fillCache(cache, request.UserID)
    }

    //...

    stats.set("request_duration", time.Since(now))
}

When the user is in the cache, there is no log line written, which is fine when everything is working. However, when something unexpected happens, like a daylight savings change or sudden clock drift, suddenly no cache entry is ever stale. Your latency decreases (which looks good), your cache_misses counter goes down (which also looks good), but your data is older than you realised, and bad things are happening down the line.

If you were to write this function with observability in mind, you might write something like this instead:

func handleRequest(request *Request) {

    ev := libhoney.NewEvent()
    defer ev.Send()

    now := time.Now()
    ev.Timestamp = now
    ev.Add(map[string]interface{}{
        "request_id": request.ID,
        "request_path": request.Path,
        "request_method": request.method,
        "user_id": request.UserID,
        "cache_size": len(cache),
    })

    userData, found := cache[request.UserID]
    ev.AddField("cache_hit", found)

    if !found || userData.IsStale(now) {
        userData = fillCache(ev, cache, request.UserID)
    }

    ev.AddField("cache_expires", userData.CacheUntil)
    ev.AddField("cache_is_stale", userData.IsStale(now))


    //...

    ev.AddField("request_duration_ms", time.Since(now) / time.Millisecond)
}
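
fillCache isn’t shown in the original, so here is a minimal sketch of the shape it might take. The loadUserData call and UserData type are assumptions made up for illustration; the point is only that the function annotates the same event rather than writing its own log lines:

// Hypothetical sketch: loadUserData and UserData are assumed, not from the post.
func fillCache(ev *libhoney.Event, cache map[string]*UserData, userID string) *UserData {
    start := time.Now()

    userData, err := loadUserData(userID) // assumed upstream lookup
    ev.AddField("cache_fill_error", err != nil)
    ev.AddField("cache_fill_duration_ms", time.Since(start)/time.Millisecond)

    if err != nil {
        return &UserData{} // however the real code handles failure
    }

    cache[userID] = userData
    return userData
}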

The resulting event will contain enough information so that in the future when a bug is introduced, you will be able to look at your events and see that yes, while request_duration_ms has gone down and cache_hit has gone up, all the events have cache_is_stale=false with cache_expires times much older than they should be.

So this is the value add of Observability: Answering Unknown Unknowns; the questions you didn’t know you needed to ask.

Implementing with ElasticSearch

I won’t cover how to set up and manage the ELK stack (my opinion is that you should pay someone else to run it; don’t waste your engineering effort). I will assume you have a way to get a process’s stdout into ElasticSearch somehow (I usually pipe to Filebeat, which forwards to LogStash, which processes and pushes into ElasticSearch).

Besides, the code is the important part. This is all written in Go, but I gather you can do something similar in NodeJS apps etc. We will use Honeycomb’s Libhoney-go package to do the heavy lifting, and supply a custom Transmission. The following is the important part of a custom stdout sender (loosely based on libhoney’s WriterSender):

func (w *JsonSender) Add(ev *transmission.Event) {

    ev.Data["@timestamp"] = ev.Timestamp

    content, _ := json.Marshal(ev.Data)
    content = append(content, '\n')

    w.Lock()
    defer w.Unlock()

    w.Writer.Write(content)

    w.SendResponse(transmission.Response{
        Metadata: ev.Metadata,
    })
}

The key difference here is that I am only serialising the .Data property of the Event, and am inserting an extra @timestamp key to make my event structure conform to the standard LogStash pattern.
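
For context, the rest of the type is tiny. Here is a sketch of how the sender can be defined, under the assumption that it embeds libhoney’s transmission.WriterSender (which is where the Lock, Unlock, and SendResponse calls above come from) and only overrides Add:

import (
    "io"

    "github.com/honeycombio/libhoney-go/transmission"
)

// Sketch only: the embedded WriterSender supplies the mutex, the response
// channel plumbing, and the rest of the transmission.Sender interface.
type JsonSender struct {
    transmission.WriterSender

    // Writer is where serialised events are written; os.Stdout in this post.
    Writer io.Writer
}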

All that remains to do is configure libhoney to use the custom sender:

libhoney.Init(libhoney.Config{
    Transmission: &JsonSender{Writer: os.Stdout},
    Dataset:      "my-api",
})

Running your service, you will start to see JSON objects on stdout which look something like this:

{
    "@timestamp": "2020-03-15T14:51:43.041744363+02:00",
    "request_id": "7f46b313-0a37-457c-9727-b6fdc8c87733",
    "request_path": "/api/user/dashboard",
    "request_method": "GET",
    "user_id": "e6baf70f-9812-4cff-94e9-80a308077955",
    "cache_size": 86,
    "cache_hit": true,
    "cache_expires": "2020-03-15T15:02:17.045625680+02:00",
    "cache_is_stale": false,
    "request_duration_ms": 17
}

There are no message fields for you to read, but you can see everything which happened in this method; whether the user was found in the cache, how big the cache was etc.

Now if we push that into ElasticSearch, we can filter by any of the values in the event; in this case, I filtered by user_id and added columns for all the cache properties.
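
Depending on whether your Kibana query bar is set to KQL or Lucene syntax, that filter is a one-liner; for example, using the user_id from the sample event above:

user_id : "e6baf70f-9812-4cff-94e9-80a308077955"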

Kibana Screenshot

Now everything is in one place; you can slice and dice your data and figure out what exactly is going on. You can even write some metrics off your event queries if you want!

Improvements & Caveats

The main caveat is that pushing this into ElasticSearch is not as good as what you get from Honeycomb; it is just an improvement on logging messages, and it lets you demonstrate the value of observability easily.

Once you’ve demonstrated how useful observability is, the next step is to migrate to Honeycomb and get even more value.

I have written the word Honeycomb a lot in this post (9 times so far), but I want to stress that it is observability we are after; Honeycomb is an implementation detail. It also happens to be the only real observability tooling around (although Lightstep kind of counts).

And let’s not get started on the “3 pillars of observability” bullshit being peddled by other vendors.

Nomad Isolated Exec

29 Feb 2020

One of the many features of Nomad that I like is the ability to run things other than Docker containers. It has built-in support for Java, QEMU, and Rkt, although the latter is deprecated. Besides these inbuilt “Task Drivers” there are community-maintained ones too, covering Podman, LXC, Firecracker and BSD Jails, amongst others.

The one I want to talk about today, however, is called exec. This Task Driver runs any given executable, so if you have an application which you don’t want (or can’t) put into a container, you can still schedule it with Nomad. When I run demos (particularly at conferences), I try to have everything runnable without an internet connection, which means I have to make sure all the Docker containers I wish to run are within a local Docker Registry already, and, well, sometimes I forget. By using exec, I can serve a binary off my machine with no container overheads involved.

Insecurity?

Until recently, I had always considered exec as a tradeoff: I don’t need a docker container, but I lose the isolation of the container, and the application I run has full access to everything on this host.

What I hadn’t realised is that exec actually uses the host operating system’s isolation features via the libcontainer package to contain the application. On Linux, this means using cgroups and a chroot, making the level of isolation roughly the same as a Docker container provides.

When you specify a binary to run, it must be one of the following:

  • An absolute path within Nomad’s chroot
  • A relative path within the Allocation Directory

For instance, running a dotnet core application consists of invoking /usr/bin/dotnet with the relative path of the dll extracted from the artifact:

task "consumer" {
    driver = "exec"

    config {
        command = "/usr/bin/dotnet"
        args = [ "local/Consumer.dll" ]
    }

    artifact {
        source = "http://s3.internal.net/consumer-dotnet.zip"
    }
}

Whereas running a go binary can be done with a path relative to the allocation directory:

task "consumer" {
    driver = "exec"

    config {
        command = "local/consumer"
    }

    artifact {
        source = "http://s3.internal.net/consumer-go.zip"
    }
}

But what happens if we want to run a binary which is not within the default chroot environment used by exec?

Configuring The chroot Environment

By default, Nomad links the following paths into the task’s chroot:

[
    "/bin",
    "/etc",
    "/lib",
    "/lib32",
    "/lib64",
    "/run/resolvconf",
    "/sbin",
    "/usr"
]

We can configure the chroot per Nomad client, meaning we can provision nodes with different capabilities if necessary. This is done with the chroot_env setting in the client’s configuration file:

client {
  chroot_env {
    "/bin"            = "/bin"
    "/etc"            = "/etc"
    "/lib"            = "/lib"
    "/lib32"          = "/lib32"
    "/lib64"          = "/lib64"
    "/run/resolvconf" = "/run/resolvconf"
    "/sbin"           = "/sbin"
    "/usr"            = "/usr"
    "/vagrant"        = "/vagrant"
  }
}

In this case, I have added in the /vagrant path, which is useful as I usually provision a Nomad cluster using Vagrant, and thus have all my binaries etc. available in /vagrant. It means that my .nomad files for the demo have something like this for their tasks:

task "dashboard" {
    driver = "exec"

    config {
        command = "/vagrant/apps/bin/dashboard"
    }
}

This means I don’t need to host a Docker Registry or an HTTP server to expose my applications to the Nomad cluster.

Need Full Access?

If you need full access to the host machine, you can use the non-isolating version of exec, called raw_exec. raw_exec works in the same way as exec, but without using cgroups and chroot. As this would be a security risk, it must be enabled on each Nomad client:

client {
    enabled = true
}

plugin "raw_exec" {
    config {
        enabled = true
    }
}
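
With that enabled, a task looks just like the exec examples earlier, only with the driver swapped; for instance, the dashboard task from before would become:

task "dashboard" {
    driver = "raw_exec"

    config {
        command = "/vagrant/apps/bin/dashboard"
    }
}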

Wrapping Up

One of the many reasons I like Nomad is its simplicity, especially when compared to something as big and complex as Kubernetes. Whenever I look into how Nomad works, I always seem to come away with the feeling that it has been well thought out, and that its flexibility comes from that.

Being able to configure the chroot used by the Nomad clients means I can simplify my various demos further, as I can remove the need to have a webserver for an artifact source. As always, the less accidental complexity you have in your system, the better.

Consul DNS Forwarding in Alpine, revisited

30 Dec 2019

I noticed when running an Alpine-based virtual machine with Consul DNS forwarding set up that sometimes the machine couldn’t resolve *.consul domains, but not in a consistent manner. Inspecting the logs, it looked like the request was being made and responded to successfully, but the result was being ignored.

After a lot of googling and frustration, I was able to track the problem down to a difference (or optimisation) in musl libc, which glibc doesn’t have. From musl libc’s Functional differences from glibc page, we can see under the Name Resolver/DNS section the relevant information:

Traditional resolvers, including glibc’s, make use of multiple nameserver lines in resolv.conf by trying each one in sequence and falling to the next after one times out. musl’s resolver queries them all in parallel and accepts whichever response arrives first.

The machine’s /etc/resolv.conf file has two nameservers specified:

nameserver 127.0.0.1
nameserver 192.168.121.1

The first is our Unbound instance which handles the forwarding to Consul, and the second is the DHCP-set DNS server, in this case, libvirt/qemu’s dnsmasq instance.

When running on a glibc-based system, queries go to the first nameserver, and if that can’t resolve the request, it is then sent to the next nameserver, and so forth. As Alpine Linux uses musl libc, it makes the requests in parallel and uses whichever response comes back first.

sequence diagram, showing parallel DNS requests

When the DHCP DNS server is a network hop away, the latency involved means our resolution usually works, as the queries will hit the local DNS and get a response first. However, when the DHCP DNS is not that far away, for example when it is the DNS server that libvirt runs in the virtual network the machine is attached to, it becomes much more likely to get a response from that DNS server first, causing the failures I was seeing.

The solution to this is to change the setup so that all requests go to Unbound, which can then decide where to send them on to. This also has the additional benefit of making DNS requests work the same on all systems, regardless of whether glibc or musl is used.

sequence diagram, showing all DNS requests going through unbound

Rebuilding DNS Resolution

You can follow the same instructions in my previous Consul DNS forwarding post to set up Consul, as that is already in the right state for us.

Once Consul is up and running, it’s time to fix the rest of our pipeline.
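
Before touching anything else, it is worth confirming that Consul’s own DNS endpoint is answering; assuming the default Consul DNS port of 8600, something like this should return the service address:

dig @127.0.0.1 -p 8600 consul.service.consul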

Unbound

First, install unbound and configure it to start on boot:

apk add unbound
rc-update add unbound

The unbound config file (/etc/unbound/unbound.conf) is almost the same as the previous version, except we also have an include statement, pointing to a second config file, which we will generate shortly:

server:
 verbosity: 1
 do-not-query-localhost: no
 domain-insecure: "consul"
stub-zone:
 name: "consul"
 stub-addr: 127.0.0.1@8600
include: "/etc/unbound/forward.conf"

Dhclient

Next, we install dhclient so that we can make use of its hooks feature to generate our additional unbound config file.

apk add dhclient

Create a config file for dhclient (/etc/dhcp/dhclient.conf), which again is almost the same as the previous post, but this time doesn’t specify prepend domain-name-servers:

option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
send host-name = gethostname();
request subnet-mask, broadcast-address, time-offset, routers,
 domain-name, domain-name-servers, domain-search, host-name,
 dhcp6.name-servers, dhcp6.domain-search, dhcp6.fqdn, dhcp6.sntp-servers,
 netbios-name-servers, netbios-scope, interface-mtu,
 rfc3442-classless-static-routes, ntp-servers;

Now we can write two hooks. The first is an enter hook, which we can use to write the forward.conf file out.

touch /etc/dhclient-enter-hooks
chmod +x /etc/dhclient-enter-hooks

The content is a single statement to write the new_domain_name_servers value into a forward-zone for unbound:

#!/bin/sh

(
cat <<-EOF
forward-zone:
 name: "."
 forward-addr: ${new_domain_name_servers}
EOF
) | sudo tee /etc/unbound/forward.conf
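
As an example, if the DHCP server hands out 192.168.121.1 (libvirt’s dnsmasq in my case), the generated /etc/unbound/forward.conf ends up looking like this:

forward-zone:
 name: "."
 forward-addr: 192.168.121.1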

The second hook is an exit hook, which runs after dhclient has finished writing out all the files it controls (such as /etc/resolv.conf):

touch /etc/dhclient-exit-hooks
chmod +x /etc/dhclient-exit-hooks

The content is a single sed statement to replace the address of nameserver directives written to the /etc/resolv.conf with the unbound address:

#!/bin/sh
sudo sed -i 's/nameserver.*/nameserver 127.0.0.1/g' /etc/resolv.conf
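
After the hook has run, every nameserver line in /etc/resolv.conf points at Unbound, so the file ends up containing something like:

nameserver 127.0.0.1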

It’s worth noting that we could put the content of the enter hook into the exit hook instead, if you would rather.

Finally, we can delete our current resolv.conf and restart the networking service:

rm /etc/resolv.conf # hack due to dhclient making an invalid `chown` call.
rc-service networking restart

Testing

We can now test that we can resolve the three kinds of address we care about:

  • dig consul.service.consul - should return the eth0 ip of the machine
  • dig alpinetest.karhu.xyz - should be resolved by libvirt’s dnsmasq instance
  • dig example.com - should be resolved by an upstream DNS server

Conclusion

This was an interesting and somewhat annoying problem to solve, but it means I now have a more robust setup in my virtual machines. It’s interesting to note that if the DNS server from DHCP were not a local instance, the added network latency would make the whole system function properly most of the time, as the local instance would answer before the remote instance could.

Libvirt Hostname Resolution

22 Dec 2019

I use Vagrant when testing new machines and experimenting locally with clusters, and since moving (mostly) to Linux, I have been using the LibVirt Plugin to create the virtual machines. Not only is it significantly faster than Hyper-V was on Windows, but it also means I don’t need to use Oracle products, so it’s win-win really.

The only configuration challenge I have had with it is setting up VM hostname resolution, and as I forget how to do it each time, I figured I should write about it.

Setup

First I install the plugin so Vagrant can talk to Libvirt.

vagrant plugin install vagrant-libvirt

I also created a single vagrantfile with two virtual machines defined in it, so that I can check that the machines can resolve each other, as well as the host being able to resolve the guests.

Vagrant.configure("2") do |config|
  config.vm.box = "elastic/ubuntu-16.04-x86_64"

  config.vm.define "one" do |n1|
    n1.vm.hostname = "one"
  end

  config.vm.define "two" do |n1|
    n1.vm.hostname = "two"
  end
end

Once vagrant up has finished (run either with --provider libvirt or with VAGRANT_DEFAULT_PROVIDER=libvirt set), connect to one of the machines, and try to ping the other:

$ vagrant ssh one
$ ping two
ping: unknown host two
$ exit

Now that we can see they can’t resolve each other, let’s move on to fixing it.

Custom Domain

The solution is to configure the libvirt network to have a domain name, and then to set the host machine to send requests for that domain to the virtual network.

First, I picked a domain. It doesn’t matter what it is, but I gather using .local will cause problems with other services, so instead, I picked $HOSTNAME.xyz, which is karhu.xyz in this case.

Vagrant-libvirt by default creates a network called vagrant-libvirt, so we can edit it to include the domain name configuration by running the following command:

virsh net-edit --network vagrant-libvirt

And adding the `<domain>` line to the XML which is displayed:

<network ipv6='yes'>
  <name>vagrant-libvirt</name>
  <uuid>d265a837-96fd-41fc-b114-d9e076462051</uuid>
  <forward mode='nat'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:a0:ae:fd'/>
+ <domain name='karhu.xyz' localOnly='yes'/>
  <ip address='192.168.121.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.121.1' end='192.168.121.254'/>
    </dhcp>
  </ip>
</network>

To make the changes take effect, we need to destroy and re-create the network, so first I destroy the vagrant machines, then destroy and restart the network:

vagrant destroy -f
virsh net-destroy --network vagrant-libvirt
virsh net-start --network vagrant-libvirt

Finally, we can re-create the machines, and log in to one to check that they can resolve each other:

$ vagrant up
$ vagrant ssh one
$ ping two
PING two.karhu.xyz (192.168.121.243) 56(84) bytes of data.
$ exit

You can also check that the host can resolve the machine names when querying the virtual network’s DNS server:

$ dig @192.168.121.1 +short one
> 192.168.121.50

Host DNS Forwarding

The host still can’t talk to the machines by name, so we need to tweak the host’s DNS, which means fighting with SystemD. Luckily, we only need to forward requests to a DNS server running on port 53 - if it were on another port, then replacing systemd-resolved, as in my post on Consul DNS forwarding, would be necessary.

Edit /etc/systemd/resolved.conf on the host, to add two lines which instruct it to send DNS requests for the domain picked earlier to the DNS server run by libvirt (dnsmasq):

[Resolve]
-#DNS=
+DNS=192.168.121.1
#FallbackDNS=
-#Domains=
+Domains=~karhu.xyz
#LLMNR=no
#MulticastDNS=no
#DNSSEC=no
#DNSOverTLS=no
#Cache=yes
#DNSStubListener=yes
#ReadEtcHosts=yes

Lastly, restart systemd-resolved for the changes to take effect:

systemctl restart systemd-resolved

Now we can resolve the guest machines by hostname at the domain we picked earlier:

$ ping one.karhu.xyz
PING one.karhu.xyz (192.168.121.50) 56(84) bytes of data.

Done!

Nomad Good, Kubernetes Bad

21 Nov 2019

I will update this post as I learn more (both positive and negative); it is here to be linked to when people ask me why I don’t like Kubernetes, and why I would pick Nomad in most situations if I chose to use an orchestrator at all.

TLDR: I don’t like complexity, and Kubernetes has more complexity than benefits.

Operational Complexity

Operating Nomad is very straightforward. There are very few moving parts, so the number of things which can go wrong is significantly reduced. No external dependencies are required to run it, and there is only one binary to use. You run 3-5 copies in Server mode to manage the cluster and as many as you want in Client mode to do the actual work. You can add Consul if you want service discovery, but it’s optional. More on that later.

Compare this to operating a Kubernetes cluster. There are multiple Kubernetes orchestration projects, tools, and companies to get clusters up and running, which should be an indication of the level of complexity involved. Once you have the cluster set up, you need to keep it running. There are so many moving parts (Controller Manager, Scheduler, API Server, Etcd, Kubelets) that it quickly becomes a full-time job to keep the cluster up and running. Use a cloud service to run Kubernetes, and if you must use your own infrastructure, pay someone else to manage it. It’s cheaper in the long run. Trust me.

Deployment

Nomad, being a single binary, is easy to deploy. If you want to use Terraform to create a cluster, Hashicorp provides modules for both AWS and Azure. Alternatively, you can do everything yourself, as it’s just keeping one binary running on hosts, and a bit of network/DNS config to get them talking to each other.

By comparison, Kubernetes has a multitude of tools to help you deploy a cluster. Still, while it gives you a lot of flexibility in choice, you also have to hope that the tool continues to exist and that there is enough community/company/documentation about that specific tool to help you when something goes wrong.

Upgrading The Cluster

Upgrading Nomad involves doing a rolling deployment of the Servers and Clients. If you are using the Hashicorp Terraform module, you re-apply the module with the new AMI ID to use, and then delete nodes (gracefully!) from the cluster and let the AutoScaleGroup take care of bringing new nodes up. If you need to revert to an older version of Nomad, you follow the same process.

When it comes to Kubernetes, please pay someone else to do it. It’s not a fun process. The process will differ depending on which cluster management tool you are using, and you also need to think about updates to etcd and managing state in the process. There is a nice long document on how to upgrade etcd.

Debugging a Cluster

As mentioned earlier, Nomad has a small number of moving parts. There are three ports involved (HTTP, RPC and Gossip), so as long as those ports are open and reachable, Nomad should be operable. Then you need to keep the Nomad agents alive. That’s pretty much it.

Where to start for Kubernetes? As many Kubernetes Failure Stories point out: it’s always DNS. Or etcd. Or Istio. Or networking. Or Kubelets. Or all of these.

Local Development

To run Nomad locally, you use the same binary as the production clusters, but in dev mode: nomad agent -dev. To get a local cluster, you can spin up some Vagrant boxes instead. I use my Hashibox Vagrant box to do this when I do conference talks and don’t trust the wifi to work.
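
For reference, the whole local development loop is only a few commands (nomad job init just writes an example.nomad skeleton to play with):

nomad agent -dev              # single node acting as both server and client
nomad job init                # writes an example.nomad skeleton
nomad job run example.nomad   # in a second terminal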

To run Kubernetes locally to test things, you need to install/deploy MiniKube, K3S, etc. The downside to this approach is that the environment is significantly different from your real Kubernetes cluster, and you can end up in a situation where a deployment works in one but not the other, which makes debugging issues much harder.

Features & Choice

Nomad is relatively light on built-in features, which allows you the choice of what features to add, and what implementations of the features to use. For example, it is pretty popular to use Consul for service discovery, but if you would rather use Eureka, or Zookeeper, or even etcd, that is fine, but you lose out on the seamless integration with Nomad that other Hashicorp tools have. Nomad also supports Plugins if you want to add support for your favourite tool.

By comparison, Kubernetes does everything, but like the phrase “Jack of all trades, master of none”, often you will have to supplement the inbuilt features. The downside to this is that you can’t switch off Kubernetes features you are not using, or don’t want. So if you add Vault for secret management, the Kubernetes Secrets are still available, and you have to be careful that people don’t use them accidentally. The same goes for all other features, such as Load Balancing, Feature Toggles, Service Discovery, DNS, etc.

Secret Management

Nomad doesn’t provide a Secret Management solution out of the box, but it does have seamless Vault integration, and you are also free to use any other Secrets As A Service tool you like. If you do choose Vault, you can either use it directly from your tasks or use Nomad’s integration to provide the secrets to your application. It can even send a signal (e.g. SIGINT etc.) to your process when the secrets need re-reading.
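
As a rough sketch of what that integration looks like in a job file (the policy name, secret path, and key below are made up for illustration), the vault and template stanzas handle fetching the secret and signalling the process when it changes:

task "api" {
    driver = "exec"

    vault {
        policies = ["api-read"]
    }

    template {
        data = <<EOF
DB_PASSWORD={{ with secret "secret/data/api" }}{{ .Data.data.password }}{{ end }}
EOF
        destination   = "secrets/app.env"
        env           = true
        change_mode   = "signal"
        change_signal = "SIGINT"
    }
}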

Kubernetes, on the other hand, provides “Secrets”. I put the word “secrets” in quotes because they are not secrets at all. The values are stored encoded in base64 in etcd, so anyone who has access to the etcd cluster has access to all the secrets. The official documentation suggests making sure only administrators have access to the etcd cluster to solve this. Oh, and if you can deploy a container to the same namespace as a secret, you can reveal it by writing it to stdout.
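
To make that concrete: anyone with kubectl access to the namespace can do something like the following (the secret and key names here are made up), and the “protection” evaporates:

kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 --decode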

Kubernetes secrets are not secret, just “slightly obscured.”

If you want real Secrets, you will almost certainly use Vault. You can either run it inside or outside of Kubernetes, and either use it directly from containers via its HTTPS API or use it to populate Kubernetes Secrets. I’d avoid populating Kubernetes Secrets if I were you.

Support

If Nomad breaks, you can either use community support or if you are using the Enterprise version, you have Hashicorp’s support.

When Kubernetes breaks, you can either use community support or find and buy support from a Kubernetes management company.

The main difference here is “when Kubernetes breaks” vs “if Nomad breaks”. The level of complexity in Kubernetes makes it far more likely to break, and that much harder to debug.