Self-hosting Obsidian LiveSync on Kubernetes with Tailscale

June 17, 2024

Recently, I’ve been using Obsidian for note taking. It’s fast, has a bunch of nice features, a decent vim mode, and rich support for community plugins. I also really like the “file over app” philosophy that the current CEO has written about.

I've also been running Kubernetes on a cluster of Raspberry Pi computers in my house, which I use for all sorts of odds and ends. It hosts my chess tournament website, which I have written about previously. I also use it as a sync server for the incredible atuin, to host a handful of internal-to-the-house apps, and for some other things. The cluster runs k3s on four Pis that I have accumulated over the years, going as far back as a Model 2B.

One thing I like quite a bit about it is that nothing is exposed to the public internet, except the aforementioned tournament website, which is hidden behind a Cloudflare tunnel. Instead, I use a combination of external-mdns and Tailscale to make things accessible outside the cluster as needed.

Obsidian has a plugin for self-hosted LiveSync, which I really appreciate. I like to host these things myself when possible, especially when the cost of failure is low, as it is here. I had some “fun” getting this working with my setup and wanted to write about wiring the pieces together.

CouchDB

I know absolutely nothing about CouchDB. In fact, I hadn’t heard of it before learning that the plugin is based on it, but I did learn that it is written in Erlang, which is great. So to start off, I had to get CouchDB running in Kubernetes.

CouchDB has a Helm chart, but I opted to write my own deployment instead. In general, I prefer to have my own YAML lying around so I can put it in source control and make tweaks later.

In any case, the deployment is pretty straightforward. However, there are a couple of gotchas when setting this up from scratch, especially since the documentation is quite sparse.

First, you need to have an admin user and password ready to go via environment variables. I created them as a Kubernetes secret and referenced them via secretKeyRef, roughly like the sketch below.
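Here’s a minimal sketch of what that deployment looks like. The names, namespace, image tag, and secret keys are illustrative rather than exactly what I’m running; the important part is pulling the admin credentials from a secret via secretKeyRef (COUCHDB_USER and COUCHDB_PASSWORD are the env vars the official CouchDB image expects).

```yaml
# Sketch of a single-replica CouchDB Deployment; names and versions are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: couchdb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: couchdb
  template:
    metadata:
      labels:
        app: couchdb
    spec:
      containers:
        - name: couchdb
          image: couchdb:3.3
          ports:
            - containerPort: 5984
          env:
            # Admin credentials come from a Secret rather than being inlined here.
            - name: COUCHDB_USER
              valueFrom:
                secretKeyRef:
                  name: couchdb-admin
                  key: username
            - name: COUCHDB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: couchdb-admin
                  key: password
```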

Second, after everything has started, you need to go into the running pod and manually create the required databases. This is discussed in the setup documentation, but there are a few additional caveats:

  1. The curl commands in the setup documentation will not work as written, because CouchDB will only allow you to run them if you are an administrator. You need to authenticate your requests; I used the _session cookie-based auth and managed to create the required databases (see the sketch after this list).
  2. Setting up CouchDB to operate as a single-node installation isn’t as straightforward as I had hoped. The recommended way to do it is to publish your own Docker image with a local.ini file bundled inside it that sets single_node=true under the [couchdb] section. This is probably the best way to do it, but I just configured a service and a local mDNS ingress and used the management GUI to finish the single-node setup manually.
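For the first point, the flow that worked for me looked roughly like the following. The hostname and credentials are placeholders; the database names are the system databases the CouchDB setup documentation asks for.

```bash
# Get a session cookie as the admin user; the cookie is written to cookies.txt.
# Hostname and credentials are illustrative.
curl -c cookies.txt -X POST http://couchdb.local:5984/_session \
  -H 'Content-Type: application/json' \
  -d '{"name": "admin", "password": "changeme"}'

# Re-run the documented database-creation commands with the session cookie attached.
curl -b cookies.txt -X PUT http://couchdb.local:5984/_users
curl -b cookies.txt -X PUT http://couchdb.local:5984/_replicator
```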

Tailscale

Now that we’ve got CouchDB set up, we need some way to communicate with it from Obsidian. One thing that is quite important is that we are able to do so via HTTPS; without it, I wouldn’t be able to sync to and from my phone. Tailscale is an amazing piece of technology. It wraps around WireGuard and can make a mesh network out of any number of devices. My Pi cluster, computer, and phone are all part of the same Tailscale mesh network, so I can do all sorts of cool stuff like send files between devices or ssh in from anywhere without having machines exposed to the public internet.

One relatively recent addition to Tailscale is the Kubernetes operator. This does a couple of nice things for us, like being able to expose a service directly as a “machine” in your Tailscale mesh network (aka tailnet), or even an entire subnet router. I have found that selectively exposing services is the most straightforward approach, since it can be done with annotations alone.
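As a quick illustration, exposing a service onto the tailnet looks roughly like this; the service name and hostname are placeholders, but the annotations are the operator’s documented knobs.

```yaml
# Sketch: the Tailscale operator picks up Services annotated with
# tailscale.com/expose and proxies them onto the tailnet.
apiVersion: v1
kind: Service
metadata:
  name: couchdb
  annotations:
    tailscale.com/expose: "true"
    tailscale.com/hostname: "couchdb"   # optional: choose the tailnet machine name
spec:
  selector:
    app: couchdb
  ports:
    - port: 5984
      targetPort: 5984
```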

Unlike with CouchDB, I used the Helm chart for this one. The Kubernetes setup is quite complicated, with a lot of moving parts (service accounts, roles and rolebindings, custom resource definitions, etc.). I had tried using the raw YAML directly for a while, but the operator is still somewhat early and things have moved around a bit since I started trying it out.

In any case, the documentation for the simple case on Tailscale’s website is pretty straightforward. There are some additional steps needed to get HTTPS set up for a tailnet. The main thing for us is that, in order to use HTTPS, we have to know the full tailnet name (something like couchdb.tailnet-name.ts.net). This is important when actually setting up LiveSync, since the tailscale cert command will automatically generate a certificate for that full name, and HTTPS requests will then work as expected.

Additionally, annotating a Kubernetes service wasn’t sufficient to generate a certificate. In order to get HTTPS traffic working, I had to use a dedicated ingress, as outlined in the Tailscale documentation. I also had to shell into the pod running Tailscale for that ingress and run tailscale cert by hand to generate a certificate. My guess is that I had spent too much time mucking around and something had gone sideways.
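The ingress I ended up with looks roughly like the sketch below. The backend service name and the tls host (“couchdb”) are placeholders; the operator expands the host to couchdb.<tailnet-name>.ts.net once the certificate exists.

```yaml
# Sketch of a Tailscale Ingress fronting CouchDB. Setting ingressClassName to
# "tailscale" hands the Ingress to the operator; the tls host is the short
# machine name on the tailnet.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: couchdb
spec:
  ingressClassName: tailscale
  defaultBackend:
    service:
      name: couchdb
      port:
        number: 5984
  tls:
    - hosts:
        - couchdb
```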

Obsidian LiveSync

At this point, there’s a CouchDB instance running in our cluster that is accessible over HTTPS to all nodes in our tailnet, so we are ready to set up the LiveSync plugin. One really nice thing about the plugin is that once you connect it to the CouchDB instance, it can automatically apply the appropriate database configuration, CORS settings, and so on.
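Before pointing the plugin at it, a quick sanity check from any device on the tailnet is just to curl the root endpoint over HTTPS (hostname illustrative):

```bash
# Should return CouchDB's welcome JSON if the tailnet name, ingress, and
# certificate are all wired up correctly.
curl https://couchdb.<tailnet-name>.ts.net/
```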

From there, everything should be set up and sync can be enabled. The URI will be the full HTTPS tailnet name given to our CouchDB ingress. There are a couple of other really nice things about this setup:

  • Each vault goes to its own CouchDB database, which means it is possible to sync multiple vaults simultaneously through the same instance.
  • The “file over app” mentality means that the files are distributed onto each device without being directly dependent on the database. If something goes sideways with CouchDB, it can just be deleted and rebuilt from the Obsidian vault directly.