Manage Kubernetes clusters from OpenBSD

2020-05-08

This should work with OpenBSD 6.7. I'm writing this while the source tree is locked for release, so even though I use -current, this is as close as -current gets to -release.

Update 2020-06-05: we now have a port for kubectl. So, at least in -current things get a bit easier.

Intro

Some of us have to suffer the pain of trendy tech and buzzwords even when they don't provide much benefit. But hey! We have to be cool kids playing with cool tech, right?

Nowadays, it's containers all the way down. As I like to say, this solves some problems and brings others, but I digress and this can become a rant quicker than you think.

In this article I want to talk about how I manage work infrastructure (all cloudy and containery) from the comfort of my OpenBSD-current workstation.

Objective

Before I tried all this I had a Linux VM running on vmd(8) so I could have all the command line tools to work with Google Cloud Platform (from now on gcp) and Google Kubernetes Engine (from now on gke), which are the cloudy and containery providers we use at work.

My goal was to have all the needed tools working on OpenBSD so I do not have to fire up the VM, and avoid the hassle of moving YAML files around.

In my case I need these CLI tools: gcloud, kubectl, kustomize, fluxctl and kubeseal.

Luckily, there's a port for the Google Cloud SDK, and the others are written in Go and can be compiled for OpenBSD (with some tweaks).

Google Cloud SDK

This is not the most used tool for me, but it is essential as it provides authentication for all the others. As I said, there's a port for it, so installing it is as simple as:

$ doas pkg_add google-cloud-sdk

After that one needs to log in. Execute this command and follow the instructions:

$ gcloud init

More info here

If you manage more than one Google Cloud Project (as I do), the configuration files are placed on ~/.config/gcloud/configurations/.

You'll see there's a config_default file. You can copy that to config_whatever and edit the file (it's in ini format) to fit your needs. Later on you can change projects with:

$ gcloud config configurations activate whatever
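
You can check which configurations exist and which one is currently active with:

$ gcloud config configurations list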

kubectl

There's no port for kubectl on 6.7 (yet; if you want to step in, I promise to test it, give feedback and maybe even commit it!), but it can be compiled and installed manually. Update: we now have a port on -current thanks to Karlis Mikelsons and kn@.
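
On -current that should now be as simple as installing the package (I haven't needed the package myself, so treat this as the obvious guess):

$ doas pkg_add kubectl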

I assume that you have a Go environment working.

At first I tried to go the easy route, as some devs (abieber@ and kn@) told me that it was working; maybe this does the trick for you:

$ go get -u github.com/kubernetes/kubernetes/cmd/kubectl

Unfortunately it did not for me. I had to delete some old stuff in $GOPATH/src that I think was outdated and that the -u flag did not handle correctly for some reason. After that it compiled and installed perfectly into $GOPATH/bin. If you do not use gke as a provider you're all set here, but (there's always a but) after getting the credentials (more on that later) I got this error:

error: no Auth Provider found for name "gcp"

For some reason it seems the auth provider I need fails to compile and gives no error at all.

So, to solve this I took a peek at the FreeBSD port to see how they do things. Long story short, I downloaded the stable version they use in the port and used the same parameters they use to compile. Basically, get the source tarball for 1.18.2 (at the time of writing), then go to kubernetes-1.18.2/cmd/kubectl and compile with these options:

go build -ldflags="-X k8s.io/component-base/version.gitMajor=1 -X k8s.io/component-base/version.gitMinor=18 -X k8s.io/component-base/version.buildDate=$(date +'%Y-%m-%dT%H:%M:%SZ') -X k8s.io/component-base/version.gitCommit=\"\" -X k8s.io/component-base/version.gitVersion=v1.18.2 -X k8s.io/client-go/pkg/version.gitVersion=v1.18.2"
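
For reference, fetching the source and installing the result looked roughly like this for me (the URL is the GitHub tag archive and the destination is just where I keep Go binaries, adjust to taste):

$ ftp -o kubernetes-1.18.2.tar.gz https://github.com/kubernetes/kubernetes/archive/v1.18.2.tar.gz
$ tar xzf kubernetes-1.18.2.tar.gz
$ cd kubernetes-1.18.2/cmd/kubectl
$ go build -ldflags="..."         # the full -ldflags line from above
$ cp kubectl $GOPATH/bin/
$ kubectl version --client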

I have the impression that the only one really needed is the last -X, but I couldn't be bothered to check further. So one can get the configuration for the auth provider as usual, right?

gcloud container clusters get-credentials my-cluster-name

Wrong. For some reason this does not work. The error message urges you to use "application default credentials", so a couple more steps are needed:

gcloud config set container/use_application_default_credentials true
gcloud auth application-default login

And now finally kubectl is working. You'll have to repeat these last three steps if you have more than one project or cluster to manage.
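
For a second project that boils down to something like this (the configuration, cluster and zone names here are made up):

$ gcloud config configurations activate whatever
$ gcloud config set container/use_application_default_credentials true
$ gcloud auth application-default login
$ gcloud container clusters get-credentials my-other-cluster --zone europe-west1-b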

kustomize

If you have to suffer Kubernetes and don't know about kustomize, take a look; you'll thank me later.

It's out of the scope of this article to explain what it is and how to use it (which is a fancy way of saying RTFM).

There's no port for this one either, but it's really easy, just "go get" it:

GO111MODULE=on go install sigs.k8s.io/kustomize/kustomize/v3
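
As a quick taste of why it's worth it, a typical run pipes the rendered manifests straight into kubectl (assuming your kustomization lives in overlays/production):

$ kustomize build overlays/production | kubectl apply -f -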

fluxctl

Again, no port for this one either. I had to use the same technique as with kubectl, because the "go get" was failing with a type mismatch in one of the dependencies, k8s.io/client-go/transport/round_trippers.go.

I took a quick look at the code, but the offending lines had been there since 2016, so I avoided the potential rabbit hole and went for the easy ride.

Download the latest tarball (1.19.0 at the time of writing), go to flux-1.19.0/cmd/fluxctl and then go build.
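
In practice something like this (again, the URL is just the GitHub tag archive, double-check it against the project's releases):

$ ftp -o flux-1.19.0.tar.gz https://github.com/fluxcd/flux/archive/1.19.0.tar.gz
$ tar xzf flux-1.19.0.tar.gz
$ cd flux-1.19.0/cmd/fluxctl
$ go build
$ cp fluxctl $GOPATH/bin/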

That went flawlessly.

kubeseal

This one is quite nice for managing sensitive data. It keeps the data in the source repo encrypted, and it can only be decrypted by the controller installed on the Kubernetes cluster. Again, it's out of the scope ... blah blah ...

Really easy one. Just "go get" it and be happy:

go get -u github.com/bitnami-labs/sealed-secrets/cmd/kubeseal
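
Once the controller is running in the cluster and kubectl points at it, sealing a secret is a one-liner (assuming you have a plain Secret manifest in secret.yaml):

$ kubeseal --format yaml < secret.yaml > sealed-secret.yaml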

Conclusion

And finally, I can use all those wonderful commands to manage that fantastic infrastructure from OpenBSD.

To be honest, at least they do a good job of working with each other and with other classic tools, which means they play quite nicely with the shell's pipeline and redirection style of composition.

I really doubt there are many OpenBSD users managing Kubernetes clusters out there, but maybe this could be useful to somebody.

Have any comments ? Send an email to the comments address.