hedgedoc deployment on k8s

Overview

Recently I wanted to try out hedgedoc, the “real-time collaboration” markdown editor which runs in a browser. Neat tool, I say. It is actually the successor of “CodiMD”, which I had been using for some time. So without further ado, let’s get down to business.

The first thing is to prepare the environment for the deployment. To deploy anything on a Kubernetes cluster, you first need access to one. For convenience I used microk8s on Ubuntu Server 20.04 LTS, but you can use whatever flavor of k8s you want (minikube, kind, k3s and friends will do just as well).

But here I will assume that you are either using what I used, or that you know what you are doing and can handle the equivalent steps on your side.

With the cluster ready, we will deploy some resources on it - mainly the hedgedoc app, but not only. For hedgedoc to work, a MySQL database is needed (other databases work too, but MySQL is the default for the hedgedoc docker image, so it is easiest to just use it). To set up the MySQL DB we will need a secret with the password and a persistent volume claim. Finally, we want to connect to hedgedoc, and we want to do it with style! For that we will use an ingress resource, which will let us reach the app under a real DNS name (OK, this DNS name will only work on your PC or in the local network, but making it publicly available would not require much more effort).
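
To give you the big picture, these are the YAML files we will end up with by the end of this article (the file names match the later sections):

k8s/yamls/hedgedoc/
├── mysql-secret.yaml # Secret with the MySQL root password
├── mysql-pvc.yaml    # PersistentVolumeClaim for the MySQL data
├── mysql.yaml        # MySQL Deployment + ClusterIP Service
└── hedgedoc.yaml     # hedgedoc Deployment + Service + Ingress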

Setup microk8s

Note that the microk8s docs list the system requirements - check them before you start.

I highly recommend following the official getting started guide available on the microk8s website, but if you want, you can just follow these steps (Note: always be sure what commands you put into your CLI, especially if they come from a website, and even more so if they run with root privileges - here indicated by the sudo prefix):

  1. Install microk8s
$ sudo snap install microk8s --classic
  2. Join the group
$ sudo usermod -aG microk8s $USER
$ sudo chown -f -R $USER ~/.kube

Restart or re-enter your session, so the group changes take effect.

$ su - $USER # re-enter the session
  3. Check the status
$ microk8s status --wait-ready
  4. Access kubernetes
$ microk8s kubectl get nodes
  5. Configure the cluster
$ microk8s enable dns ingress storage
$ microk8s status --wait-ready # check if dns, ingress and storage addons are enabled

If there were no errors along the way, you are good to go. But before we move on, it might be a good idea to create an alias for microk8s kubectl, or even use the standalone kubectl package, so that some further configuration is possible.

To create an alias, you can use your ~/.bashrc, ~/.bash_aliases or even /etc/profile to make it system-wide (if you are using a different shell, use its configuration file). Just append alias kubectl='microk8s kubectl' to one of the shell config files, like so:

$ echo "alias kubectl='microk8s kubectl'" >> ~/.bashrc

Note that you have to restart/re-enter your session or just source the config file:

$ source ~/.bashrc

But if you want to use the separate kubectl package, for example to have autocompletion, you have to install it first:

$ sudo snap install kubectl --classic

To use kubectl, get the kubernetes config file from the cluster:

$ microk8s config >> ~/.kube/config
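
Note that the redirect above requires the ~/.kube directory to exist (create it with mkdir -p ~/.kube if it does not). As a quick sanity check of my own, you can then confirm that the standalone kubectl reaches the cluster:

$ kubectl get nodes    # should list your microk8s node in the Ready state
$ kubectl cluster-info # prints the address of the Kubernetes API server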

If you want autocompletion, follow this guide.
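
For reference, a minimal bash setup, assuming the bash-completion package is installed, looks something like this:

$ source <(kubectl completion bash)                     # enable completion in the current shell
$ echo 'source <(kubectl completion bash)' >> ~/.bashrc # make it permanent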

Prepare YAML deployment files

To keep things clear and organized, first create a separate namespace for this deployment:

$ kubectl create ns organizer

Now, create a secret with the password for MySQL. Because k8s secrets, by default, are stored as unencrypted base64-encoded strings, we need to encode the password to base64 like so:

$ echo -n "admin" | base64 # -n to delete any white characters that 'echo' could create
YWRtaW4=

YWRtaW4= is the encoded password. Now, create a secret with it. Create a new YAML file and edit it.

$ mkdir -p k8s/yamls/hedgedoc && cd k8s/yamls/hedgedoc
$ touch mysql-secret.yaml
$ vim mysql-secret.yaml # if you are not familiar with vim, note that you can exit it with ":q!<CR>"
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
  namespace: organizer
type: Opaque
data:
  password: YWRtaW4=
$ kubectl apply -f mysql-secret.yaml
$ kubectl -n organizer get secrets # -n to specify k8s namespace
NAME                  TYPE                                  DATA   AGE
default-token-v4krs   kubernetes.io/service-account-token   3      5d5h
mysql-pass            Opaque                                1      5d5h

You should be able to see your secret listed.
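
As a side note, if you prefer not to base64-encode the password by hand, kubectl can generate an equivalent manifest for you (same secret name and namespace as above; drop the last two flags to create the secret directly instead of printing it):

$ kubectl -n organizer create secret generic mysql-pass \
    --from-literal=password=admin \
    --dry-run=client -o yaml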

Another resource to prepare for MySQL is persistent storage, so that no data is lost when the MySQL pod restarts. This is crucial, because pod storage is ephemeral by default, which means that any data created or modified inside a pod is lost once the pod quits or restarts. To prevent that, persistent volumes are available in k8s. There are many types of them, but here the microk8s storage add-on will be used, and a PersistentVolumeClaim will just work with it:

$ touch mysql-pvc.yaml && vim mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-disk
  namespace: organizer
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
$ kubectl apply -f mysql-pvc.yaml
$ kubectl -n organizer get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
mysql-data-disk   Bound    pvc-974fe0fa-cef4-4a21-94df-8e2f6f88f008   2Gi        RWO            microk8s-hostpath   4d22h
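
If the claim stayed in Pending instead of Bound, the first thing to check would be that the microk8s storage add-on registered its StorageClass (a quick check of my own, not strictly required here):

$ kubectl get storageclass # should list the microk8s-hostpath class shown in the PVC output above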

I hope you are still there, because there is not much left to do. For now, let’s go and create the MySQL DB deployment. We are using the mysql container image to do so. The deployment is specified with the MySQL image, port 3306 under which the DB will be accessible, the environment variable MYSQL_ROOT_PASSWORD set from the secret, and a volume which uses the PVC resource created above. Underneath the deployment, there is also a service resource. It is needed to connect to the DB from other pods on the cluster.

$ touch mysql.yaml && vim mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: organizer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7.33
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - name: mysql
          containerPort: 3306
          protocol: TCP
        volumeMounts:
        - name: mysql-data
          mountPath: "/var/lib/mysql"
          subPath: "mysql"
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-data-disk

---

apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
  labels:
    app: mysql-svc
  namespace: organizer
spec:
  ports:
  - port: 3306
    targetPort: 3306
    protocol: TCP
  selector:
    app: mysql
  type: ClusterIP

$ kubectl apply -f mysql.yaml
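
Before moving on, it does not hurt to verify that the MySQL pod actually comes up and that the service was created (a quick check of my own, not part of the original flow):

$ kubectl -n organizer rollout status deployment/mysql # blocks until the deployment is ready
$ kubectl -n organizer get deploy,svc,po               # deployment, service and pod in one view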

So now, we are only one step away from deploying hedgedoc. We are using the linuxserver/hedgedoc container image, which has some settings to tinker with. But first, we have to create a new database for the application. Because I want to keep things simple here, we will just “log into” the pod with mysql and create the DB manually.

$ kubectl -n organizer get po
NAME                        READY   STATUS    RESTARTS   AGE
mysql-747c646cfc-lg29d      1/1     Running   1          4d20h
$ kubectl -n organizer exec mysql-747c646cfc-lg29d -ti -- bash
root@mysql-747c646cfc-lg29d:/# mysql --user=root --password=$MYSQL_ROOT_PASSWORD 
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 37
Server version: 5.7.33 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database hedgedoc;
Query OK, 1 row affected (0.12 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hedgedoc           |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.07 sec)

mysql> quit;
Bye
root@mysql-747c646cfc-lg29d:/# exit
exit
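
If you would rather skip the interactive session, the same database can be created non-interactively with a single kubectl exec (a sketch of an equivalent one-liner; the single quotes make sure $MYSQL_ROOT_PASSWORD is expanded inside the pod, not in your local shell):

$ kubectl -n organizer exec deploy/mysql -- \
    sh -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "CREATE DATABASE IF NOT EXISTS hedgedoc;"'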

And finally, create hedgedoc deployment:

$ touch hedgedoc.yaml && vim hedgedoc.yaml

Pay attention to the env section of the template spec. We set the following environment variables: DB_HOST points at the mysql-svc service created above, DB_USER and DB_PASS are the database credentials (the password is taken from the mysql-pass secret), DB_NAME is the hedgedoc database we just created, and TZ sets the container time zone.

Beneath the deployment resource specification, there are service and ingress resources. With services we are already familiar, but ingress is something new here. I don’t want to dive too deep into the ingress topic - you can read more about it in the official kubernetes docs - but basically it is a sort of load balancer which will, in this specific case, route traffic to the internal k8s service when a request is made against a specific DNS name, here hedgedoc.carrot.

apiVersion: apps/v1
kind: Deployment
metadata: 
  labels:
    app: hedgedoc
  name: hedgedoc
  namespace: organizer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hedgedoc
  template:
    metadata:
      name: hedgedoc
      labels:
        app: hedgedoc
    spec:
      containers:
      - image: ghcr.io/linuxserver/hedgedoc
        name: hedgedoc
        env:
        - name: DB_HOST
          value: mysql-svc
        - name: DB_USER
          value: root
        - name: DB_PASS
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        - name: DB_NAME
          value: hedgedoc
        - name: TZ
          value: Europe/Warsaw
        ports:
        - name: hedgedoc
          containerPort: 3000
          protocol: TCP

---

apiVersion: v1
kind: Service
metadata:
  name: hedgedoc-svc
  labels:
    app: hedgedoc-svc
  namespace: organizer
spec:
  ports:
  - port: 3000
    targetPort: 3000
    protocol: TCP
  selector:
    app: hedgedoc
  type: ClusterIP

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hedgedoc-ing
  namespace: organizer
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: "hedgedoc.carrot"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hedgedoc-svc
            port:
              number: 3000

And now, let’s deploy:

$ kubectl apply -f hedgedoc.yaml

Debug deployment

Because this is quite a big image, it might take a while before it becomes available (it depends on your internet connection and the computing power of your cluster/worker node). To monitor the progress you can issue:

$ kubectl -n organizer get events
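
Alternatively (more of a personal habit than a required step), you can watch the pods directly or block until the rollout finishes:

$ kubectl -n organizer get po -w                          # watch pod status changes live, Ctrl+C to stop
$ kubectl -n organizer rollout status deployment/hedgedoc # returns once the deployment is ready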

When hedgedoc is ready, you should see something similar to this message:

...
4m4s        Normal   Pulled           pod/hedgedoc-5db74bc4b9-qhzjg   Successfully pulled image "ghcr.io/linuxserver/hedgedoc" in 4m29.485681136s
3m53s       Normal   Created          pod/hedgedoc-5db74bc4b9-qhzjg   Created container hedgedoc
3m52s       Normal   Started          pod/hedgedoc-5db74bc4b9-qhzjg   Started container hedgedoc

If that is not the case, or something went wrong, you can see what is going on with a pod using the kubectl describe command, like so:

$ kubectl -n organizer describe po hedgedoc-5db74bc4b9-qhzjg
# ... a lot of lines here
# at the end of the output:
Events:
  Type    Reason          Age                From     Message
  ----    ------          ----               ----     -------
  Normal  SandboxChanged  11m (x2 over 11m)  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulling         11m                kubelet  Pulling image "ghcr.io/linuxserver/hedgedoc"
  Normal  Pulled          6m45s              kubelet  Successfully pulled image "ghcr.io/linuxserver/hedgedoc" in 4m29.485681136s
  Normal  Created         6m34s              kubelet  Created container hedgedoc
  Normal  Started         6m33s              kubelet  Started container hedgedoc

In this case, you can see that everything looks fine, but if something were wrong, this is the first place to look into. Another situation is that the hedgedoc pod actually starts, but you cannot connect to it or there are errors. There is another useful tool for that - kubectl logs:

$ kubectl -n organizer logs hedgedoc-5db74bc4b9-qhzjg
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.                                                                                                                                       
[s6-init] ensuring user provided files have correct perms...exited 0.                                                                                                                                               
[fix-attrs.d] applying ownership & permissions fixes...                                                                                                                                                             
[fix-attrs.d] done.                                                                                                                                                                                                 
[cont-init.d] executing container initialization scripts...                                                                                                                                                         
[cont-init.d] 01-envfile: executing...                                                                                                                                                                              
[cont-init.d] 01-envfile: exited 0.                                                                                                                                                                                 
[cont-init.d] 10-adduser: executing...                                                                                                                                                                              
usermod: no changes                                                                                                                                                                                                 
                                                                                                                                                                                                                    
-------------------------------------                                                                                                                                                                               
          _         ()                                                                                                                                                                                              
         | |  ___   _    __                                                                                                                                                                                         
         | | / __| | |  /  \                                                                                                                                                                                        
         | | \__ \ | | | () |                                                                                                                                                                                       
         |_| |___/ |_|  \__/                                                                                                                                                                                        
                                                                                                                                                                                                                    
                                                                                                                                                                                                                    
Brought to you by linuxserver.io                                                                                                                                                                                    
-------------------------------------                                                                                                                                                                               
                                                                                                                                                                                                                    
To support LSIO projects visit:                                                                                                                                                                                     
https://www.linuxserver.io/donate/                                                                                                                                                                                  
-------------------------------------                                                                                                                                                                               
GID/UID                                                                                                                                                                                                             
-------------------------------------                                                                                                                                                                               
                                                                                                                                                                                                                    
User uid:    911                                                                                                                                                                                                    
User gid:    911                                                                                                                                                                                                    
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 30-config: executing... 
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
Waiting for Mysql service
[cont-init.d] 30-config: exited 0.
[cont-init.d] 99-custom-scripts: executing... 
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-scripts: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.

Sequelize CLI [Node: 12.22.1, CLI: 5.5.1, ORM: 5.22.3]

Loaded configuration file "../../config/config.json".
No migrations were executed, database schema was already up to date.
2021-04-24T11:13:06.735Z warn:  Neither 'domain' nor 'CMD_DOMAIN' is configured. This can cause issues with various components.
Hint: Make sure 'protocolUseSSL' and 'urlAddPort' or 'CMD_PROTOCOL_USESSL' and 'CMD_URL_ADDPORT' are configured properly.
2021-04-24T11:13:06.739Z warn:  Session secret not set. Using random generated one. Please set `sessionSecret` in your config.json file. All users will be logged out.
2021-04-24T11:13:08.481Z info:  HTTP Server listening at 0.0.0.0:3000

If everything went well, you should see output similar to this.
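
If you want to check the application itself before touching DNS and ingress, a kubectl port-forward against the service is a handy shortcut (an optional extra, not required for the setup below):

$ kubectl -n organizer port-forward svc/hedgedoc-svc 3000:3000
# then open http://localhost:3000 in a browser; Ctrl+C stops the forwarding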

Connect to hedgedoc

So now what’s left is to connect to hedgedoc. At the beginning I mentioned that we will do it using DNS. But because I assume that this deployment is for testing purposes only, we won’t bother with a real domain name for now. Instead, we will use the /etc/hosts file on UNIX-like systems, or its equivalent on Windows. I’m not a Windows guru, so please, if this is the case for you, refer to the article I found on digitalcitizen.life.

In GNU/Linux, the system uses the /etc/hosts file to determine hostnames for given IP addresses. If you query hosts against whatis you will get a brief description:

$ whatis hosts
hosts (5)            - static table lookup for hostnames

In our case, we will use it to assign the DNS name “hedgedoc.carrot” to the cluster IP. Thanks to that, we will be able to GET our webapp in a browser on the system.

$ sudo vim /etc/hosts

Add the entries to this file like so:

127.0.0.1       localhost

# microk8s cluster on carrot
192.168.0.161   carrot          # in form <IP-address>  <DNS-name>
192.168.0.161   hedgedoc.carrot

Remember to put the IP address of your cluster there!
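
Before opening a browser, you can verify the new entry and the ingress from the command line (the hostname below is the example one configured above):

$ getent hosts hedgedoc.carrot    # should print the IP address of your cluster
$ curl -I http://hedgedoc.carrot/ # should return an HTTP response served through the ingress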

Now, let’s try to connect to the app. Just open a browser and type in the address bar:

http://hedgedoc.carrot

Your browser might warn you that this site is not safe or private, but just proceed to hedgedoc.carrot, because this is, after all, your site. You can handle yourself, right? After a successful GET, you should see entries similar to these in the pod’s log:

...
2021-04-24T11:44:42.263Z info:  10.1.5.96 - - [24/Apr/2021:11:44:42 +0000] "GET /build/1624698a0aa3a39f95fec738b8332d75.woff HTTP/1.1" 200 68892 "https://hedgedoc.carrot/build/font-pack.8b60ca65f33929a11b34.css" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36"
2021-04-24T11:44:42.421Z info:  10.1.5.96 - - [24/Apr/2021:11:44:42 +0000] "GET /icons/favicon.ico HTTP/1.1" 200 - "https://hedgedoc.carrot/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36"
2021-04-24T11:44:42.512Z info:  10.1.5.96 - - [24/Apr/2021:11:44:42 +0000] "GET /me HTTP/1.1" 200 22 "https://hedgedoc.carrot/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36"

Summary

So that’s it, you have your own deployment with hedgedoc! Give yourself a pat on the back.

There is actually a lot more to explain about why and how all of this works. Throughout the text I have left a lot of links to the official kubernetes docs, where you can read more about how a kubernetes cluster works.

If you see an error or would like a better explanation of something from this article, go ahead and write to me at webmaster@unexpectd.com. I will be glad for feedback (both positive and negative, if you please ;) ).

Cheers!