
Load balancer for Proxmox cluster

Up until recently, I had been accessing my Proxmox management console directly at https://10.0.78.2:8006. When that node was down, I would try the next one at https://10.0.78.3:8006, and so on…

Wouldn’t it be nice if you could just hit a URL like https://pmx.internal.net and automatically be routed to an available node in the cluster?

I came across HAProxy, which makes this possible.

HAProxy configuration

HAProxy is free, open-source software that provides a high-availability load balancer, which made it a good fit for this task. It also has an official Docker image, which I used to deploy it in my Kubernetes cluster.

This is the configuration I used:

    global
        stats timeout 30s

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

    defaults
        log    global
        mode    http
        option    httplog
        option    dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000

    frontend web
        bind *:443
        mode tcp
        option tcplog
        default_backend pveweb

    backend pveweb
        mode tcp
        balance source
        server opx01 10.0.78.2:8006 check
        server opx02 10.0.78.3:8006 check
        server opx03 10.0.78.4:8006 check
        server opx04 10.0.78.5:8006 check
        server opx05 10.0.78.6:8006 check

    frontend spice
        bind *:3128
        mode tcp
        option tcplog
        default_backend pvespice

    backend pvespice
        mode tcp
        balance source
        server opx01 10.0.78.2:3128 check
        server opx02 10.0.78.3:3128 check
        server opx03 10.0.78.4:3128 check
        server opx04 10.0.78.5:3128 check
        server opx05 10.0.78.6:3128 check

This configuration load balances both the Proxmox web interface (port 8006) and SPICE console connections (port 3128) across the nodes in the cluster. The check keyword enables health checks, so connections are only sent to nodes that are up, and balance source pins each client IP to the same backend node, which keeps a session on one node.
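As a mental model, balance source can be sketched as hashing the client's source IP to a stable index over the backend list. This is only an illustration of the behavior, not HAProxy's exact hash implementation:

```python
# Sketch of "balance source": the client's source IP is hashed to a
# stable index, so the same client always lands on the same node.
import hashlib

SERVERS = ["10.0.78.2", "10.0.78.3", "10.0.78.4", "10.0.78.5", "10.0.78.6"]

def pick_server(client_ip: str, servers: list[str]) -> str:
    """Map a source IP to one backend, deterministically."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return servers[int.from_bytes(digest, "big") % len(servers)]
```

Because the mapping depends only on the source IP and the set of healthy servers, repeated requests from one client keep hitting the same node until that node fails a health check.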

Note

I did not set up proper SSL certificates here. Since HAProxy runs in TCP mode and passes the TLS connection straight through to the nodes, browsers will warn that the nodes' default self-signed certificates do not match the load-balanced hostname.
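If you do want the load balancer itself to present a valid certificate, one alternative (not what I used here; the certificate path below is hypothetical) is to terminate TLS at the HAProxy frontend and re-encrypt toward the nodes:

```
frontend web
    # Hypothetical alternative: terminate TLS at HAProxy with a cert
    # issued for the shared hostname, then re-encrypt to the nodes.
    bind *:443 ssl crt /etc/ssl/private/pmx.pem
    mode http
    default_backend pveweb

backend pveweb
    mode http
    balance source
    # "ssl verify none" accepts the nodes' self-signed certificates.
    server opx01 10.0.78.2:8006 ssl verify none check
    server opx02 10.0.78.3:8006 ssl verify none check
```

The trade-off is that HAProxy then needs its own certificate management, which is why I kept plain TCP passthrough in this post.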

Deploying HAProxy

apiVersion: v1
kind: ConfigMap
metadata:
  name: pmx-proxy
  namespace: network
data:
  haproxy.cfg: |
    global
        stats timeout 30s

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

    defaults
        log    global
        mode    http
        option    httplog
        option    dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000

    frontend web
        bind *:443
        mode tcp
        option tcplog
        default_backend pveweb

    backend pveweb
        mode tcp
        balance source
        server opx01 10.0.78.2:8006 check
        server opx02 10.0.78.3:8006 check
        server opx03 10.0.78.4:8006 check
        server opx04 10.0.78.5:8006 check
        server opx05 10.0.78.6:8006 check

    frontend spice
        bind *:3128
        mode tcp
        option tcplog
        default_backend pvespice

    backend pvespice
        mode tcp
        balance source
        server opx01 10.0.78.2:3128 check
        server opx02 10.0.78.3:3128 check
        server opx03 10.0.78.4:3128 check
        server opx04 10.0.78.5:3128 check
        server opx05 10.0.78.6:3128 check    
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pmx-proxy
  namespace: network
  labels:
    app: pmx-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pmx-proxy
  template:
    metadata:
      labels:
        app: pmx-proxy
    spec:
      containers:
        - name: pmx-proxy
          image: haproxy:bookworm
          ports:
            - containerPort: 443
            - containerPort: 3128
          volumeMounts:
            - name: config-volume
              mountPath: /usr/local/etc/haproxy/haproxy.cfg
              subPath: haproxy.cfg
      volumes:
        - name: config-volume
          configMap:
            name: pmx-proxy

---

apiVersion: v1
kind: Service
metadata:
  name: pmx-proxy
  namespace: network
spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.78.252
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
    - name: spice
      protocol: TCP
      port: 3128
      targetPort: 3128
  selector:
    app: pmx-proxy

Exposing the service

Since this service is for internal use only, I exposed it on my internal network, which I set up in an earlier post: Setting up an internal network and DNS with Kubernetes, traefik, and bind9

Simply adding an A record to my DNS zone config (in my case, for the domain junyi.me) did the trick:

pmx.i     IN  A   10.0.78.252

Now the Proxmox cluster can be accessed at https://pmx.i.junyi.me from my internal network.
