
Setting up an internal network and DNS with Kubernetes, traefik, and bind9

Having an internal network and DNS service is common practice in corporate networks: it completely separates internal from external network traffic and provides the convenience of domain names for internal services.

With a Kubernetes cluster at home, it is pretty simple to replicate this setup. In my case, it provided the following benefits:

  1. Use of domain names for internal services
  2. HTTPS traffic between internal services
  3. Easy to add new services
  4. No need to worry about IP address depletion (by using a reverse proxy)

Prerequisites

  1. A Kubernetes cluster
  2. A domain name pointing to your cluster ingress controller
  3. traefik installed in your cluster

Configure traefik

If you installed traefik using its Helm chart (as bundled with k3s), you can simply apply the following HelmChartConfig to your cluster:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      internal:
        port: 8081
      internalsecure:
        port: 8043
        tls:
          enabled: true    

This will create the following endpoints that we can later refer to in ingress resources:

  1. internal: for HTTP traffic
  2. internalsecure: for HTTPS traffic

They will be separate from the default traefik endpoints web and websecure, which are typically used for external traffic.

We also need to add a service to expose the internal traefik endpoints:

apiVersion: v1
kind: Service
metadata:
  name: traefik-internal
  namespace: kube-system
spec:
  selector:
    app.kubernetes.io/name: traefik
  ports:
    - name: internal
      protocol: TCP
      port: 80
      targetPort: 8081
    - name: internal-secure
      protocol: TCP
      port: 443
      targetPort: 8043
  type: LoadBalancer
  loadBalancerIP: <internal-ip>

The loadBalancerIP will later be referenced in the DNS configuration.

Configure bind9

For the internal DNS service, I chose bind9, which can easily be deployed on Kubernetes.

Sample configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: bind9-cm
  namespace: network
data:
  named.conf: |
    options {
        directory "/var/cache/bind";
        recursion yes;
        allow-recursion { any; };
        allow-query { any; };
        allow-query-cache { any; };
        allow-transfer { none; };
        allow-update { none; };
        forwarders {
          8.8.8.8;
          1.1.1.1;
        };
    };

    zone "junyi.me" {
        type master;
        file "/etc/bind/zones/junyi.me.db";
        allow-query { any; };
    };    

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: bind9-zone-cm
  namespace: network
data:
  junyi.me.db: |
    $TTL 86400
    @ IN SOA ns1.junyi.me. admin.junyi.me. (
        2021072101 ; Serial
        3600 ; Refresh
        1800 ; Retry
        604800 ; Expire
        86400 ; Minimum TTL
    )

    ; Nameserver
    @       IN  NS  ns1.junyi.me.
    
    ; external domain
    @       IN  A   <external-ip>
    *       IN  A   <external-ip>

    ; internal domain
    i     IN  A   <internal-ip>
    *.i     IN  A   <internal-ip>

    ; Define the nameserver itself
    ns1     IN  A   <external-ip>    

Since I already own the domain name junyi.me, which points to my cluster’s ingress controller (traefik) through Cloudflare’s DNS, I simply reused it for internal domain names as well. This has several benefits:

  1. I can use the same SSL certificates for my services, so the same services/ingresses can be used internally and externally over HTTPS
  2. The same domain names work from both inside and outside my home network, so I (or my browser) don’t have to remember different names for the same services

This will tell bind9 to resolve *.i.junyi.me to the internal IP, and *.junyi.me to the external IP.
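As a sanity check of that mapping, here is a small illustrative Python sketch (not part of the deployment; the IP values and the `resolve` helper are hypothetical stand-ins) that mimics how the zone file above answers A-record queries:

```python
# Illustrative only: mimics the A-record logic of the zone file above.
# The IPs are hypothetical stand-ins for <internal-ip> / <external-ip>.
INTERNAL_IP = "192.168.1.240"
EXTERNAL_IP = "192.168.1.241"

def resolve(name: str) -> str:
    """Return the IP the junyi.me zone above would answer for `name`."""
    name = name.rstrip(".")
    if name == "ns1.junyi.me":
        return EXTERNAL_IP                                 # ns1 IN A <external-ip>
    if name == "i.junyi.me" or name.endswith(".i.junyi.me"):
        return INTERNAL_IP                                 # i / *.i IN A <internal-ip>
    if name == "junyi.me" or name.endswith(".junyi.me"):
        return EXTERNAL_IP                                 # @ / * IN A <external-ip>
    raise LookupError(f"not in zone: {name}")

print(resolve("blog.i.junyi.me"))  # internal IP
print(resolve("blog.junyi.me"))    # external IP
```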

Note

“External IP” here does not mean the publicly accessible IP address. It is the service IP address (like loadBalancerIP) of traefik that is reachable within the private network.

Deploy the bind9 service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bind9-deployment
  namespace: network
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bind9
  template:
    metadata:
      labels:
        app: bind9
    spec:
      containers:
      - name: bind9
        image: ubuntu/bind9:9.18-22.04_beta
        volumeMounts:
        - name: bind9-config
          mountPath: /etc/bind
        - name: bind9-zone-config
          mountPath: /etc/bind/zones
        ports:
        - containerPort: 53
          protocol: TCP
        - containerPort: 53
          protocol: UDP
      volumes:
      - name: bind9-config
        configMap:
          name: bind9-cm
      - name: bind9-zone-config
        configMap:
          name: bind9-zone-cm

---

apiVersion: v1
kind: Service
metadata:
  name: bind9-service
  namespace: network
  labels:
    app: bind9
spec:
  selector:
    app: bind9
  type: LoadBalancer
  loadBalancerIP: <dns-ip>
  ports:
  - protocol: TCP
    port: 53
    targetPort: 53
    name: bind9-tcp
  - protocol: UDP
    port: 53
    targetPort: 53
    name: bind9

Here, <dns-ip> will be the IP address of the DNS service, which can be used like any normal DNS server on your network.

Test it by running something like this:

nslookup blog.i.junyi.me <dns-ip>

I have configured my router to give out this IP address as the default DNS server in DHCP.
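How that is done depends on the router. On a dnsmasq-based router, for example, it could be a one-line fragment like this (hypothetical sketch; DHCP option 6 is the Domain Name Server option):

```
# Hypothetical dnsmasq fragment: advertise the bind9 service as the
# DNS server to DHCP clients (option 6 = Domain Name Server).
dhcp-option=6,<dns-ip>
```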

Deploy a service

To deploy something into the internal network, just take any existing deployment and point its ingress resource at the internal traefik entrypoints. For example, here is the ingress for the staging deployment of this blog:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: internal,internalsecure
spec:
  ingressClassName: traefik
  rules:
    - host: blog.i.junyi.me
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: blog
              port:
                number: 80
  tls:
    - hosts:
        - "blog.i.junyi.me"
      secretName: junyi-me-production

This will make the staging blog service accessible at blog.i.junyi.me from inside the network:

[Screenshot: inside home network]

But it is not accessible from outside the network:

[Screenshot: outside home network]

The ingress resources without the traefik.ingress.kubernetes.io/router.entrypoints annotation will still be accessible from outside the network. This blog is an example.
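For contrast, an externally reachable ingress might look like this (a hypothetical sketch using the same backend service; without the entrypoints annotation, traefik serves it on its default entrypoints):

```yaml
# Hypothetical external ingress: no entrypoints annotation, so traefik
# exposes it via its default (web/websecure) entrypoints.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog-external
spec:
  ingressClassName: traefik
  rules:
    - host: blog.junyi.me
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: blog
              port:
                number: 80
  tls:
    - hosts:
        - "blog.junyi.me"
      secretName: junyi-me-production
```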

Conclusion

I tried several ways to make this setup happen, but this approach seems to be the most straightforward and maintainable.

Looking into the values.yaml in the traefik Helm chart repo helped a lot.
