I had been a happy user of Firefox’s built-in password manager for years. It had been working seamlessly across my devices, and honestly there was nothing wrong with it.
However, I recently stumbled upon Vaultwarden, and couldn’t resist the urge to try self-hosting my own password manager.
Vaultwarden provides an official Docker image, so I deployed it on my Kubernetes cluster as usual.
Environment
- Kubernetes cluster: v1.33.3+k3s1
- ArgoCD: 9.1.4
- HashiCorp Vault: 0.31.0
- External Secrets Operator: 1.2.0
- CloudNativePG: 0.26.1
- cert-manager: v1.18.0
To be deployed:
- Vaultwarden: 1.35.2
Server deployment
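All of the manifests below live in a dedicated vaultwarden namespace, so that has to exist first. For completeness, here is the one-line manifest (I'll refer to it as a hypothetical namespace.yml later on):

apiVersion: v1
kind: Namespace
metadata:
  name: vaultwarden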
Secrets
Since Vaultwarden can use PostgreSQL as its database backend, I just created a new DB in my existing CloudNativePG cluster.
CREATE DATABASE vaultwarden;
CREATE USER vaultwarden WITH ENCRYPTED PASSWORD 'some_password';
GRANT ALL PRIVILEGES ON DATABASE vaultwarden TO vaultwarden;
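CloudNativePG also supports declarative database management, so the same step could be expressed as a Database resource instead of raw SQL. A rough sketch, assuming the cluster is named central in the postgres namespace (matching the central-rw service used in DATABASE_URL later) and that the vaultwarden role is managed separately, e.g. through the Cluster's managed roles:

apiVersion: postgresql.cnpg.io/v1
kind: Database
metadata:
  name: vaultwarden
  namespace: postgres
spec:
  # Name and owner of the database inside the Postgres cluster
  name: vaultwarden
  owner: vaultwarden
  cluster:
    # CloudNativePG Cluster to create the database in (assumed name)
    name: central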
Then I stored credentials in my Vault and created an ExternalSecret to use them in the Vaultwarden deployment.
vaultwarden/db
{
  "DB_NAME": "vaultwarden",
  "DB_PASSWORD": "some_password",
  "DB_USER": "vaultwarden"
}
I also created OIDC credentials to enable login with Authentik.
vaultwarden/oidc
{
  "SSO_AUTHORITY": "https://auth.junyi.me/application/o/vaultwarden/",
  "SSO_CLIENT_ID": "vaultwarden-client-id",
  "SSO_CLIENT_SECRET": "vaultwarden-client-secret"
}
secrets.yml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: vaultwarden-db
  namespace: vaultwarden
spec:
  secretStoreRef:
    name: vault
    kind: ClusterSecretStore
  target:
    name: vaultwarden-db
  dataFrom:
    - extract:
        key: secret/vaultwarden/db
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: vaultwarden-oidc
  namespace: vaultwarden
spec:
  secretStoreRef:
    name: vault
    kind: ClusterSecretStore
  target:
    name: vaultwarden-oidc
  dataFrom:
    - extract:
        key: secret/vaultwarden/oidc
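Both ExternalSecrets reference a ClusterSecretStore named vault that I already had in place for the rest of the cluster. For reference, a minimal sketch of such a store, where the Vault address and the Kubernetes auth role are assumptions specific to my setup:

apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: vault
spec:
  provider:
    vault:
      # In-cluster Vault address (assumed)
      server: "http://vault.vault.svc.cluster.local:8200"
      # KV v2 secrets engine
      version: "v2"
      auth:
        kubernetes:
          # Kubernetes auth mount and role configured in Vault (assumed)
          mountPath: "kubernetes"
          role: "external-secrets"

Since the ExternalSecrets use dataFrom with extract, every key in the Vault entries (DB_USER, SSO_CLIENT_ID, and so on) ends up as a key in the resulting Kubernetes Secret, which is exactly what the deployment's envFrom relies on later.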
Persistence
Vaultwarden needs persistent storage for its data, such as attachments and RSA keys. I used CephFS for this.
Initially I tried running it without any persistent storage, but ran into an issue: every time the pod restarted, the RSA keys were regenerated, which logged out all clients.
So I gave it a CephFS PVC anyway, even though I don't plan to store any attachments.
pvc.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vaultwarden
spec:
  storageClassName: cephfs-sdvault-sc
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-secret
      namespace: ceph-csi-cephfs
    volumeAttributes:
      "fsName": "sdvault"
      "clusterID": "[CLUSTER_ID]"
      "staticVolume": "true"
      "rootPath": /vaultwarden
    volumeHandle: vaultwarden
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vaultwarden
  namespace: vaultwarden
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: cephfs-sdvault-sc
  volumeMode: Filesystem
  volumeName: vaultwarden
Deployment and Service
With secrets and persistence in place, it was time for the deployment.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vaultwarden
  namespace: vaultwarden
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vaultwarden
  template:
    metadata:
      labels:
        app: vaultwarden
    spec:
      containers:
        - name: vaultwarden
          image: vaultwarden/server:1.35.2
          ports:
            - containerPort: 80
          env:
            - name: DOMAIN
              value: "https://vw.junyi.me"
            - name: SIGNUPS_ALLOWED
              value: "false"
            - name: DATABASE_URL
              value: "postgresql://$(DB_USER):$(DB_PASSWORD)@central-rw.postgres.svc.cluster.local/$(DB_NAME)"
            - name: SSO_ENABLED
              value: "true"
            # Wanted to do this, but OIDC doesn't work with mobile Firefox
            # - name: SSO_ONLY
            #   value: "true"
            - name: SSO_SIGNUPS_MATCH_EMAIL
              value: "true"
            - name: SSO_SCOPES
              value: "openid email profile offline_access"
            - name: SSO_CLIENT_CACHE_EXPIRATION
              value: "0"
            - name: SSO_ALLOW_UNKNOWN_EMAIL_VERIFICATION
              value: "false"
          envFrom:
            - secretRef:
                name: vaultwarden-db
            - secretRef:
                name: vaultwarden-oidc
          volumeMounts:
            - mountPath: /data
              name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: vaultwarden
For the Authentik OIDC integration, I followed Authentik's community guide as usual: Integrate with Vaultwarden
Ideally I wanted to enforce OIDC-only login by setting SSO_ONLY to true, but the Bitwarden addon on mobile Firefox wouldn't let me log in via Authentik.
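One thing the Deployment above leaves out is health probes. Vaultwarden serves a simple /alive endpoint, so a probe sketch for the container could look like the following (untuned values, not something from my actual manifest):

# Goes under spec.template.spec.containers[0] in the Deployment above
readinessProbe:
  httpGet:
    path: /alive
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /alive
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 30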
service.yml
apiVersion: v1
kind: Service
metadata:
  name: vaultwarden
  namespace: vaultwarden
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
  selector:
    app: vaultwarden
Exposing the server
Since I want to use Vaultwarden from anywhere, on both my phone and my computers, I created an Ingress to expose it through Traefik, along with a TLS certificate managed by cert-manager.
expose.yml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: junyi-me-prod
  namespace: vaultwarden
spec:
  secretName: junyi-me-prod
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
  dnsNames:
    - "vw.junyi.me"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vaultwarden
  namespace: vaultwarden
spec:
  ingressClassName: traefik
  rules:
    - host: vw.junyi.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vaultwarden
                port:
                  number: 80
  tls:
    - hosts:
        - "vw.junyi.me"
      secretName: junyi-me-prod
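Like everything else on this cluster, these manifests are applied through ArgoCD. A minimal kustomization.yaml sketch that an Application could point at, using the file names from this post plus the hypothetical namespace.yml from earlier:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yml
  - secrets.yml
  - pvc.yml
  - deployment.yml
  - service.yml
  - expose.yml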
Firefox addon
At the moment the only client I use is Firefox with the Bitwarden addon.
The integration was very straightforward and works well in most cases.
The only issue I encountered was that the addon on mobile Firefox wouldn't autofill login forms or suggest any credentials.
Reviews on the addon page seemed to suggest this was due to a recent update, so I can only hope it gets fixed in future releases.