
Migrating from GitHub to Self-Hosted GitLab

I have been using GitHub with self-hosted GitHub Actions runners for a while now, and the setup has worked great.

Every time I push code to my repositories, runners in my home lab pick up the jobs and run them without issues. That works, and I was perfectly happy with it.

However, I recently found myself repeatedly pushing images to Docker Hub in my development workflow, only to pull them right back down into my home lab. The same goes for the k8s manifests I store in my GitHub repos, although that round trip is not as resource-intensive.

The setup for GitHub Actions to authenticate with Docker Hub also felt a bit clunky, as I had to create a personal access token and store it as a secret in each repo.

So it occurred to me that self-hosting a GitLab instance in my home lab would solve both problems: I could push images to the GitLab Container Registry and store k8s manifests in GitLab repos. Authentication with the registry would also be easier, since the Git repositories and the registry live in the same place.

GitLab has an official helm chart, which made things a lot easier.

Environment

  • Kubernetes: v1.33.4+k3s1
  • ArgoCD: v3.0.6
  • Traefik: 3.3.6
  • Ceph: 19.2.3

My Kubernetes cluster uses Ceph as the storage backend and Ceph RBD as the default storage class, which supports volume expansion. This allowed me to start with a small disk size for GitLab and expand it later as needed.
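
Before settling on a small initial size, it is worth confirming that the storage class in question really allows expansion. A quick check (the RBD class name below is a placeholder for whatever your default class is called):

# Confirm the default storage class reports allowVolumeExpansion: true
kubectl get storageclass
kubectl get storageclass <rbd-storage-class> -o jsonpath='{.allowVolumeExpansion}{"\n"}'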

Prepare GitLab database

For the GitLab helm chart, one database is required. If using Praefect, another one is needed.

I just created a database and user in my existing PostgreSQL cluster:

CREATE DATABASE gitlab;
CREATE USER gitlab WITH ENCRYPTED PASSWORD '<redacted>';
GRANT ALL PRIVILEGES ON DATABASE gitlab TO gitlab;
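
To verify the new role works from inside the cluster before pointing GitLab at it, a throwaway psql pod does the trick; the host matches the psql settings used in the chart values later, and the image tag is just an example:

# One-off client pod to check that the gitlab role can reach the database
kubectl run psql-check --rm -it --restart=Never --image=postgres:16 \
  --env PGPASSWORD='<redacted>' -- \
  psql -h central-rw.postgres.svc.cluster.local -U gitlab -d gitlab -c 'SELECT 1;'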

Prepare Authentik for SAML

I just followed the official guide from Authentik: Integrate with GitLab.

info

For some reason, when I set up SAML with Authentik for the first time, I was not able to log in to GitLab. It worked when I tried again after first logging in to GitLab with the initial root password.
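
One value in the SAML provider secret that is easy to get wrong is idp_cert_fingerprint. Assuming the Authentik signing certificate has been downloaded as authentik.pem, it can be computed with openssl (GitLab expects the SHA1 fingerprint by default):

# Fingerprint of the IdP signing certificate, in the format GitLab expects
openssl x509 -in authentik.pem -noout -fingerprint -sha1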

Deploy GitLab

References:

  1. GitLab Helm chart
  2. Cloud Native GitLab Helm Chart
  3. Use Podman with GitLab Runner on Kubernetes

Non-helm manifests:

secrets.yml
# initial root password
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-init
  namespace: gitlab
stringData:
  password: <redacted>

---

# Postgres DB password
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-postgres
  namespace: gitlab
stringData:
  password: <redacted>

---

# SAML settings to login with Authentik
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-saml-authentik
  namespace: gitlab
type: Opaque
stringData:
  provider: |
    name: saml
    label: Authentik
    args:
      assertion_consumer_service_url: "<redacted>"
      idp_cert_fingerprint: "<redacted>"
      idp_sso_target_url: "<redacted>"
      issuer: "<redacted>"
      name_identifier_format: "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"
      allowed_clock_drift: 60
      attribute_statements:
        email: ["http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
        first_name: ["http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"]
        nickname: ["http://schemas.goauthentik.io/2021/02/saml/username"]    
expose.yml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: junyi-me-prod
  namespace: gitlab
spec:
  secretName: junyi-me-production
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
  dnsNames:
  - "*.junyi.me"

---

apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: gitlab-ssh
  namespace: gitlab
spec:
  entryPoints:
  - ssh
  routes:
    - match: HostSNI(`*`)
      services:
        - name: gitlab-gitlab-shell
          port: 22
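
The ssh entry point referenced above has to exist in Traefik's static configuration. If Traefik is the k3s-bundled one, a HelmChartConfig roughly along these lines is one way to declare it (port numbers are examples); a standalone Traefik install would take the same values through its own Helm release:

# k3s only: add an "ssh" entry point to the bundled Traefik chart
kubectl apply -f - <<'EOF'
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      ssh:
        port: 2222
        expose:
          default: true
        exposedPort: 22
        protocol: TCP
EOF
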
device-plugin.yml
# for GitLab Runner to run privileged containers
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fuse-device-plugin-daemonset
  namespace: gitlab
spec:
  selector:
    matchLabels:
      name: fuse-device-plugin-ds
  template:
    metadata:
      labels:
        name: fuse-device-plugin-ds
    spec:
      hostNetwork: true
      containers:
        - image: soolaugust/fuse-device-plugin:v1.0
          name: fuse-device-plugin-ctr
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
          volumeMounts:
            - name: device-plugin
              mountPath: /var/lib/kubelet/device-plugins
      volumes:
        - name: device-plugin
          hostPath:
            path: /var/lib/kubelet/device-plugins
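
Once this DaemonSet is running, every node should advertise a github.com/fuse resource, which is what the runner build pods request further down:

# The fuse device plugin registers github.com/fuse as an allocatable resource
kubectl describe nodes | grep -i 'github.com/fuse'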

GitLab helm chart as ArgoCD application:

gitlab.yml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.slack: production
  name: gitlab
  namespace: argocd
spec:
  destination:
    namespace: gitlab
    server: https://kubernetes.default.svc
  project: default
  source:
    repoURL: http://charts.gitlab.io/
    chart: gitlab
    targetRevision: 9.4.2
    helm:
      valuesObject:
        global:
          application:
            create: false
            links: []
            allowClusterRoles: true

          # hostnames 
          hosts:
            domain: junyi.me
            externalIP: 10.0.69.236
            https: true
            tls:
              enabled: true
              secretName: junyi-me-production
            ssh: git.junyi.me
            gitlab: 
              name: git.junyi.me
              https: true
            registry:
              name: regist.junyi.me
              https: true
            pages:
              name: pages.junyi.me
              https: true

          ingress:
            class: traefik
            configureCertmanager: false
            tls:
              enabled: true
              secretName: junyi-me-production

          psql:
            host: central-rw.postgres.svc.cluster.local
            port: 5432
            database: gitlab
            username: gitlab
            password:
              useSecret: true
              secret: gitlab-postgres
              key: password

          # references secret created above
          initialRootPassword:
            secret: gitlab-init
            key: password

          monitoring:
            enabled: false
          kas:
            enabled: false

          # Omniauth with Authentik SAML
          appConfig:
            omniauth:
              enabled: true
              allowSingleSignOn: ['saml']
              autoSignInWithProvider: 'saml'
              blockAutoCreatedUsers: false
              autoLinkSamlUser: true
              syncProfileFromProvider: ['saml']
              syncProfileAttributes: ['email', 'name']
              providers:
              - secret: gitlab-saml-authentik

        # these are already running in my cluster
        installCertmanager: false
        certmanager:
          installCRDs: false
        nginx-ingress:
          enabled: false
        prometheus:
          install: false
        postgresql:
          install: false

        # GitLab Runner that can run privileged containers
        gitlab-runner:
          runners:
            config: |
              [[runners]]
                [runners.kubernetes]
                  privileged = true
                  allow_privilege_escalation = true
                  [runners.kubernetes.pod_security_context]
                    run_as_non_root = false
                  [runners.kubernetes.build_container_security_context]
                    run_as_user = 0
                    run_as_group = 0
                  [[runners.kubernetes.pod_spec]]
                    name = "device-fuse"
                    patch_type = "strategic"
                    patch = '''
                      containers:
                        - name: build
                          securityContext:
                            privileged: true
                          resources:
                            limits:
                              github.com/fuse: 1
                    '''              

        gitlab:
          toolbox:
            # nightly backup
            backups:
              cron:
                enabled: true
                schedule: "0 2 * * *"
          gitaly:
            persistence:
                # default is 8Gi
              size: 100Gi
        minio:
          persistence:
              # default is 10Gi
            size: 200Gi
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

---

# ArgoCD Application for the non-helm manifests above
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.slack: production
  name: gitlab-custom
  namespace: argocd
spec:
  destination:
    namespace: gitlab
    server: https://kubernetes.default.svc
  project: default
  source:
    path: kube/gitlab
    repoURL: git@github.com:junyi-me/homelab.git
    targetRevision: master
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
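
With both Applications applied, the rollout can be followed from ArgoCD or by watching the namespace settle (assuming the argocd CLI is logged in to the instance):

# Check sync/health status of the chart and the custom manifests
argocd app get gitlab
argocd app get gitlab-custom
kubectl get pods -n gitlab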

Migrate Repositories

Reference: Import your project from GitHub to GitLab

GitLab has a handy feature to import repositories from GitHub. All I needed was a GitHub personal access token with the read:org and repo scopes (since some of my repos are in an organization), and the GitHub import source enabled in GitLab.

Then I could see all my GitHub repos in GitLab and import them one by one.
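
The same import can also be driven through GitLab's import API, which is handy for scripting bulk imports; a rough sketch with placeholder tokens (repo_id is GitHub's numeric repository ID):

# Start a GitHub import via the API instead of the UI
curl --request POST "https://git.junyi.me/api/v4/import/github" \
  --header "PRIVATE-TOKEN: <gitlab-personal-access-token>" \
  --header "Content-Type: application/json" \
  --data '{
    "personal_access_token": "<github-personal-access-token>",
    "repo_id": 123456789,
    "target_namespace": "explosion"
  }'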

CI/CD pipelines

One major difference between GitHub and GitLab is their CI/CD pipeline specification formats.

I used to do docker-in-docker (dind) in GitHub Actions, which required a specific type of runner image. This time, since I was going to rewrite all of the pipeline yml files anyway, I decided to use Podman instead, which does not require such a special image.

info

Supposedly it is also possible to ditch privileged containers and use rootless Podman, but since this GitLab instance is only for myself, I did not bother.

An example pipeline to build a container image and push it to the GitLab Container Registry:

gitlab-ci.yml
stages:
  - build

build:
  stage: build
  image: quay.io/podman/stable
  only:
    - master
    - stg

  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | podman login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin

  script:
    - |
      set -e

      if [ "$CI_COMMIT_REF_NAME" = 'master' ]; then
        branchTag=prd
      else
        branchTag=$CI_COMMIT_REF_NAME
      fi
      dateTag=$branchTag-$(date +'%Y%m%d')
      echo "Building images with tags: $dateTag and $branchTag"

      podman build -t "$CI_REGISTRY_IMAGE:$dateTag" -t "$CI_REGISTRY_IMAGE:$branchTag" .
      podman push "$CI_REGISTRY_IMAGE:$dateTag"
      podman push "$CI_REGISTRY_IMAGE:$branchTag"      

After confirming that the stg branch pipeline had pushed regist.junyi.me/explosion/myself:stg, I merged it into master and saw that the prd image was also built and pushed successfully as regist.junyi.me/explosion/myself:prd (for this repo).
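
To let the cluster pull these images back down, the deployments need registry credentials. A deploy token with the read_registry scope stored in a docker-registry secret is one way to do it (names and namespace here are placeholders):

# Pull secret for the GitLab Container Registry, referenced via imagePullSecrets
kubectl create secret docker-registry gitlab-regcred \
  --namespace <app-namespace> \
  --docker-server=regist.junyi.me \
  --docker-username=<deploy-token-username> \
  --docker-password=<deploy-token>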

Backup and restore

References:

  1. Back up and restore overview
  2. Restore GitLab

Backups are taken automatically every night at 2 AM, as configured in the helm chart values above. The problem is that I currently do not have proper object storage configured, so the backups are stored in the MinIO instance installed by the GitLab helm chart, which means that if that RBD volume is ever lost, so are the backups.

So I set up a cronjob to copy the backups to another PVC backed by CephFS, which is replicated three times and will eventually be backed up to an external ZFS pool using rclone.

backup.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-bck
spec:
  storageClassName: csi-cephfs-sc
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-secret
      namespace: ceph-csi-cephfs
    volumeAttributes:
      fsName: bdvault
      clusterID: <redacted>
      staticVolume: "true"
      rootPath: /backup/gitlab
    volumeHandle: gitlab-bck
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-bck
  namespace: gitlab
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: "csi-cephfs-sc"
  volumeName: gitlab-bck

---

apiVersion: batch/v1
kind: CronJob
metadata:
  name: snatch-gitlab-backup
  namespace: gitlab
spec:
  schedule: "30 3 * * *" # Every day at 3:30 AM, leaving 1.5 hours for the backup job
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: copy-backups
              image: amazon/aws-cli:2.31.12
              env:
                - name: AWS_ACCESS_KEY_ID
                  valueFrom:
                    secretKeyRef:
                      name: gitlab-minio-secret
                      key: accesskey
                - name: AWS_SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: gitlab-minio-secret
                      key: secretkey
                - name: AWS_DEFAULT_REGION
                  value: us-east-1
              volumeMounts:
                - name: backup-volume
                  mountPath: /backup
              command:
                - /bin/sh
                - -c
                - |
                  set -e
                  echo "Copying GitLab backups from MinIO..."
                  aws --endpoint-url http://gitlab-minio-svc:9000 s3 sync s3://gitlab-backups /backup                  
          volumes:
            - name: backup-volume
              persistentVolumeClaim:
                claimName: gitlab-bck

There is another thing that needs to be backed up: Kubernetes secrets. According to the docs, they are mandatory when restoring a GitLab instance.

GitLab secrets must be restored
To restore a backup, you must also restore the GitLab secrets. If you are migrating to a new GitLab instance, you must copy the GitLab secrets file from the old server. These include the database encryption key, CI/CD variables, and variables used for two-factor authentication. Without the keys, multiple issues occur, including loss of access by users with two-factor authentication enabled, and GitLab Runners cannot sign in.

secret-backup.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-secret
  namespace: gitlab

---

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-secret-backup-role
  namespace: gitlab
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-secret-backup-binding
  namespace: gitlab
subjects:
  - kind: ServiceAccount
    name: gitlab-secret
    namespace: gitlab
roleRef:
  kind: Role
  name: gitlab-secret-backup-role
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: batch/v1
kind: CronJob
metadata:
  name: gitlab-secret-backup
  namespace: gitlab
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: gitlab-secret
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: bitnami/kubectl:latest
            command:
              - /bin/bash
              - -c
              - |
                set -euo pipefail
                TS=$(date +%Y%m%d-%H%M%S)
                OUT="/backup/gitlab-secrets-$TS.yaml"
                echo "Backing up secrets with prefix 'gitlab-' to $OUT"
                kubectl get secrets -o name | grep '^secret/gitlab-' | xargs kubectl get -o yaml > "$OUT"
                echo "Backup complete: $OUT"                
            volumeMounts:
              - name: backup-volume
                mountPath: /backup
          volumes:
            - name: backup-volume
              persistentVolumeClaim:
                claimName: gitlab-bck

I tried triggering the backup job manually, and it produced:

-rw-r--r-- 1 root test 113M Oct  9 17:56 1760054105_2025_10_09_18.4.2-ee_gitlab_backup.tar
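
For reference, triggering the jobs by hand just means creating one-off Jobs from the CronJobs; the toolbox backup CronJob name depends on what the chart generated, so I look it up first:

# Find the CronJob names, then run them once outside their schedule
kubectl get cronjobs -n gitlab
kubectl create job manual-gitlab-backup --from=cronjob/<toolbox-backup-cronjob> -n gitlab
kubectl create job manual-snatch-backup --from=cronjob/snatch-gitlab-backup -n gitlab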

Enabling Praefect (testing restore)

References:

  1. Gitaly Cluster (Praefect)
  2. Restoring a GitLab installation

So far so good, but for a service as important as GitLab, I want to have high availability.

That is where Praefect comes in. Gitaly is the service that handles all Git repository storage and access in GitLab, and Praefect is a proxy layer in front of multiple Gitaly servers that provides replication and failover.

To enable it, I had to

  1. Back up existing repositories (already done)
  2. Create another PostgreSQL database and user for Praefect
  3. Enable Praefect in the GitLab helm chart values
  4. Restore repositories to Praefect

Prepare Praefect database

CREATE DATABASE praefect;
CREATE USER praefect WITH ENCRYPTED PASSWORD '<redacted>';
GRANT ALL PRIVILEGES ON DATABASE praefect TO praefect;

For the backup-utility to work, the praefect and gitlab users need to have SUPERUSER privileges:

ALTER USER praefect WITH SUPERUSER;
ALTER USER gitlab WITH SUPERUSER;

Spin up Praefect

danger

Data in the existing Gitaly server will NOT be replicated to the new Gitaly servers managed by Praefect.

To enable Praefect, I updated the GitLab helm chart values:

praefect:
  enabled: true
  virtualStorages:
  - name: default
    gitalyReplicas: 3
    maxUnavailable: 1
  psql:
    host: central-rw.postgres.svc.cluster.local
    port: 5432
    database: praefect
    username: praefect
  dbSecret:
    secret: gitlab-postgres
    key: praefectPassword

This goes under global in the values file:

gitlab.yml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.slack: production
  name: gitlab
  namespace: argocd
spec:
  destination:
    namespace: gitlab
    server: https://kubernetes.default.svc
  project: default
  source:
    repoURL: http://charts.gitlab.io/
    chart: gitlab
    targetRevision: 9.4.2
    helm:
      valuesObject:
        global:
          application:
            create: false
            links: []
            allowClusterRoles: true
          hosts:
            domain: junyi.me
            externalIP: 10.0.69.236
            https: true
            tls:
              enabled: true
              secretName: junyi-me-production
            ssh: git.junyi.me
            gitlab: 
              name: git.junyi.me
              https: true
            registry:
              name: regist.junyi.me
              https: true
            pages:
              name: pages.junyi.me
              https: true
          ingress:
            class: traefik
            configureCertmanager: false
            tls:
              enabled: true
              secretName: junyi-me-production
          psql:
            host: central-rw.postgres.svc.cluster.local
            port: 5432
            database: gitlab
            username: gitlab
            password:
              useSecret: true
              secret: gitlab-postgres
              key: password
          initialRootPassword:
            secret: gitlab-init
            key: password
          praefect: # <------------------------------ here
            enabled: true
            virtualStorages:
            - name: default
              gitalyReplicas: 3
              maxUnavailable: 1
            psql:
              host: central-rw.postgres.svc.cluster.local
              port: 5432
              database: praefect
              username: praefect
            dbSecret:
              secret: gitlab-postgres
              key: praefectPassword
          monitoring:
            enabled: false
          kas:
            enabled: false
          appConfig:
            omniauth:
              enabled: true
              allowSingleSignOn: ['saml']
              autoSignInWithProvider: 'saml'
              blockAutoCreatedUsers: false
              autoLinkSamlUser: true
              syncProfileFromProvider: ['saml']
              syncProfileAttributes: ['email', 'name']
              providers:
              - secret: gitlab-saml-authentik
        installCertmanager: false
        certmanager:
          installCRDs: false
        nginx-ingress:
          enabled: false
        prometheus:
          install: false
        postgresql:
          install: false
        gitlab-runner:
          runners:
            config: |
              [[runners]]
                [runners.kubernetes]
                  privileged = true
                  allow_privilege_escalation = true
                  [runners.kubernetes.pod_security_context]
                    run_as_non_root = false
                  [runners.kubernetes.build_container_security_context]
                    run_as_user = 0
                    run_as_group = 0
                  [[runners.kubernetes.pod_spec]]
                    name = "device-fuse"
                    patch_type = "strategic"
                    patch = '''
                      containers:
                        - name: build
                          securityContext:
                            privileged: true
                          resources:
                            limits:
                              github.com/fuse: 1
                    '''              
        gitlab:
          toolbox:
            backups:
              cron:
                enabled: true
                schedule: "0 2 * * *"
          gitaly:
            persistence:
              size: 100Gi
        minio:
          persistence:
            size: 200Gi
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Restore from backup

First, I scaled down the GitLab deployments to avoid any writes during the restore:

kubectl scale deploy -lapp=sidekiq,release=gitlab -n gitlab --replicas=0
kubectl scale deploy -lapp=webservice,release=gitlab -n gitlab --replicas=0

Then I copied the backup tar file into the toolbox pod:

kubectl cp 1760385703_2025_10_13_18.4.2-ee_gitlab_backup.tar gitlab-toolbox-d85c6c48c-mc2zs:/srv/gitlab/tmp/
kubectl exec gitlab-toolbox-d85c6c48c-mc2zs -it -- bash

And ran the restore command:

backup-utility --restore -f file:///srv/gitlab/tmp/1760385703_2025_10_13_18.4.2-ee_gitlab_backup.tar

Once the command completed successfully, I scaled the deployments back up:

kubectl scale deploy -lapp=sidekiq,release=gitlab -n gitlab --replicas=1
kubectl scale deploy -lapp=webservice,release=gitlab -n gitlab --replicas=2
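
Before relying on the restored instance, GitLab's built-in consistency checks can be run from the same toolbox pod (these are the rake tasks the restore docs point to):

# Sanity checks after the restore
kubectl exec -n gitlab gitlab-toolbox-d85c6c48c-mc2zs -it -- gitlab-rake gitlab:check SANITIZE=true
kubectl exec -n gitlab gitlab-toolbox-d85c6c48c-mc2zs -it -- gitlab-rake gitlab:doctor:secrets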


Revert DB user privileges

With the restore done, I removed the SUPERUSER privileges again:

ALTER USER praefect WITH NOSUPERUSER;
ALTER USER gitlab WITH NOSUPERUSER;

until the day I need them again.

Storage expansion

When I was running a pipeline, I noticed the following error:

Copying blob sha256:fcf29b65a595766fdf5095f8b2cbd3883c48f744795a020415a567f10816f8fc
Error: writing blob: initiating layer upload to /v2/explosion/portal/blobs/uploads/ in regist.junyi.me: received unexpected HTTP status: 500 Internal Server Error

This indicated that something went wrong on the registry side when the pipeline pushed the container image it had built.

I checked the MinIO logs and found this:

2025-10-12T16:24:28-06:00 gitlab-minio-77c798559d-2f2sw minio time="2025-10-12T22:24:28Z" level=error msg="Unable to create an object. /gitlab-artifacts/79/02/7902699be42c8a8e46fbbb4501726517e86b22c56a189f7625a6da49081b2451/2025_10_12/106/116/job.log" cause="Storage reached its minimum free disk threshold." source="[object-handlers.go:621:objectAPIHandlers.PutObjectHandler()]" stack="/q/.q/sources/gopath/src/github.com/minio/minio/cmd/fs-v1-helpers.go:285:fsCreateFile() /q/.q/sources/gopath/src/github.com/minio/minio/cmd/fs-v1.go:634:fsObjects.PutObject() <autogenerated>:1:(*fsObjects).PutObject() /q/.q/sources/gopath/src/github.com/minio/minio/cmd/object-handlers.go:619:objectAPIHandlers.PutObjectHandler() /q/.q/sources/gopath/src/github.com/minio/minio/cmd/api-router.go:63:(objectAPIHandlers).PutObjectHandler-fm() /opt/go/src/net/http/server.go:1918:HandlerFunc.ServeHTTP() /q/.q/sources/gopath/src/github.com/minio/minio/vendor/github.com/gorilla/mux/mux.go:107:(*Router).ServeHTTP() /q/.q/sources/gopath/src/github.com/minio/minio/cmd/generic-handlers.go:600:rateLimit.ServeHTTP() <autogenerated>:1:(*rateLimit).ServeHTTP() /q/.q/sources/gopath/src/github.com/minio/minio/cmd/generic-handlers.go:558:pathValidityHandler.ServeHTTP() <autogenerated>:1:(*pathValidityHandler).ServeHTTP() /q/.q/sources/gopath/src/github.com/minio/minio/cmd/generic-handlers.go:497:httpStatsHandler.ServeHTTP() <autogenerated>:1:(

Running df in the MinIO pod showed that the PVC was indeed full:

Filesystem           Size      Used Available Use% Mounted on
/dev/rbd3            10G       9G   1G        90%  /export

Expanding the PVC resolved the issue.
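
Since the RBD storage class supports online expansion, this was just a matter of raising the request on the MinIO PVC (the PVC name is whatever the chart created, so check first) and waiting for the filesystem to grow:

# Expand the MinIO PVC in place and confirm the new size inside the pod
kubectl get pvc -n gitlab
kubectl patch pvc <gitlab-minio-pvc> -n gitlab --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'
kubectl exec -n gitlab deploy/gitlab-minio -- df -h /export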

Conclusion

I am now confident enough to put all my code in this self-hosted GitLab instance and gradually migrate away from GitHub. My current strategy is to migrate any repository I touch into GitLab, and mark the GitHub repo as archived.

There are still some things left to do:

  1. Set up proper object storage for backups and GitLab itself - probably Ceph RGW
  2. Set up a ZFS pool as backup target
  3. Set up monitoring and alerting for GitLab