gRPC is a newer remote procedure call (RPC) framework developed by Google that uses HTTP/2 for transport. Since it provides a significant performance boost over traditional REST APIs, it has become increasingly popular for building microservices and backend systems.
The reason I wanted to explore gRPC is the other great benefit it provides: code generation, and thus type safety, across multiple languages.
As I kept working on one of my personal projects, it became more and more cumbersome to maintain the REST API between my Go backend and SvelteKit frontend. Every time I made a change to the API, I risked breaking something on either side without a warning sign until runtime.
I figured gRPC could be the missing piece here, as it allowed me to define my API in a separate repository (single source of truth) using Protocol Buffers (protobufs), and then generate client and server code for both Go and TypeScript.
Goals
As a first-time user of gRPC, I decided to start small.
Currently, I have a backend Go server, `spawner`, that polls outstanding jobs from a database and spawns worker processes to process them.
There is also a SvelteKit frontend, `vfront`, that allows users to control whether or not the spawner should process jobs.
When the user toggles a switch in the frontend,
- it sends an HTTP request to the SvelteKit backend (this is type-safe already)
- which then sends an HTTP request to the Go `spawner` backend (this is currently NOT type-safe)
So the second step is what I want to convert to gRPC.
Setting up the dev environment
I installed the following to my system:
- buf - a protobuf build tool, works like `make` for protobufs
- cargo - Rust package manager, needed for protols
- protols - language server for protobufs, available in mason.nvim
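buf can be installed in several ways; with a Go toolchain available, for example (Homebrew and prebuilt binaries are alternatives):

```shell
# Installs the buf CLI into $GOBIN
go install github.com/bufbuild/buf/cmd/buf@latest
buf --version
```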
Files
For my project, I created a separate repository for the protobuf definitions, which I named vault-api. The protobuf sources live under a proto directory, and generated code goes to gen/go and gen/ts.
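A sketch of the layout, reconstructed from the files discussed below (not an exact listing):

```text
vault-api/
├── buf.yaml        # buf module config
├── buf.gen.yaml    # code generation config
├── protol.toml     # protols (language server) config
├── proto/
│   └── spawner/
│       └── spawner.proto
└── gen/
    ├── go/         # generated Go code
    └── ts/         # generated TypeScript code
```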
spawner.proto defines the gRPC service and messages for the spawner backend. It’s like the API contract between the frontend and backend.
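The original file is project-specific, so here is only a plausible reconstruction. The service and method names match the generated ConfigServiceDefinition and updateConfig used later in this post; the message fields are purely illustrative.

```protobuf
syntax = "proto3";

package spawner;

// Illustrative config message; the real one has project-specific fields.
message Config {
  bool process_jobs = 1;
}

message UpdateConfigResponse {}

service ConfigService {
  rpc UpdateConfig(Config) returns (UpdateConfigResponse);
}
```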
buf.yaml is the configuration file for buf, specifying the version and modules. There is much more to this config file, documented here: buf.yaml v2 config file.
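A minimal v2 config along these lines (the lint and breaking rules here are my defaults, not necessarily the project's):

```yaml
version: v2
modules:
  - path: proto
lint:
  use:
    - STANDARD
breaking:
  use:
    - FILE
```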
buf.gen.yaml is the generation configuration file for buf, specifying the plugins and output directories for generated code. Also documented here: buf.gen.yaml v2 config file.
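A sketch of such a config; the exact plugins and options are assumptions (ts-proto's nice-grpc output mode on the TypeScript side, standard Go plugins on the other):

```yaml
version: v2
plugins:
  # Go messages and gRPC stubs
  - local: protoc-gen-go
    out: gen/go
    opt: paths=source_relative
  - local: protoc-gen-go-grpc
    out: gen/go
    opt: paths=source_relative
  # TypeScript via ts-proto, emitting nice-grpc-compatible definitions
  - local: protoc-gen-ts_proto
    out: gen/ts
    opt:
      - outputServices=nice-grpc
      - outputServices=generic-definitions
      - useExactTypes=false
```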
I opted for nice-grpc instead of the more popular grpc-node, since it provided a smoother workflow: with nice-grpc, I could use async/await syntax directly in my service implementations,
whereas with grpc-node, I would have to use callbacks or promisify the methods manually.
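For illustration, a nice-grpc handler is just an async function. The generated type name and the saveConfig helper below are hypothetical stand-ins, not the project's real code:

```typescript
// Hypothetical: the generated ConfigServiceImplementation type and a
// persistence helper stand in for the project's real code.
import type { ConfigServiceImplementation } from '@vault/vault-grpc/spawner/spawner';

async function saveConfig(cfg: unknown): Promise<void> {
  // e.g. write the new config to the database
}

export const configService: ConfigServiceImplementation = {
  async updateConfig(request) {
    await saveConfig(request); // plain async/await, no callbacks
    return {};
  },
};
```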
Lastly, protol.toml is the configuration file for protols. It's not necessary for the project so far, but I needed it to tell protols the import path for protobuf files when the project structure became more complex.
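The shape is roughly the following; check the protols documentation for the authoritative schema, as the keys here are an assumption:

```toml
[config]
include_paths = ["proto"]
```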
In this example, it allows me to import a file at proto/spawner/job.proto using a path relative to the proto directory. Protols would usually try to resolve imports relative to cwd, but with this config, it knows to look inside the proto directory as well.
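The import line in question looks like:

```protobuf
import "spawner/job.proto";
```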
Code generation time
In this setup, code generation is done with the `buf` command. First, there are some preparation steps for SvelteKit.
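My reconstruction of that preparation (both steps are assumptions): the generated package needs its own package.json, and the ts-proto plugin has to be available for buf to invoke.

```shell
cd gen/ts
npm init -y                      # if package.json doesn't exist yet
npm install --save-dev ts-proto  # provides protoc-gen-ts_proto for buf
```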
Then run `buf generate`.
This will generate the Go and TypeScript code in the specified output directories (gen/go and gen/ts).
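The output looks roughly like this (the file names follow protoc-gen-go and ts-proto defaults and are assumptions):

```text
gen/
├── go/
│   └── spawner/
│       ├── spawner.pb.go       # messages
│       └── spawner_grpc.pb.go  # service stubs
└── ts/
    └── spawner/
        └── spawner.ts          # messages + nice-grpc definitions
```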
For Go, there was one more step.
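As I'd reconstruct it, that step is giving the generated code its own Go module, with a module path matching the require directive used by the consumer:

```shell
cd gen/go
go mod init git.junyi.me/vault/vault-api/gen/go
go mod tidy   # pulls in google.golang.org/grpc and the protobuf runtime
```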
Using the generated code in SvelteKit
To test locally, I used the npm link method to symlink the generated TypeScript code into my SvelteKit project.
In the vault-api repo, I ran `npm link` from gen/ts, where the generated package.json lives.
In the consuming project,
package.json
```json
"dependencies": {
  "@vault/vault-grpc": "^0.0.1"
}
```
Then run `npm link @vault/vault-grpc` there to wire up the symlink.
Now I could do something like this in the backend code:
src/lib/server/grpc/spawner.ts
```typescript
import { createChannel, createClient } from 'nice-grpc';
import { ConfigServiceDefinition } from '@vault/vault-grpc/spawner/spawner';
import { env } from '$env/dynamic/private';

if (!env.SPAWNER_URL) {
  throw new Error('Missing SPAWNER_URL environment variable');
}

const channel = createChannel(env.SPAWNER_URL);
export const spawner = createClient(ConfigServiceDefinition, channel);

// Usage
await spawner.updateConfig(cfg);
```
SPAWNER_URL was set to localhost:8080 where the Go gRPC server was going to run.
Using the generated code in Go
In the Go spawner backend, I set up the server like this:
go.mod
```
require (
	git.junyi.me/vault/vault-api/gen/go v0.0.1
)

// ...

replace git.junyi.me/vault/vault-api/gen/go => ../vault-api/gen/go
```
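The original main.go isn't reproduced here; the sketch below shows the general shape, under the assumption that the generated package is importable as shown and exposes a ConfigService with an UpdateConfig method:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	spawnerpb "git.junyi.me/vault/vault-api/gen/go/spawner" // assumed import path
)

// configServer is a hypothetical implementation of the generated service.
type configServer struct {
	spawnerpb.UnimplementedConfigServiceServer
}

func (s *configServer) UpdateConfig(ctx context.Context, cfg *spawnerpb.Config) (*spawnerpb.UpdateConfigResponse, error) {
	// Persist the new config and signal the polling loop (omitted here).
	return &spawnerpb.UpdateConfigResponse{}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":8080") // matches SPAWNER_URL=localhost:8080
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	srv := grpc.NewServer()
	spawnerpb.RegisterConfigServiceServer(srv, &configServer{})
	if err := srv.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}
```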
That’s how I got everything working.
Publishing npm package
Of course, local testing is not the end goal. I needed to make it work in the production build as well.
For SvelteKit, I published the generated TypeScript code as a private npm package to GitLab package registry. This was done through a GitLab CI/CD pipeline in the vault-api repository.
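A sketch of such a publish job, using GitLab's predefined CI variables (the job name and image are assumptions):

```yaml
publish-npm:
  image: node:22
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
  script:
    - cd gen/ts
    - npm config set "@vault:registry" "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/npm/"
    - npm config set "//${CI_SERVER_HOST}/api/v4/projects/${CI_PROJECT_ID}/packages/npm/:_authToken=${CI_JOB_TOKEN}"
    - npm publish
```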
One caveat here is that whenever there is a change to the generated TypeScript code, I need to manually bump the version number in gen/ts/package.json before pushing to master, because GitLab does not allow overwriting an existing package version.
It is possible to let GitLab generate a new version number automatically using CI/CD variables, as documented in: Publish npm packages to the GitLab package registry using semantic-release.
However, I figured that since I had to manually tag releases for Go package anyway, it was simpler to just bump the version number manually. It also makes it easier to sync version numbers between Go and TypeScript packages.
I pushed the changes to master, admired the pipeline run, and moved on.
Consuming npm package
Since the package is private, I needed to set up authentication in the consuming SvelteKit project by adding a .npmrc file. It points the @vault scope at the package registry of the vault-api repository (project ID 1) and authenticates using REG_TOKEN, a CI/CD variable containing a GitLab personal access token with the read_package_registry scope.
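The .npmrc looks along these lines; the GitLab host here is assumed from the Go module path:

```ini
@vault:registry=https://git.junyi.me/api/v4/projects/1/packages/npm/
//git.junyi.me/api/v4/projects/1/packages/npm/:_authToken=${REG_TOKEN}
```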
package.json already contains the dependency I used for local testing, so I just needed to make sure the local link was removed, then install the package from GitLab.
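Concretely, something like:

```shell
npm unlink @vault/vault-grpc   # drop the local symlink
npm install                    # resolve @vault/vault-grpc from GitLab via .npmrc
```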
The Dockerfile and pipeline manifest also needed updates to be able to use a private npm package.
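A sketch of the relevant part of such a Dockerfile, passing the registry token in as a build argument (the stage layout is an assumption):

```dockerfile
FROM node:22 AS build
WORKDIR /app

# REG_TOKEN is supplied at build time; npm expands ${REG_TOKEN} in .npmrc.
ARG REG_TOKEN
COPY .npmrc package.json package-lock.json ./
RUN npm ci

COPY . .
RUN npm run build
```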
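And the pipeline passes the token through to the image build (job shape assumed):

```yaml
build:
  script:
    - docker build --build-arg REG_TOKEN="${REG_TOKEN}" -t "${CI_REGISTRY_IMAGE}" .
```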
With that, the SvelteKit app was ready, and I enjoyed another successful pipeline run.
Publishing Go package
Publishing the Go package was simpler. I just needed to tag a new release in the vault-api repository, and Go modules would take care of the rest.
One caveat is that, since the Go package is located under gen/go, the tag needs to include that path. For example, for version v0.0.1, the tag is gen/go/v0.0.1.
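Concretely:

```shell
git tag gen/go/v0.0.1
git push origin gen/go/v0.0.1
```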
(after pushing the changes to master, of course)
Consuming Go package
Although it would have been possible to take a similar approach as with SvelteKit and use a token to authenticate with GitLab, my Go project already contained some other private packages, so I opted to use go mod vendor to check all dependencies into the repository.
First, I removed the replace directive in go.mod that pointed at the local checkout.
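That is, dropping the line added earlier for local testing:

```diff
-replace git.junyi.me/vault/vault-api/gen/go => ../vault-api/gen/go
```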
Then I re-resolved the dependency at the released version and vendored everything.
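The commands, roughly; GOPRIVATE keeps the go tool from routing the private host through the public module proxy (an assumption about this setup):

```shell
export GOPRIVATE=git.junyi.me
go get git.junyi.me/vault/vault-api/gen/go@v0.0.1
go mod tidy
go mod vendor   # commit vendor/ to the repository
```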
I did a test run and everything looked good, so I just pushed the changes. No Dockerfile or pipeline changes were necessary here.
Updating proto files
When there are API changes and I need to update the proto files, the process is as follows:
- make changes to the proto files in the vault-api repository
- run `buf generate` to regenerate the code
- bump the version number in gen/ts/package.json and tag a new release for the Go package
- push the changes to master to trigger the CI/CD pipeline that publishes the packages
- update the consuming projects to use the new package versions
Conclusion
For me, the experience of using gRPC is quite positive so far. The type safety alone is worth the effort of setting it up. It’s so much better than having to define and maintain two sets of API contracts in REST.