Frequently Asked Questions
Table of contents
- Can I use individual packages without core?
- What Go version is required?
- Why are configuration functions not thread-safe?
- Why are health checks excluded from tracing and logging?
- How does trace ID propagation work?
- How do I migrate from OpenTracing to OpenTelemetry?
- What is vtprotobuf and why does ColdBrew use it?
- How does ColdBrew ensure API consistency?
- Can I add custom HTTP endpoints that aren’t gRPC?
- Is hystrixprometheus still maintained?
- How do I do cross-package development?
- How do I add custom Prometheus metrics?
- How do I use grpcurl or Postman with my ColdBrew service?
- How do I configure graceful shutdown?
- How do I report errors to Sentry?
- Is ColdBrew designed for Kubernetes?
- How can I improve HTTP gateway performance?
- Where can I get help?
Can I use individual packages without core?
Yes. Every ColdBrew package is an independent Go module. You can use errors, log, tracing, options, grpcpool, or data-builder on their own without importing core. For example:
import "github.com/go-coldbrew/errors"
err := errors.Wrap(originalErr, "failed to process request")
The dependency chain (options → errors → log → ...) only means that log imports errors internally — it does not mean you need to import the full chain.
What Go version is required?
Go 1.25 or later. Older versions are end-of-life and not supported. All ColdBrew packages are tested against the latest stable Go release.
Why are configuration functions not thread-safe?
Functions like interceptors.AddUnaryServerInterceptor(), interceptors.SetFilterFunc(), and log.SetDefault() follow the init-only pattern: they must be called during application startup (in init() or early in main()), before any concurrent access begins.
This is intentional and consistent across the entire codebase. The interceptor chain is assembled once at startup and then read concurrently — adding mutexes would add overhead to every single request for a code path that only runs once.
func init() {
// Safe: called during initialization, before server starts
interceptors.AddUnaryServerInterceptor(context.Background(), myInterceptor)
interceptors.SetFilterFunc(context.Background(), myFilter)
}
Why are health checks excluded from tracing and logging?
Health checks run every few seconds (Kubernetes liveness/readiness probes). Logging and tracing each one would flood your observability systems with noise. By default, ColdBrew filters out healthcheck, readycheck, and serverreflectioninfo methods.
See Filtering response time logs for how to customize which methods are filtered.
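The filter is simply a predicate over the full gRPC method name. The sketch below shows the shape of such a predicate as a standalone function; the exact FilterFunc signature expected by interceptors.SetFilterFunc is an assumption here — check the interceptors package godoc before wiring it in:

```go
package main

import (
	"fmt"
	"strings"
)

// shouldObserve mirrors the kind of predicate passed to
// interceptors.SetFilterFunc. Returning false excludes the method
// from response-time logging and tracing.
func shouldObserve(fullMethod string) bool {
	m := strings.ToLower(fullMethod)
	for _, noisy := range []string{"healthcheck", "readycheck", "serverreflectioninfo"} {
		if strings.Contains(m, noisy) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(shouldObserve("/mypackage.MyService/Echo"))          // true
	fmt.Println(shouldObserve("/grpc.health.v1.Health/HealthCheck")) // false
}
```
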
How does trace ID propagation work?
ColdBrew generates a unique trace ID for every request automatically. It can also read a trace ID from two sources:
- HTTP header — x-trace-id (configurable via TRACE_HEADER_NAME) is forwarded from the HTTP gateway to gRPC
- Proto field — if your request message has a trace_id string field, ColdBrew reads it via the generated GetTraceId() method
The trace ID is then propagated to structured logs ("trace": "abc123"), Sentry/Rollbar error reports, and OpenTelemetry spans (as the coldbrew.trace_id attribute) — so you can search for one ID and find the complete request flow across your logs, error tracking, and distributed traces.
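To exercise header-based propagation by hand, pass the header through the HTTP gateway (the port and /v1/echo route below are hypothetical placeholders; x-trace-id is the default header name):

```shell
# Supply your own trace ID; the gateway forwards it to gRPC
curl -s -X POST http://localhost:9091/v1/echo \
  -H 'x-trace-id: req-abc123' \
  -d '{"msg": "hello"}'
# Then search logs, Sentry, and traces for req-abc123
```
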
See the Tracing How-To for details.
How do I migrate from OpenTracing to OpenTelemetry?
The OpenTracing bridge has been removed. ColdBrew now uses OpenTelemetry natively:
- Remove any direct opentracing.GlobalTracer() calls — use otel.Tracer("my-service") instead
- The tracing.NewInternalSpan(), tracing.NewDatastoreSpan(), and tracing.NewExternalSpan() functions use OpenTelemetry natively
- If you had OTLP_USE_OPENTRACING_BRIDGE=true, remove it — the setting is now ignored (a warning is logged if set to true)
- See the Tracing How-To and Integrations guides for setup details
What is vtprotobuf and why does ColdBrew use it?
vtprotobuf (by PlanetScale) generates optimized MarshalVT()/UnmarshalVT() methods for protobuf messages that are typically 2–3x faster than standard proto.Marshal() with fewer allocations.
ColdBrew registers a custom gRPC codec that uses vtprotobuf automatically. You don’t need to change any application code — if your proto messages have VT methods generated (the default with the cookiecutter template), the fast path is used. Messages without VT methods fall back to standard protobuf transparently.
Key differences from standard protobuf:
| | Standard protobuf | vtprotobuf |
|---|---|---|
| Marshal/Unmarshal | Reflection-based | Generated code, no reflection |
| Performance | Baseline | ~2–3x faster, fewer allocations |
| Extra features | None | CloneVT(), EqualVT(), object pooling |
| Compatibility | Universal | Falls back to standard if VT methods missing |
vtprotobuf only affects the gRPC wire protocol. The HTTP/JSON gateway uses grpc-gateway’s own marshallers independently.
To disable: DISABLE_VT_PROTOBUF=true. See the vtprotobuf How-To for full details including code generation setup.
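Code generation is typically wired up in buf.gen.yaml. A sketch, assuming the community vtprotobuf plugin on the Buf Schema Registry (plugin names and options here are illustrative — the cookiecutter template is authoritative):

```yaml
version: v2
plugins:
  # standard Go + gRPC stubs
  - remote: buf.build/protocolbuffers/go
    out: gen
    opt: paths=source_relative
  - remote: buf.build/grpc/go
    out: gen
    opt: paths=source_relative
  # vtprotobuf fast-path methods (MarshalVT, UnmarshalVT, SizeVT, ...)
  - remote: buf.build/community/planetscale-vtprotobuf
    out: gen
    opt:
      - paths=source_relative
      - features=marshal+unmarshal+size
```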
How does ColdBrew ensure API consistency?
Through compile-time enforcement. Your .proto file is the single source of truth. Running buf generate produces:
- Typed Go interfaces — the compiler refuses to build until every RPC method is implemented
- HTTP gateway handlers — REST endpoints that can’t drift from the gRPC definition
- OpenAPI spec — Swagger documentation generated from the same proto, always in sync
This rules out a documented endpoint that doesn’t exist, an undocumented endpoint that does, and an HTTP route that doesn’t match the gRPC method signature. The proto file is the contract — the compiler, the gateway, and the docs all enforce it.
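Concretely, a single annotated RPC drives all three outputs. A sketch (service, message, and route names are placeholders):

```protobuf
service MyService {
  // gRPC method, REST route, and OpenAPI entry all derive from this block
  rpc Echo(EchoRequest) returns (EchoResponse) {
    option (google.api.http) = {
      post: "/v1/echo"
      body: "*"
    };
  }
}
```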
See Self-Documenting APIs for the full pipeline.
Can I add custom HTTP endpoints that aren’t gRPC?
Yes. ColdBrew is gRPC-first, but the grpc-gateway runtime.ServeMux passed to InitHTTP supports custom HTTP routes via HandlePath. You can register webhooks, file uploads, OAuth callbacks, or any raw HTTP handler alongside your gateway routes:
if err := mux.HandlePath("POST", "/webhooks/stripe", func(w http.ResponseWriter, r *http.Request, _ map[string]string) {
// raw HTTP — no proto marshalling
}); err != nil {
return err
}
These routes go through ColdBrew’s HTTP middleware (compression, tracing, New Relic) automatically.
See Custom HTTP Routes for full examples including static file serving and path parameters.
Is hystrixprometheus still maintained?
No. The hystrixprometheus package depends on afex/hystrix-go, which is unmaintained. Do not invest in this package for new projects.
For circuit breaking, consider failsafe-go as an alternative. The client-side interceptors in the interceptors package provide retry and circuit breaking functionality that covers most use cases.
How do I do cross-package development?
When making changes that span multiple ColdBrew packages:
- Work in dependency order: options first, core last
- Use replace directives in go.mod to point to local checkouts during development: replace github.com/go-coldbrew/errors => ../errors
- Remove all replace directives before committing
- Publish in order: after merging upstream packages, bump versions in downstream go.mod files following the dependency chain
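During local development, a downstream module’s go.mod might look like this (the log module is used as an example, and the version shown is illustrative):

```
module github.com/go-coldbrew/log

go 1.25

require github.com/go-coldbrew/errors v0.2.0

// local override for development only — remove before committing
replace github.com/go-coldbrew/errors => ../errors
```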
How do I add custom Prometheus metrics?
ColdBrew exposes Prometheus metrics at /metrics automatically. Projects generated from the ColdBrew cookiecutter include a service/metrics/ package with an interface-based pattern:
// Add a method to the Metrics interface in service/metrics/types.go
type Metrics interface {
IncOrderTotal(outcome string)
ObserveOrderDuration(outcome string, duration time.Duration)
}
// Implement it in service/metrics/metrics.go using promauto
// Then use it in your handler with the defer pattern:
defer func() {
s.monitoring.IncOrderTotal(outcome)
s.monitoring.ObserveOrderDuration(outcome, time.Since(start))
}()
Run make mock after changing the interface to regenerate mocks for testing. See the Metrics How-To for the full pattern including label constants and duration conventions.
How do I use grpcurl or Postman with my ColdBrew service?
ColdBrew enables gRPC server reflection by default, so tools like grpcurl, grpcui, and Postman can discover your services and methods without needing proto files.
# List all services
grpcurl -plaintext localhost:9090 list
# Describe a specific service
grpcurl -plaintext localhost:9090 describe mypackage.MyService
# Call a method
grpcurl -plaintext -d '{"msg": "hello"}' localhost:9090 mypackage.MyService/Echo
To disable reflection (e.g., in production for security), set DISABLE_GRPC_REFLECTION=true. See the Configuration Reference for details.
How do I configure graceful shutdown?
ColdBrew handles SIGTERM and SIGINT automatically. When a signal is received:
- The service is marked as not ready (/readycheck returns unhealthy)
- Kubernetes stops routing new traffic
- In-flight requests are allowed to complete
- The server shuts down cleanly
You can register cleanup callbacks and customize shutdown behavior. See the Signals How-To for details.
How do I report errors to Sentry?
Set the SENTRY_DSN environment variable and use the errors package:
import (
"context"
"github.com/go-coldbrew/errors/notifier"
)
// This notifies Sentry asynchronously (bounded, won't leak goroutines)
ctx := context.Background()
notifier.Notify(err, ctx)
See the Errors How-To and Integrations for full setup instructions.
Is ColdBrew designed for Kubernetes?
Yes — ColdBrew is Kubernetes-native by design. Out of the box you get:
- Liveness probe at /healthcheck and readiness probe at /readycheck
- Graceful shutdown on SIGTERM with configurable drain periods (SHUTDOWN_DURATION_IN_SECONDS, GRPC_GRACEFUL_DURATION_IN_SECONDS)
- Prometheus metrics at /metrics for scraping
- Structured JSON logging to stdout (ready for Fluentd, Loki, or any log aggregator)
- Environment variable configuration via envconfig — works natively with ConfigMaps and Secrets
ColdBrew also follows 12-factor app principles: no config files, stateless processes, port binding, and log streams. See the Production Deployment guide for K8s manifests, ServiceMonitor setup, and graceful shutdown tuning, and the Architecture page for the full design principles table.
How can I improve HTTP gateway performance?
Two options, depending on how much latency reduction you need:
Option 1: Unix domain socket (easiest)
export DISABLE_UNIX_GATEWAY=false
This routes the gateway’s internal connection through a Unix socket instead of TCP loopback, reducing latency from ~67µs to ~36µs (1.9x faster). No code changes required — just set the environment variable. Note: automatically skipped when gRPC TLS is configured.
Option 2: In-process gateway via DoHTTPtoGRPC (fastest)
Use RegisterHandlerServer instead of RegisterHandlerFromEndpoint in your InitHTTP, and wrap each gRPC method with interceptors.DoHTTPtoGRPC(). This eliminates all network overhead (~19µs) while preserving the full interceptor chain. Requires per-method wrappers — see the Architecture page for a code example.
Where can I get help?
- GitHub Discussions — Ask questions, share ideas
- GitHub Issues — Report bugs
- How-To Guides — Step-by-step guides for common tasks
- Integrations — Third-party service setup