# ColdBrew
A Kubernetes-native Go microservice framework for building production-grade gRPC services with built-in observability, resilience, and HTTP gateway support. Follows 12-factor principles out of the box.
Production-proven: Powers 100+ microservices, handling peaks of ~70k QPS per service at Gojek.
## What You Get Out of the Box
| Feature | Description |
|---|---|
| gRPC + REST Gateway | Define your API once in protobuf — get gRPC, REST, and Swagger docs automatically via grpc-gateway. HTTP gateway supports JSON, application/proto, and application/protobuf content types out of the box |
| Structured Logging | Pluggable backends — slog (default), zap, go-kit, logrus — with per-request context fields and trace ID propagation |
| Distributed Tracing | OpenTelemetry and New Relic support with automatic span creation in interceptors — traces can be sent to any OTLP-compatible backend including Jaeger |
| Prometheus Metrics | Built-in request latency, error rate, and circuit breaker metrics at /metrics |
| Error Tracking | Stack traces, gRPC status codes, and async notification to Sentry, Rollbar, or Airbrake |
| Resilience | Client-side circuit breaking and retries via interceptors |
| Fast Serialization | vtprotobuf codec enabled by default — faster gRPC marshalling with automatic fallback to standard protobuf |
| Kubernetes-native | Health/ready probes, graceful SIGTERM shutdown, structured JSON logs, Prometheus metrics — all wired automatically |
| Swagger / OpenAPI | Interactive API docs auto-served at /swagger/ from your protobuf definitions |
| Profiling | Go pprof endpoints at /debug/pprof/ for CPU, memory, goroutine, and trace profiling |
| gRPC Reflection | Server reflection enabled by default — works with grpcurl, grpcui, and Postman |
| HTTP Compression | Automatic gzip and zstd compression for all HTTP gateway responses (content-negotiated via Accept-Encoding) |
| Container-aware Runtime | Auto-tunes GOMAXPROCS to match container CPU limits via automaxprocs |
| CI/CD Pipelines | Ready-to-use GitHub Actions and GitLab CI workflows for build, test, lint, coverage, and benchmarks |
## Quick Start
Generate a new service in seconds:
```sh
# Install cookiecutter
brew install cookiecutter   # or: pip install cookiecutter

# Generate a new service
cookiecutter gh:go-coldbrew/cookiecutter-coldbrew

# Build and run
cd MyService/
make run
```
Your service starts with all of these endpoints ready:
| Endpoint | Description |
|---|---|
| localhost:9090 | gRPC server |
| localhost:9091 | HTTP/REST gateway (auto-mapped from gRPC) |
| localhost:9091/metrics | Prometheus metrics |
| localhost:9091/healthcheck | Liveness probe — returns build/version info as JSON |
| localhost:9091/readycheck | Readiness probe — returns version JSON when ready |
| localhost:9091/swagger/ | Swagger UI |
| localhost:9091/debug/pprof/ | Go pprof profiling |
## Define Once, Get Everything
Your API is defined once in protobuf — ColdBrew generates everything else:
```protobuf
rpc Echo(EchoRequest) returns (EchoResponse) {
  option (google.api.http) = {
    post: "/api/v1/echo"
    body: "*"
  };
}
```
This single definition gives you:
- gRPC endpoint on :9090 — with reflection for grpcurl and Postman
- REST endpoint at POST /api/v1/echo on :9091 — via grpc-gateway
- Swagger UI at /swagger/ — interactive API docs from your proto
- Prometheus metrics — per-method latency, error rate, and request count
- Distributed tracing — automatic span creation through the interceptor chain
Run buf generate — it creates typed Go interfaces from your proto definitions. The compiler ensures every RPC method is implemented, so API changes are caught at build time, not runtime. Just fill in your business logic and make run. Logging, tracing, metrics, health checks, and graceful shutdown are wired automatically. See the full pipeline for details.
## How It Works
```
                 ┌─────────────────────────────────────────┐
                 │              ColdBrew Core              │
HTTP Request ──► │ ┌─────────┐     ┌────────────────────┐  │
                 │ │  HTTP   │     │  Interceptor Chain │  │
                 │ │ Gateway │──►  │                    │  │
                 │ │ (grpc-  │     │  ► Response Time   │  │
gRPC Request ──► │ │ gateway)│     │  ► Trace ID        │  │
                 │ └─────────┘     │  ► OpenTelemetry   │  │
                 │      │          │  ► Prometheus      │  │
                 │      ▼          │  ► Error Notify    │  │
                 │ ┌─────────┐     │  ► Panic Recovery  │  │
                 │ │  gRPC   │──►  │                    │  │──► Your Handler
                 │ │ Server  │     │                    │  │
                 │ └─────────┘     └────────────────────┘  │
                 │                                         │
                 │  /metrics  /healthcheck  /debug/pprof   │
                 └─────────────────────────────────────────┘
```
## Packages
ColdBrew is modular — use the full framework or pick individual packages:
| Package | What It Does |
|---|---|
| core | gRPC server + HTTP gateway, health checks, graceful shutdown |
| interceptors | Server/client interceptors for logging, tracing, metrics, retries |
| errors | Enhanced errors with stack traces and gRPC status codes |
| log | Structured logging with pluggable backends |
| tracing | Distributed tracing (OpenTelemetry, Jaeger, New Relic) |
| options | Request-scoped key-value store via context |
| grpcpool | Round-robin gRPC connection pool |
| data-builder | Dependency injection with parallel execution |
| workers | Background worker lifecycle with panic recovery and restart |
Each package can be used independently — you don’t need core to use errors or log.
## Don’t Repeat Yourself — Focus on Business Logic
Every Go microservice needs health probes, Prometheus metrics, structured logging, distributed tracing, graceful shutdown, and panic recovery. Without a framework, teams copy-paste this infrastructure into every service — and each copy drifts slightly, making debugging and onboarding harder.
ColdBrew handles all of it, so you write only the business logic:
| You write | ColdBrew handles |
|---|---|
| Proto definitions + business logic | gRPC server + REST gateway via grpc-gateway |
| OTLP_ENDPOINT env var | Distributed tracing with automatic span creation via OpenTelemetry |
| NEW_RELIC_LICENSE_KEY env var | APM integration via New Relic |
| Error returns | Stack traces, gRPC status codes, async notification to Sentry/Rollbar/Airbrake |
| Nothing | Prometheus metrics at /metrics — per-method latency, error rate, QPS |
| Nothing | Health/ready probes, graceful shutdown, pprof profiling |
| Nothing | Interceptor chain: logging, tracing, metrics, panic recovery |
| Nothing | vtprotobuf codec — up to ~4x faster proto marshal |
| Nothing | HTTP gzip/zstd compression, container-aware GOMAXPROCS |
New services inherit all of this automatically via the cookiecutter template — zero boilerplate to write, zero infrastructure to maintain.
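For example, enabling tracing and APM comes down to setting the environment variables from the table above before starting the service (the values below are placeholders):

```shell
# Point traces at any OTLP-compatible collector (placeholder address)
export OTLP_ENDPOINT="otel-collector:4317"

# Optional: enable New Relic APM (placeholder key)
export NEW_RELIC_LICENSE_KEY="your-license-key"

make run
```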
## Built on battle-tested libraries
ColdBrew composes proven Go libraries — not replacements:
| Category | Libraries |
|---|---|
| API | grpc + grpc-gateway — gRPC server with automatic REST gateway and Swagger UI |
| Observability | OpenTelemetry + Jaeger — distributed tracing; Prometheus + go-grpc-middleware — metrics |
| Monitoring | New Relic — APM; Sentry — error tracking and alerting |
| Performance | vtprotobuf — fast serialization; klauspost/compress — gzip/zstd HTTP compression |
| Runtime | automaxprocs — container-aware GOMAXPROCS; slog — structured logging |
## Next Steps
- Getting Started — Create your first ColdBrew service
- How-To Guides — Step-by-step guides for common tasks
- Production Deployment — Kubernetes, health probes, tracing, and graceful shutdown
- Integrations — Set up monitoring, tracing, and error tracking
- FAQ — Common questions and answers