Running Tests
The ColdBrew cookiecutter generates a project with tests ready to go:
make test # Run tests with race detector + coverage
make bench # Run benchmarks (10s per benchmark, with memory stats)
make lint # golangci-lint + govulncheck
make test runs with -race and generates a cover.out coverage profile. Both GitHub Actions and GitLab CI pipelines run these automatically on every push and pull request.
Writing Unit Tests
Tests live alongside the code they test. The generated project includes tests in service/service_test.go and service/healthcheck_test.go.
Test pattern
Use testify/assert for assertions. config.Get() is a helper generated by the cookiecutter template in config/config.go — it wraps ColdBrew’s config.GetColdBrewConfig() with your app-specific fields:
func TestEcho(t *testing.T) {
    s, err := New(config.Get())
    assert.NoError(t, err)
    assert.NotNil(t, s)
    resp, err := s.Echo(context.Background(), &proto.EchoRequest{Msg: "hello"})
    assert.NoError(t, err)
    assert.Equal(t, "hello", resp.Msg)
}
Testing error paths
Always test both success and error cases:
func TestError(t *testing.T) {
    s, err := New(config.Get())
    assert.NoError(t, err)
    resp, err := s.Error(context.Background(), nil)
    assert.Error(t, err)
    assert.Nil(t, resp)
}
Testing health checks
The service starts as NOT_SERVING. Use SetReady() and SetNotReady() to control the readiness state:
func TestReadyCheck(t *testing.T) {
    s, err := New(config.Get())
    assert.NoError(t, err)
    SetNotReady()
    data, err := s.ReadyCheck(context.Background(), nil)
    assert.Error(t, err)
    SetReady()
    data, err = s.ReadyCheck(context.Background(), nil)
    assert.NoError(t, err)
    assert.NotEmpty(t, data.Data)
}
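Conceptually, SetReady() and SetNotReady() just toggle a shared readiness flag that the ready check consults. The sketch below models that with a package-level atomic flag; this is an assumption for illustration, and the generated service's actual implementation may differ:

```go
import (
    "errors"
    "sync/atomic"
)

// ready models the service-wide readiness flag (hypothetical detail).
// atomic.Bool defaults to false, matching the NOT_SERVING start state.
var ready atomic.Bool

func SetReady()    { ready.Store(true) }
func SetNotReady() { ready.Store(false) }

// readyCheck mirrors ReadyCheck's contract: error while not ready, nil once ready.
func readyCheck() error {
    if !ready.Load() {
        return errors.New("service not ready")
    }
    return nil
}
```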
Mock Generation
The project uses mockery to generate mocks for interfaces. Mocks are configured in .mockery.yaml and output to misc/mocks/.
Generating mocks
make mock
This generates mock implementations for all interfaces in the service/ package. Re-run after adding or changing interfaces.
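As an example of what mockery picks up, here is a hypothetical interface of the kind you might declare in service/ (Notifier, Announce, and fakeNotifier are illustrative names, not part of the generated project). Code that depends on the interface rather than a concrete type is what makes mock injection possible:

```go
import "errors"

// Notifier is a hypothetical interface; after `make mock`, mockery would
// emit a mocks.Notifier implementation under misc/mocks/.
type Notifier interface {
    Notify(msg string) error
}

// Announce depends on the interface, so tests can pass in a mock or fake.
func Announce(n Notifier, msg string) error {
    if msg == "" {
        return errors.New("empty message")
    }
    return n.Notify(msg)
}

// fakeNotifier is a trivial hand-rolled test double; the mockery-generated
// mock is a richer equivalent with expecter methods.
type fakeNotifier struct{ last string }

func (f *fakeNotifier) Notify(msg string) error {
    f.last = msg
    return nil
}
```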
Configuration
The .mockery.yaml file controls mock generation:
with-expecter: true   # Generate type-safe expecter methods
all: true             # Mock all interfaces in the package
dir: misc/mocks/      # Output directory
outpkg: "mocks"       # Package name for generated mocks
packages:
  github.com/yourname/yourapp/service:
    config:
      recursive: true
Using mocks in tests
Import the generated mocks and use the expecter pattern for type-safe expectations:
import "github.com/yourname/yourapp/misc/mocks"
func TestWithMock(t *testing.T) {
    m := mocks.NewMyInterface(t)
    // Set up expectations using the expecter
    m.EXPECT().DoSomething("input").Return("output", nil)
    // Pass the mock to the code under test
    result, err := MyFunction(m)
    assert.NoError(t, err)
    assert.Equal(t, "output", result)
}
Mocks are auto-cleaned up when the test finishes. If an expected call wasn’t made, the test fails automatically.
Benchmarks
Benchmarks live in test files alongside unit tests. The generated project includes BenchmarkEcho in service/service_test.go.
Writing a benchmark
func BenchmarkEcho(b *testing.B) {
    cfg := config.Get()
    s, err := New(cfg)
    if err != nil {
        b.Fatal(err)
    }
    ctx := context.Background()
    req := &proto.EchoRequest{Msg: "hello"}
    b.ResetTimer() // Exclude setup time from measurement
    for i := 0; i < b.N; i++ {
        resp, err := s.Echo(ctx, req)
        if err != nil {
            b.Fatal(err)
        }
        _ = resp
    }
}
Key points:
- Do setup before b.ResetTimer() to exclude it from timing
- Use b.Fatal() for errors (not t.Fatal())
- Keep the hot loop minimal — only the code you're measuring
Running benchmarks
make bench # All benchmarks (10s each, with memory stats)
go test -bench=BenchmarkEcho -benchmem ./service/... # Single benchmark
Coverage
Local coverage report
make test # Generates cover.out
make coverage-html # Opens interactive HTML report (cover.html)
The HTML report highlights covered and uncovered lines — useful for spotting gaps.
CI coverage
Both CI pipelines convert cover.out to Cobertura XML format for reporting:
- GitHub Actions — uploads cover.xml as a build artifact
- GitLab CI — uploads the Cobertura report, with the coverage percentage extracted from the test output
Coverage scope
make test measures coverage across your application packages:
go test -race -coverpkg=.,./config/...,./service/... -coverprofile cover.out ./...
To add coverage for new packages, append them to the -coverpkg flag in the Makefile.
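For instance, adding a hypothetical ./store package to the project would mean extending that Makefile line like so (the ./store path is illustrative):

```shell
go test -race -coverpkg=.,./config/...,./service/...,./store/... -coverprofile cover.out ./...
```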