gRPC Microservices Tutorial: Build, Secure, and Observe a Production-Ready API in Go
Step-by-step gRPC microservices tutorial: Protobuf design, Go services, TLS/mTLS, deadlines, retries, streaming, observability, Docker, and Kubernetes.
Overview
gRPC is a high-performance RPC framework built on HTTP/2, Protocol Buffers, and code generation. It suits microservices because it provides a strongly typed contract, efficient binary serialization, streaming, and first-class support for deadlines, retries, and observability. In this tutorial you will:
- Design Protobuf contracts for two services: Inventory and Order.
- Generate stubs, implement servers and clients in Go.
- Add deadlines, status-rich errors, retries, and streaming.
- Secure traffic with TLS/mTLS.
- Add logging, metrics, and tracing, and test the system.
- Containerize and run locally (with notes for Kubernetes).
Assumptions: You know basic Go and Docker. Commands target macOS/Linux; adapt as needed for Windows.
Prerequisites
- Go 1.21+
- buf (or protoc) for code generation
- Docker + Docker Compose
- grpcurl (handy for local testing)
- make (optional)
Directory layout:
grpc-micro/
  api/            # .proto files and buf configs
  services/
    inventory/
    order/
  build/
    certs/        # TLS/mTLS materials (dev)
  deploy/
    docker-compose.yml
Define the Protobuf Contracts
We’ll create two services:
- InventoryService: track stock and stream stock changes
- OrderService: place orders; calls InventoryService to reserve stock
api/inventory/v1/inventory.proto:
syntax = "proto3";
package inventory.v1;
option go_package = "github.com/acme/grpc-micro/gen/inventory/v1;inventoryv1";
message SkuRef { string id = 1; }
message Stock { string sku = 1; int32 available = 2; }
message ReserveRequest { string sku = 1; int32 qty = 2; string correlation_id = 3; }
message ReserveResponse { bool reserved = 1; int32 remaining = 2; }
message WatchRequest { repeated string skus = 1; }
service InventoryService {
  rpc GetStock(SkuRef) returns (Stock);
  rpc Reserve(ReserveRequest) returns (ReserveResponse);
  rpc Watch(WatchRequest) returns (stream Stock); // server streaming
}
api/order/v1/order.proto:
syntax = "proto3";
package order.v1;
option go_package = "github.com/acme/grpc-micro/gen/order/v1;orderv1";
import "inventory/v1/inventory.proto";
message OrderLine { string sku = 1; int32 qty = 2; }
message PlaceOrderRequest { string id = 1; repeated OrderLine lines = 2; }
message PlaceOrderResponse { string id = 1; string status = 2; }
service OrderService {
  rpc PlaceOrder(PlaceOrderRequest) returns (PlaceOrderResponse);
}
buf.gen.yaml (codegen):
version: v1
plugins:
  - plugin: buf.build/protocolbuffers/go
    out: gen
    opt: paths=source_relative
  - plugin: buf.build/grpc/go
    out: gen
    opt: paths=source_relative
Generate code:
cd grpc-micro
buf generate
This produces Go types and gRPC server/client interfaces in gen/.
Implement InventoryService (Go)
services/inventory/main.go (simplified):
package main
import (
	"context"
	"crypto/tls"
	"log"
	"net"
	"sync"
	"time"

	inv "github.com/acme/grpc-micro/gen/inventory/v1"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/peer"
	"google.golang.org/grpc/status"
)

type store struct {
	mu   sync.Mutex // handlers run concurrently; guard the map
	data map[string]int32
}

type server struct {
	inv.UnimplementedInventoryServiceServer
	s *store
}

func (s *server) GetStock(ctx context.Context, r *inv.SkuRef) (*inv.Stock, error) {
	s.s.mu.Lock()
	defer s.s.mu.Unlock()
	return &inv.Stock{Sku: r.Id, Available: s.s.data[r.Id]}, nil
}

func (s *server) Reserve(ctx context.Context, r *inv.ReserveRequest) (*inv.ReserveResponse, error) {
	if r.Qty <= 0 {
		return nil, status.Error(codes.InvalidArgument, "qty must be > 0")
	}
	s.s.mu.Lock()
	defer s.s.mu.Unlock()
	if s.s.data[r.Sku] < r.Qty {
		return &inv.ReserveResponse{Reserved: false, Remaining: s.s.data[r.Sku]}, nil
	}
	s.s.data[r.Sku] -= r.Qty
	return &inv.ReserveResponse{Reserved: true, Remaining: s.s.data[r.Sku]}, nil
}

func (s *server) Watch(r *inv.WatchRequest, stream inv.InventoryService_WatchServer) error {
	// naive ticker-based stream for demo
	for i := 0; i < 10; i++ {
		select {
		case <-stream.Context().Done():
			return stream.Context().Err()
		case <-time.After(1 * time.Second):
			for _, sku := range r.Skus {
				s.s.mu.Lock()
				st := &inv.Stock{Sku: sku, Available: s.s.data[sku]}
				s.s.mu.Unlock()
				if err := stream.Send(st); err != nil {
					return err
				}
			}
		}
	}
	return nil
}
func main() {
	// TLS (server-only to start; see mTLS later)
	cert, err := tls.LoadX509KeyPair("/certs/inventory.crt", "/certs/inventory.key")
	if err != nil {
		log.Fatalf("load key pair: %v", err)
	}
	creds := credentials.NewTLS(&tls.Config{Certificates: []tls.Certificate{cert}})
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer(grpc.Creds(creds),
		grpc.ChainUnaryInterceptor(loggingUnary),
		grpc.ChainStreamInterceptor(loggingStream),
	)
	inv.RegisterInventoryServiceServer(s, &server{s: &store{data: map[string]int32{"SKU-1": 50, "SKU-2": 5}}})
	log.Println("inventory listening on :50051")
	log.Fatal(s.Serve(lis))
}
func loggingUnary(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	st := time.Now()
	p, _ := peer.FromContext(ctx)
	resp, err := handler(ctx, req)
	log.Printf("method=%s peer=%v duration=%s err=%v", info.FullMethod, p.Addr, time.Since(st), err)
	return resp, err
}

func loggingStream(srv interface{}, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
	st := time.Now()
	err := handler(srv, ss)
	log.Printf("stream=%s duration=%s err=%v", info.FullMethod, time.Since(st), err)
	return err
}
Key points:
- Uses TLS credentials.
- Unary and server-streaming RPCs.
- Basic logging interceptors (plug in OpenTelemetry later).
Implement OrderService (client + server)
OrderService calls Reserve on InventoryService with deadlines and structured error handling.
services/order/main.go (essential parts):
package main
import (
	"context"
	"crypto/x509"
	"log"
	"net"
	"os"
	"time"

	inv "github.com/acme/grpc-micro/gen/inventory/v1"
	ord "github.com/acme/grpc-micro/gen/order/v1"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/status"
)

type server struct {
	ord.UnimplementedOrderServiceServer
	inv inv.InventoryServiceClient
}

func (s *server) PlaceOrder(ctx context.Context, r *ord.PlaceOrderRequest) (*ord.PlaceOrderResponse, error) {
	// deadline per request
	ctx, cancel := context.WithTimeout(ctx, 400*time.Millisecond)
	defer cancel()
	for _, l := range r.Lines {
		resp, err := s.inv.Reserve(ctx, &inv.ReserveRequest{Sku: l.Sku, Qty: l.Qty, CorrelationId: r.Id})
		if err != nil {
			if status.Code(err) == codes.DeadlineExceeded {
				return nil, status.Error(codes.Aborted, "inventory timeout")
			}
			return nil, err
		}
		if !resp.Reserved {
			return &ord.PlaceOrderResponse{Id: r.Id, Status: "REJECTED_OUT_OF_STOCK"}, nil
		}
	}
	return &ord.PlaceOrderResponse{Id: r.Id, Status: "CONFIRMED"}, nil
}

func main() {
	// Trust the InventoryService cert (dev self-signed)
	ca, err := os.ReadFile("/certs/ca.crt")
	if err != nil {
		log.Fatalf("read CA: %v", err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(ca) {
		log.Fatal("no CA certs parsed")
	}
	creds := credentials.NewClientTLSFromCert(pool, "inventory.local")
	// Retry policy via service config; note the lowerCamelCase field names.
	// DEADLINE_EXCEEDED is rarely worth retrying: the deadline spans all attempts.
	sc := `{"methodConfig":[{"name":[{"service":"inventory.v1.InventoryService"}],"timeout":"0.5s","retryPolicy":{"maxAttempts":3,"initialBackoff":"0.05s","maxBackoff":"0.2s","backoffMultiplier":1.5,"retryableStatusCodes":["UNAVAILABLE"]}}]}`
	conn, err := grpc.Dial("inventory:50051", grpc.WithTransportCredentials(creds), grpc.WithDefaultServiceConfig(sc))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	lis, err := net.Listen("tcp", ":50052")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	ord.RegisterOrderServiceServer(s, &server{inv: inv.NewInventoryServiceClient(conn)})
	log.Println("order listening on :50052")
	log.Fatal(s.Serve(lis))
}
Highlights:
- Per-call deadlines with context timeouts.
- Client-side retry policy using service config.
- Status-code–aware error handling.
Local TLS and mTLS Setup (dev)
For development only; use a proper PKI in prod.
mkdir -p build/certs && cd build/certs
# CA
openssl req -x509 -newkey rsa:2048 -days 365 -nodes -keyout ca.key -out ca.crt -subj "/CN=demo-ca"
# Inventory cert
openssl req -newkey rsa:2048 -nodes -keyout inventory.key -out inventory.csr -subj "/CN=inventory.local"
openssl x509 -req -in inventory.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out inventory.crt -days 365 -extfile <(printf "subjectAltName=DNS:inventory.local,IP:127.0.0.1")
- Server-side TLS: Inventory loads inventory.crt/key.
- Client trust: Order uses ca.crt to validate Inventory.
- For mTLS, issue a client cert and set tls.Config.ClientAuth = RequireAndVerifyClientCert on the server; use grpc.WithTransportCredentials with client certificate on the client.
Run with Docker Compose
deploy/docker-compose.yml (excerpt):
version: "3.9"
services:
  inventory:
    build: { context: ../, dockerfile: services/inventory/Dockerfile }
    ports: ["50051:50051"]
    volumes: ["../build/certs:/certs:ro"]
  order:
    build: { context: ../, dockerfile: services/order/Dockerfile }
    ports: ["50052:50052"]
    volumes: ["../build/certs:/certs:ro"]
    depends_on: [inventory]
Minimal Dockerfile (services/inventory/Dockerfile):
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/inventory ./services/inventory
FROM gcr.io/distroless/base-debian12
COPY --from=build /bin/inventory /inventory
COPY build/certs /certs
EXPOSE 50051
ENTRYPOINT ["/inventory"]
Test with grpcurl:
# Get stock
grpcurl -insecure -authority inventory.local localhost:50051 inventory.v1.InventoryService/GetStock '{"id":"SKU-1"}'
# Place an order
grpcurl -plaintext localhost:50052 order.v1.OrderService/PlaceOrder '{"id":"o-1","lines":[{"sku":"SKU-1","qty":3}]}'
Note: For TLS verification with a custom CA, use -cacert build/certs/ca.crt and omit -insecure.
Observability: Logs, Metrics, Traces
- Logs: Use chained interceptors for structured logs including method, duration, peer, and correlation IDs from metadata.
- Metrics: Expose Prometheus via OpenTelemetry metrics (otelgrpc).
- Traces: Export spans with context propagation.
Server with OpenTelemetry interceptors (snippet):
import (
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
)

s := grpc.NewServer(
	grpc.StatsHandler(otelgrpc.NewServerHandler()),
)
Client:
conn, _ := grpc.Dial(target,
	grpc.WithTransportCredentials(creds),
	grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
)
Also consider:
- Aggregated logs via Fluent Bit/Loki.
- Tracing backends: OTLP to Tempo/Jaeger.
- Correlate order.id across services via metadata.
Error Semantics and Contracts
- Use gRPC status codes thoughtfully:
- InvalidArgument for bad inputs
- NotFound when SKU unknown
- FailedPrecondition for business rule violation
- ResourceExhausted for quota/limits
- Aborted for conflict/timeouts coordinating multiple services
- Attach rich error details using google.rpc.ErrorInfo, RetryInfo, or ResourceInfo.
Example (errdetails is google.golang.org/genproto/googleapis/rpc/errdetails):
st := status.New(codes.FailedPrecondition, "insufficient stock")
if detailed, err := st.WithDetails(&errdetails.ErrorInfo{Reason: "OUT_OF_STOCK", Domain: "inventory.acme.com"}); err == nil {
	st = detailed
}
return nil, st.Err()
Streaming Patterns
We implemented server-streaming Watch. Other patterns:
- Client-streaming: upload a batch of inventory adjustments.
- Bidirectional: live order updates between OMS and UI.
Tips:
- Always honor client cancellations.
- Use flow control; send smaller messages.
- For UI/Web, use grpc-web with an Envoy proxy.
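A sketch of the client-streaming idea behind a hypothetical AdjustStock RPC (this RPC is not in the tutorial's proto; the Recv/io.EOF loop mirrors the interface buf would generate, abstracted here so the logic is testable without gRPC):

```go
package main

import (
	"fmt"
	"io"
)

// adjustment stands in for a generated request message of a hypothetical
// client-streaming AdjustStock RPC.
type adjustment struct {
	Sku   string
	Delta int32
}

// adjStream mirrors the Recv half of a generated client-streaming server
// interface; a real handler would also call SendAndClose with a summary.
type adjStream interface {
	Recv() (*adjustment, error) // io.EOF signals the client closed the stream
}

// applyAdjustments drains the stream, applies each delta, and reports how
// many adjustments were applied.
func applyAdjustments(stream adjStream, data map[string]int32) (int32, error) {
	var applied int32
	for {
		adj, err := stream.Recv()
		if err == io.EOF {
			return applied, nil // client finished; reply once via SendAndClose
		}
		if err != nil {
			return applied, err
		}
		data[adj.Sku] += adj.Delta
		applied++
	}
}

// sliceStream is a fake stream for the demo below.
type sliceStream struct {
	items []*adjustment
	i     int
}

func (s *sliceStream) Recv() (*adjustment, error) {
	if s.i >= len(s.items) {
		return nil, io.EOF
	}
	a := s.items[s.i]
	s.i++
	return a, nil
}

func main() {
	data := map[string]int32{"SKU-1": 50}
	n, _ := applyAdjustments(&sliceStream{items: []*adjustment{{"SKU-1", -3}, {"SKU-2", 7}}}, data)
	fmt.Println(n, data["SKU-1"], data["SKU-2"]) // prints 2 47 7
}
```

The same fake-stream trick works in unit tests for any streaming handler whose core logic is factored out of the generated types.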
Testing Strategy
- Unit test business logic without gRPC first.
- gRPC integration tests with in-memory connections (bufconn):
import (
	"context"
	"net"
	"testing"

	inv "github.com/acme/grpc-micro/gen/inventory/v1"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/test/bufconn"
)

func TestReserve(t *testing.T) {
	lis := bufconn.Listen(1024 * 1024)
	s := grpc.NewServer()
	inv.RegisterInventoryServiceServer(s, &server{s: &store{data: map[string]int32{"SKU": 1}}})
	go s.Serve(lis)
	defer s.Stop()
	dialer := func(context.Context, string) (net.Conn, error) { return lis.Dial() }
	conn, err := grpc.DialContext(context.Background(), "bufnet",
		grpc.WithContextDialer(dialer), grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		t.Fatal(err)
	}
	defer conn.Close()
	cli := inv.NewInventoryServiceClient(conn)
	// assertions...
	_ = cli
}
- Contract tests: ensure clients pass with old and new versions (backward compatibility).
- Use grpc-health-probe in CI to verify readiness.
Deploying to Kubernetes (essentials)
- Use a ClusterIP Service per gRPC Deployment.
- Liveness/readiness probes: use gRPC health checking (grpc.health.v1), either via the grpc-health-probe binary or Kubernetes' native gRPC probes (1.24+).
- Load balancing: Prefer sidecar or mesh (Envoy, Linkerd, Istio). gRPC over HTTP/2 requires proper LB configuration.
- Service discovery: DNS within cluster; outside, expose via an Ingress/Envoy with HTTP/2 and TLS enabled.
Example readiness probe snippet:
readinessProbe:
  exec:
    command: ["/bin/grpc-health-probe", "-addr=:50051"]
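That probe only works if the standard health service is registered on the server; a wiring fragment using grpc-go's health package (the service name string matches the full proto service name):

```go
import (
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// Register the standard health service alongside your own services.
hs := health.NewServer()
healthpb.RegisterHealthServer(s, hs)
// Flip to NOT_SERVING during shutdown or when dependencies are down.
hs.SetServingStatus("inventory.v1.InventoryService", healthpb.HealthCheckResponse_SERVING)
```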
Performance Tips
- Reuse connections; create one grpc.ClientConn per service and share it.
- Use deadlines on every call; avoid infinite waits.
- Enable gzip or zstd compression for large payloads.
- Keep messages small; avoid huge repeated fields.
- Tune max concurrent streams, keepalive pings for long-lived connections.
- Prefer streaming for continuous updates.
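Two of those tips as grpc-go wiring fragments (the durations are illustrative, not recommendations; importing encoding/gzip registers the gzip compressor):

```go
import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/encoding/gzip" // import registers the gzip compressor
	"google.golang.org/grpc/keepalive"
)

// Server: tolerate long-lived, mostly idle connections.
s := grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
	Time:    30 * time.Second, // ping a connection idle this long
	Timeout: 10 * time.Second, // close it if the ping is not acked
}))

// Client: compress every call on this connection.
conn, err := grpc.Dial(target,
	grpc.WithTransportCredentials(creds),
	grpc.WithDefaultCallOptions(grpc.UseCompressor(gzip.Name)),
)
```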
Backward Compatibility and Versioning
- Never repurpose fields; add new fields with new tags.
- Reserve removed field numbers/names in .proto.
- Version packages (inventory.v1 → v2) for breaking changes.
- Roll forward/back via canaries; test with old clients.
Security Checklist
- TLS everywhere; mTLS for service-to-service.
- Validate JWTs in interceptors for end-user identity; propagate claims via metadata.
- Principle of least privilege for service accounts and secrets.
- Rate limiting and quotas at the edge (Envoy/Ingress) and per method.
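A sketch of the token-extraction half of such a JWT interceptor, kept as pure logic: in the real interceptor you would read the authorization values with metadata.FromIncomingContext and fail with status.Error(codes.Unauthenticated, ...); actual verification (signature, expiry, audience) belongs to your IdP library.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// bearerToken pulls the raw JWT out of "authorization" metadata values.
// It checks shape only; cryptographic verification must follow and is the
// part that can never be skipped.
func bearerToken(authz []string) (string, error) {
	if len(authz) == 0 {
		return "", errors.New("missing authorization metadata")
	}
	const prefix = "Bearer "
	if !strings.HasPrefix(authz[0], prefix) {
		return "", errors.New("authorization is not a bearer token")
	}
	tok := strings.TrimPrefix(authz[0], prefix)
	if tok == "" {
		return "", errors.New("empty bearer token")
	}
	return tok, nil
}

func main() {
	tok, err := bearerToken([]string{"Bearer eyJ.example.token"})
	fmt.Println(tok, err) // prints eyJ.example.token <nil>
}
```

Verified claims can then be propagated to downstream services as metadata, mirroring the correlation-ID pattern above.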
Troubleshooting
- DeadlineExceeded: check client timeout, server work time, and retries.
- Unavailable: endpoint down or LB misconfigured; confirm HTTP/2 and health.
- Cert errors: verify SANs match authority; rotate certs before expiry.
- Reflection: enable gRPC reflection in dev to explore APIs with grpcurl.
Enable reflection (dev only):
import "google.golang.org/grpc/reflection"
reflection.Register(s)
Where to Go Next
- Add audit logs and idempotency keys to PlaceOrder.
- Implement client-streaming AdjustStock.
- Introduce an API Gateway or grpc-web for browser clients.
- Add OpenAPI via grpc-gateway to support REST for external consumers.
Summary
You built two gRPC microservices with clear Protobuf contracts, efficient unary and streaming calls, resilient clients (deadlines/retries), TLS security, and basic observability. With containers and a few Kubernetes basics, you’re set to scale this foundation into a production-grade platform.