Password hashing with bcrypt and a calibrated cost

Never store passwords as raw strings, and don’t invent your own hashing scheme. I use bcrypt with a cost that’s calibrated for the environment (fast enough for login throughput, slow enough to resist offline cracking). The trick is to treat the cost as a tunable you benchmark on the hardware you actually deploy to, not a constant copied from a tutorial.
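
A minimal sketch of that, assuming golang.org/x/crypto/bcrypt; the CalibrateCost helper and its target duration are illustrative, not a fixed recommendation.

```go
package auth

import (
	"time"

	"golang.org/x/crypto/bcrypt"
)

// HashPassword hashes a plaintext password with the configured cost.
func HashPassword(password string, cost int) ([]byte, error) {
	return bcrypt.GenerateFromPassword([]byte(password), cost)
}

// CheckPassword returns nil when the password matches the stored hash.
func CheckPassword(hash []byte, password string) error {
	return bcrypt.CompareHashAndPassword(hash, []byte(password))
}

// CalibrateCost returns the highest cost whose hashing time stays under
// target on this hardware (e.g. ~250ms), never going below DefaultCost.
func CalibrateCost(target time.Duration) int {
	cost := bcrypt.DefaultCost
	for cost < bcrypt.MaxCost {
		start := time.Now()
		_, _ = bcrypt.GenerateFromPassword([]byte("calibration-probe"), cost+1)
		if time.Since(start) > target {
			break
		}
		cost++
	}
	return cost
}
```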

Config parsing with env defaults and strict validation

Config bugs are some of the most expensive production incidents because they vary by environment and can be hard to reproduce. I keep configuration in a typed struct, load it from environment variables, and validate it before the server starts. The validation runs at startup, so a bad value fails the deploy loudly instead of surfacing later as a confusing runtime error.
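
A sketch of what that looks like; the field names and defaults here are placeholders for whatever your service actually needs.

```go
package config

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// Config is the fully-typed configuration for the service.
type Config struct {
	Port            int
	DatabaseURL     string
	ShutdownTimeout time.Duration
}

// Load reads configuration from the environment, applies defaults, and
// returns an error if anything required is missing or invalid.
func Load() (*Config, error) {
	cfg := &Config{
		Port:            8080,
		ShutdownTimeout: 10 * time.Second,
	}
	if v := os.Getenv("PORT"); v != "" {
		p, err := strconv.Atoi(v)
		if err != nil {
			return nil, fmt.Errorf("PORT must be an integer: %w", err)
		}
		cfg.Port = p
	}
	cfg.DatabaseURL = os.Getenv("DATABASE_URL")

	// Validate before the server starts: fail the deploy, not the request.
	if cfg.Port <= 0 || cfg.Port > 65535 {
		return nil, fmt.Errorf("PORT out of range: %d", cfg.Port)
	}
	if cfg.DatabaseURL == "" {
		return nil, fmt.Errorf("DATABASE_URL is required")
	}
	return cfg, nil
}
```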

Consistent JSON responses (content-type + error envelopes)

One of the easiest ways to reduce frontend complexity is to be consistent about API responses. I keep a small helper that always sets Content-Type: application/json; charset=utf-8, uses a stable error envelope (error + optional details), and returns consistent shapes for success and failure so clients only ever need one decoder.
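
A minimal version of that helper; the package and function names are just for illustration.

```go
package httpx

import (
	"encoding/json"
	"net/http"
)

// errorEnvelope is the stable shape every error response uses.
type errorEnvelope struct {
	Error   string            `json:"error"`
	Details map[string]string `json:"details,omitempty"`
}

// WriteJSON writes any payload with the correct content type and status.
func WriteJSON(w http.ResponseWriter, status int, v any) {
	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	w.WriteHeader(status)
	_ = json.NewEncoder(w).Encode(v)
}

// WriteError writes the error envelope so clients can rely on one shape.
func WriteError(w http.ResponseWriter, status int, msg string, details map[string]string) {
	WriteJSON(w, status, errorEnvelope{Error: msg, Details: details})
}
```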

ETag handling for conditional GETs (cheap caching)

ETags are a low-effort way to cut bandwidth and CPU when clients poll for resources that rarely change. The server computes an ETag for the representation (often a version, content hash, or updated_at value) and compares it to If-None-Match. If they match, it responds with 304 Not Modified and skips the body entirely.
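
A sketch of the conditional GET, assuming the handler already has the serialized body in hand; hashing the bytes is one option alongside a version column or updated_at.

```go
package httpx

import (
	"crypto/sha256"
	"encoding/hex"
	"net/http"
)

// etagFor derives a strong ETag from the response body bytes.
func etagFor(body []byte) string {
	sum := sha256.Sum256(body)
	return `"` + hex.EncodeToString(sum[:8]) + `"`
}

// serveWithETag writes 304 when the client already has this representation.
func serveWithETag(w http.ResponseWriter, r *http.Request, body []byte) {
	etag := etagFor(body)
	w.Header().Set("ETag", etag)
	if r.Header.Get("If-None-Match") == etag {
		w.WriteHeader(http.StatusNotModified)
		return
	}
	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	w.Write(body)
}
```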

Server-Sent Events (SSE) with heartbeats and client cleanup

SSE is my go-to for “live updates” when I don’t need full bidirectional WebSockets. The key is to set the right headers (Content-Type: text/event-stream, Cache-Control: no-cache) and to flush periodically so intermediaries don’t buffer. I send heartbeat comments on a timer so proxies keep the connection alive, and I stop writing as soon as the request context is cancelled so disconnected clients don’t leak goroutines.
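
A sketch of the handler, assuming events arrive on a channel; the 15-second heartbeat interval is a placeholder to tune for your proxies.

```go
package sse

import (
	"fmt"
	"net/http"
	"time"
)

// Handler streams events and sends heartbeats so proxies don't time out.
func Handler(events <-chan string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		flusher, ok := w.(http.Flusher)
		if !ok {
			http.Error(w, "streaming unsupported", http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "text/event-stream")
		w.Header().Set("Cache-Control", "no-cache")

		heartbeat := time.NewTicker(15 * time.Second)
		defer heartbeat.Stop()

		for {
			select {
			case <-r.Context().Done():
				// Client disconnected: stop writing and let the handler exit.
				return
			case <-heartbeat.C:
				// Comment lines keep intermediaries from closing idle streams.
				fmt.Fprint(w, ": ping\n\n")
				flusher.Flush()
			case msg, ok := <-events:
				if !ok {
					return
				}
				fmt.Fprintf(w, "data: %s\n\n", msg)
				flusher.Flush()
			}
		}
	}
}
```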

Concurrency limiting with a context-aware semaphore

If you fan out work (HTTP calls, DB reads, image processing), the failure mode isn’t just “slow,” it’s “everything gets slow” because you saturate CPU or downstream connections. A semaphore is a simple way to cap concurrency. The important part is making acquisition context-aware: a request that is cancelled while waiting for a slot should return immediately instead of queueing forever.
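
A sketch using golang.org/x/sync/semaphore; the Limiter wrapper is illustrative, and a buffered channel works just as well when every task has the same weight.

```go
package worker

import (
	"context"

	"golang.org/x/sync/semaphore"
)

// Limiter caps how many tasks run at once; Acquire respects cancellation.
type Limiter struct {
	sem *semaphore.Weighted
}

func NewLimiter(max int64) *Limiter {
	return &Limiter{sem: semaphore.NewWeighted(max)}
}

// Do runs fn once a slot is free, or returns early if ctx is cancelled
// while waiting, so callers never queue forever behind a saturated pool.
func (l *Limiter) Do(ctx context.Context, fn func(context.Context) error) error {
	if err := l.sem.Acquire(ctx, 1); err != nil {
		return err // ctx cancelled or deadline exceeded while waiting
	}
	defer l.sem.Release(1)
	return fn(ctx)
}
```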

Circuit breaker around flaky dependencies

Retries alone can make an outage worse: if a dependency is hard failing, retries just add load. A circuit breaker adds a simple state machine: closed (normal), open (fail fast), and half-open (probe). I like gobreaker because it’s small and predictable: you decide when it trips, how long it stays open, and how many probe requests it allows while half-open.
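
A sketch with github.com/sony/gobreaker; the trip threshold and open duration are placeholders you would tune to the dependency’s real failure profile.

```go
package dep

import (
	"net/http"
	"time"

	"github.com/sony/gobreaker"
)

var cb = gobreaker.NewCircuitBreaker(gobreaker.Settings{
	Name:    "billing-api",
	Timeout: 30 * time.Second, // how long to stay open before probing
	ReadyToTrip: func(c gobreaker.Counts) bool {
		// Trip after 5 consecutive failures; tune to your traffic.
		return c.ConsecutiveFailures >= 5
	},
})

// fetchInvoice fails fast while the breaker is open instead of piling
// more load onto a dependency that is already hard down.
func fetchInvoice(client *http.Client, url string) (*http.Response, error) {
	res, err := cb.Execute(func() (interface{}, error) {
		return client.Get(url)
	})
	if err != nil {
		return nil, err
	}
	return res.(*http.Response), nil
}
```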

HTTP server timeouts that prevent slowloris and stuck connections

The default http.Server will happily keep connections open longer than you intended, which is how you end up with “mysterious” goroutine growth during partial outages. I set ReadHeaderTimeout to protect against slowloris-style attacks, keep IdleTimeout tight so idle keep-alive connections get reaped, and use ReadTimeout and WriteTimeout to bound how long any single exchange can run.
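
A sketch of the server setup; the specific durations are starting points, not universal answers.

```go
package main

import (
	"net/http"
	"time"
)

// newServer returns an http.Server with explicit timeouts; the zero values
// mean "no timeout", which is rarely what you want in production.
func newServer(addr string, h http.Handler) *http.Server {
	return &http.Server{
		Addr:              addr,
		Handler:           h,
		ReadHeaderTimeout: 5 * time.Second,  // defends against slowloris
		ReadTimeout:       10 * time.Second, // budget for reading the full request
		WriteTimeout:      30 * time.Second, // budget for writing the response
		IdleTimeout:       60 * time.Second, // reap idle keep-alive connections
	}
}
```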

Safe dynamic SQL with squirrel (optional filters, stable ordering)

Endpoints with optional filters often devolve into messy SQL string concatenation. I prefer building queries with squirrel so I can conditionally add WHERE clauses while keeping the final query parameterized. The pattern also helps keep ordering stable, which matters for pagination: without a deterministic ORDER BY, pages can skip or repeat rows between requests.
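
A sketch of the builder, assuming a hypothetical users table with a couple of optional filters.

```go
package store

import (
	sq "github.com/Masterminds/squirrel"
)

// UserFilter holds the optional query parameters from the request.
type UserFilter struct {
	Status *string
	Team   *string
	Limit  uint64
}

// buildUserQuery conditionally adds WHERE clauses while keeping the query
// parameterized and the ordering deterministic.
func buildUserQuery(f UserFilter) (string, []interface{}, error) {
	q := sq.Select("id", "email", "status").
		From("users").
		PlaceholderFormat(sq.Dollar)

	if f.Status != nil {
		q = q.Where(sq.Eq{"status": *f.Status})
	}
	if f.Team != nil {
		q = q.Where(sq.Eq{"team": *f.Team})
	}

	// Stable ordering (with id as a tiebreaker) keeps pagination consistent.
	return q.OrderBy("created_at DESC", "id DESC").
		Limit(f.Limit).
		ToSql()
}
```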

pgxpool initialization with max connections and statement timeout

Postgres stability depends on respecting its limits. I configure pgxpool with explicit MaxConns and MaxConnLifetime so the service doesn't accidentally open too many connections during bursts. I also set a session statement_timeout in AfterConnect, which keeps a single runaway query from holding a connection (and its locks) indefinitely.
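
A sketch with pgx v5; the MaxConns, lifetime, and 30s timeout values are placeholders to size against your Postgres max_connections and workload.

```go
package db

import (
	"context"
	"time"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

// NewPool builds a pgx pool with explicit limits and a per-session
// statement timeout so one bad query can't hold a connection forever.
func NewPool(ctx context.Context, dsn string) (*pgxpool.Pool, error) {
	cfg, err := pgxpool.ParseConfig(dsn)
	if err != nil {
		return nil, err
	}
	cfg.MaxConns = 20                      // stay well under Postgres max_connections
	cfg.MaxConnLifetime = 30 * time.Minute // recycle connections periodically

	// Applied to every new connection the pool opens.
	cfg.AfterConnect = func(ctx context.Context, conn *pgx.Conn) error {
		_, err := conn.Exec(ctx, "SET statement_timeout = '30s'")
		return err
	}
	return pgxpool.NewWithConfig(ctx, cfg)
}
```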

Prometheus metrics middleware capturing status + duration

I like to start observability with two metrics: request duration and response codes. The wrapper below intercepts WriteHeader to capture status codes and then records both a histogram observation and a counter increment. The biggest gotcha is label cardinality: label by the route pattern, not the raw URL path, or every unique ID in a URL becomes its own time series.
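
A sketch of that wrapper; the metric names and the idea of passing the route pattern in explicitly are assumptions, and routers that expose the matched pattern let you derive it instead.

```go
package metrics

import (
	"net/http"
	"strconv"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	reqDuration = promauto.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "http_request_duration_seconds",
		Help:    "Request duration by route and status.",
		Buckets: prometheus.DefBuckets,
	}, []string{"route", "method", "status"})

	reqTotal = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "http_requests_total",
		Help: "Request count by route and status.",
	}, []string{"route", "method", "status"})
)

// statusRecorder intercepts WriteHeader so we can label by status code.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// Middleware records duration and count, labeled by the route pattern
// (passed by the caller) rather than the raw path, to keep cardinality low.
func Middleware(route string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		start := time.Now()
		next.ServeHTTP(rec, r)

		labels := prometheus.Labels{
			"route":  route,
			"method": r.Method,
			"status": strconv.Itoa(rec.status),
		}
		reqDuration.With(labels).Observe(time.Since(start).Seconds())
		reqTotal.With(labels).Inc()
	})
}
```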

OpenTelemetry tracer provider with ratio-based sampling

To get useful traces, you need propagation and a real exporter. I set a global TextMapPropagator (TraceContext + Baggage) so inbound headers connect spans across services. Then I configure an OTLP exporter and a batch span processor so tracing overhead stays small and predictable, and sample a fixed ratio of traces with a parent-based sampler so child spans follow their parent’s decision.
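
A sketch of the setup, assuming the OTLP gRPC exporter and a 10% sampling ratio; the endpoint and resource attributes would normally come from config.

```go
package tracing

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/propagation"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// Setup installs propagation, an OTLP exporter, and a sampled tracer
// provider. It returns a shutdown func so spans can be flushed on exit.
func Setup(ctx context.Context) (func(context.Context) error, error) {
	// Propagate W3C trace context and baggage across service boundaries.
	otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
		propagation.TraceContext{},
		propagation.Baggage{},
	))

	exporter, err := otlptracegrpc.New(ctx)
	if err != nil {
		return nil, err
	}

	tp := sdktrace.NewTracerProvider(
		// Batch exports so tracing doesn't add latency to request handling.
		sdktrace.WithBatcher(exporter),
		// Sample 10% of new traces; children follow the parent's decision.
		sdktrace.WithSampler(sdktrace.ParentBased(sdktrace.TraceIDRatioBased(0.1))),
	)
	otel.SetTracerProvider(tp)
	return tp.Shutdown, nil
}
```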