Content Security Policy headers (defense-in-depth)

XSS is still the most common ‘we didn’t think about it’ vulnerability in web apps. A Content-Security-Policy doesn’t replace sanitization, but it dramatically reduces blast radius when something slips through. I start from a strict baseline (no inline scripts, no eval, same-origin by default) and only loosen the directives a given app genuinely needs.
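
A minimal sketch of that baseline as Express middleware; the exact directive list is an assumption to tune per app (for example, adding a CDN host to script-src).

```typescript
// Express middleware setting a strict CSP baseline (directives are illustrative).
import express from "express";

const app = express();

app.use((_req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    [
      "default-src 'self'",     // same-origin by default
      "script-src 'self'",      // no inline scripts, no eval
      "style-src 'self'",       // no inline styles
      "object-src 'none'",      // block plugins entirely
      "base-uri 'self'",        // prevent <base> tag hijacking
      "frame-ancestors 'none'", // clickjacking protection
    ].join("; ")
  );
  next();
});

app.listen(3000);
```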

React Error Boundary + error reporting hook

Frontend errors are inevitable, so I’d rather control the blast radius. I wrap unstable sections with an ErrorBoundary that shows a friendly fallback and captures context for reporting. I also avoid spamming the error tracker by including a fingerprint, so repeated occurrences of the same error get grouped instead of filed as new issues.
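
A sketch of the boundary, assuming a hypothetical reportError helper in place of a real error-tracking SDK:

```typescript
import React from "react";

type Props = { fallback: React.ReactNode; children: React.ReactNode };
type State = { hasError: boolean };

export class ErrorBoundary extends React.Component<Props, State> {
  state: State = { hasError: false };

  static getDerivedStateFromError(): State {
    // Switch to the fallback UI on the next render.
    return { hasError: true };
  }

  componentDidCatch(error: Error, info: React.ErrorInfo) {
    // Fingerprint on message + top component frame so repeats are grouped.
    const topFrame = (info.componentStack ?? "").split("\n").find(Boolean) ?? "";
    reportError({
      error,
      componentStack: info.componentStack,
      fingerprint: `${error.message}:${topFrame.trim()}`,
    });
  }

  render() {
    return this.state.hasError ? this.props.fallback : this.props.children;
  }
}

// Hypothetical reporter; a real app would call its error-tracking SDK here.
function reportError(payload: { error: Error; componentStack?: string | null; fingerprint: string }) {
  console.error("[report]", payload.fingerprint, payload.error);
}
```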

React Query optimistic update for likes

Interactive UI should feel instant even when the network isn’t. With optimistic updates, I update the cache immediately and roll back if the server rejects the mutation. The trick is to keep optimistic state small and obvious: update a count and a boolean, snapshot the previous value, and restore it on error.
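
A sketch with TanStack Query v5; the toggleLike endpoint and the ["post", id] cache shape are assumptions for illustration.

```typescript
import { useMutation, useQueryClient } from "@tanstack/react-query";

type Post = { id: string; likeCount: number; likedByMe: boolean };

async function toggleLike(postId: string): Promise<void> {
  const res = await fetch(`/api/posts/${postId}/like`, { method: "POST" });
  if (!res.ok) throw new Error("like failed");
}

export function useLikeMutation(postId: string) {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: () => toggleLike(postId),
    onMutate: async () => {
      // Stop in-flight refetches from overwriting the optimistic value.
      await queryClient.cancelQueries({ queryKey: ["post", postId] });
      const previous = queryClient.getQueryData<Post>(["post", postId]);

      // Optimistic state stays small: a count and a boolean.
      queryClient.setQueryData<Post>(["post", postId], (old) =>
        old
          ? {
              ...old,
              likedByMe: !old.likedByMe,
              likeCount: old.likeCount + (old.likedByMe ? -1 : 1),
            }
          : old
      );
      return { previous };
    },
    onError: (_err, _vars, context) => {
      // Roll back to the snapshot taken in onMutate.
      if (context?.previous) {
        queryClient.setQueryData(["post", postId], context.previous);
      }
    },
    onSettled: () => {
      // Re-sync with the server either way.
      queryClient.invalidateQueries({ queryKey: ["post", postId] });
    },
  });
}
```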

Next.js Route Handler with auth guard

I like API routes that read like tiny, well-scoped controllers. In Next.js Route Handlers, I keep auth and input parsing right at the top, then return explicit status codes instead of throwing for expected failures. I also avoid leaking server-only details (stack traces, raw database errors) in responses.
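
A sketch of an App Router Route Handler (app/api/items/route.ts); getSessionFromRequest and createItem are hypothetical stand-ins for your auth and data layers.

```typescript
import { NextRequest, NextResponse } from "next/server";
import { z } from "zod";

const bodySchema = z.object({ name: z.string().min(1).max(100) });

export async function POST(req: NextRequest) {
  // Auth first: an expected failure gets an explicit status code, not a throw.
  const session = await getSessionFromRequest(req);
  if (!session) {
    return NextResponse.json({ error: "unauthorized" }, { status: 401 });
  }

  // Then input parsing: a 400 with a generic message, never a stack trace.
  const parsed = bodySchema.safeParse(await req.json().catch(() => null));
  if (!parsed.success) {
    return NextResponse.json({ error: "invalid body" }, { status: 400 });
  }

  const item = await createItem(session.userId, parsed.data.name);
  return NextResponse.json({ item }, { status: 201 });
}

// Hypothetical helpers so the sketch is self-contained.
async function getSessionFromRequest(req: NextRequest): Promise<{ userId: string } | null> {
  return req.headers.get("authorization") ? { userId: "u_1" } : null;
}
async function createItem(userId: string, name: string) {
  return { id: crypto.randomUUID(), userId, name };
}
```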

WebSocket server with topic subscriptions (ws)

Raw WebSockets can turn into an unmaintainable mess unless you define a tiny protocol up front. I keep messages typed (even if it’s ‘JSON with a type field’) and I implement topic subscriptions so clients opt into exactly what they need. I track subscriptions per connection and remove them when the socket closes, so dead connections don’t leak.
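
A sketch with the ws package; the subscribe/unsubscribe/publish message shape is an assumed protocol, not anything ws prescribes.

```typescript
import { WebSocketServer, WebSocket } from "ws";

type Msg =
  | { type: "subscribe"; topic: string }
  | { type: "unsubscribe"; topic: string }
  | { type: "publish"; topic: string; data: unknown };

const wss = new WebSocketServer({ port: 8080 });
const topics = new Map<string, Set<WebSocket>>();

wss.on("connection", (socket) => {
  const subscribed = new Set<string>();

  socket.on("message", (raw) => {
    let msg: Msg;
    try {
      msg = JSON.parse(raw.toString());
    } catch {
      return; // ignore malformed frames
    }

    if (msg.type === "subscribe") {
      subscribed.add(msg.topic);
      let set = topics.get(msg.topic);
      if (!set) topics.set(msg.topic, (set = new Set()));
      set.add(socket);
    } else if (msg.type === "unsubscribe") {
      subscribed.delete(msg.topic);
      topics.get(msg.topic)?.delete(socket);
    } else if (msg.type === "publish") {
      // Fan out only to sockets that opted into this topic.
      for (const peer of topics.get(msg.topic) ?? []) {
        if (peer.readyState === WebSocket.OPEN) {
          peer.send(JSON.stringify({ topic: msg.topic, data: msg.data }));
        }
      }
    }
  });

  // Clean up subscriptions so closed sockets don't leak.
  socket.on("close", () => {
    for (const topic of subscribed) topics.get(topic)?.delete(socket);
  });
});
```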

SSE endpoint for server-to-browser events

When I want ‘real time’ updates without the operational overhead of WebSockets, I reach for Server-Sent Events (SSE). It’s just streaming HTTP, so it behaves well behind proxies and is easy to reason about. I set the right headers (Content-Type: text/event-stream, Cache-Control: no-cache) and send periodic keep-alive comments so idle connections aren’t dropped.
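
A sketch of the endpoint in Express; the 15-second keep-alive interval is an arbitrary assumption.

```typescript
import express from "express";

const app = express();

app.get("/events", (req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });

  // Each SSE message is "event:" + "data:" lines terminated by a blank line.
  const send = (event: string, data: unknown) => {
    res.write(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`);
  };

  send("hello", { connectedAt: Date.now() });

  // Comment lines (starting with ":") are ignored by EventSource but keep
  // proxies and load balancers from idling out the connection.
  const keepAlive = setInterval(() => res.write(": keep-alive\n\n"), 15_000);

  req.on("close", () => clearInterval(keepAlive));
});

app.listen(3000);
```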

OpenTelemetry tracing for Node HTTP

Once services talk to other services, logs alone don’t tell you where time goes. I like OpenTelemetry because it’s vendor-neutral: export to Honeycomb, Datadog, Tempo—whatever your team uses. I keep the initial setup minimal: auto-instrument http, express, and the database client, and only add manual spans where the automatic ones don’t tell the story.
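
A sketch of that bootstrap (e.g. a tracing.ts loaded before the app), assuming an OTLP collector on localhost and a recent @opentelemetry/sdk-node that accepts serviceName directly.

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  serviceName: "checkout-api", // illustrative name
  traceExporter: new OTLPTraceExporter({
    url: "http://localhost:4318/v1/traces", // assumed collector endpoint
  }),
  // Auto-instruments http, express, pg, redis, etc.; no manual spans to start.
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

// Flush remaining spans on shutdown so the last requests aren't lost.
process.on("SIGTERM", () => {
  sdk.shutdown().finally(() => process.exit(0));
});
```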

BullMQ worker with retries + dead-letter

Background jobs will fail in production, so I like having a predictable story for retries and poison messages. BullMQ is a solid middle ground: Redis-backed, straightforward, and good enough for most apps. I set explicit attempts and backoff, and when a job exhausts its retries I move it to a dead-letter queue so it can be inspected and replayed instead of silently disappearing.
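
A sketch of the worker; BullMQ has no built-in dead-letter queue, so routing exhausted jobs to a second queue from the failed handler is the assumed approach here, and the queue names are made up.

```typescript
import { Queue, Worker } from "bullmq";

const connection = { host: "127.0.0.1", port: 6379 };

export const emailQueue = new Queue("emails", {
  connection,
  defaultJobOptions: {
    attempts: 5,                                   // total tries, including the first
    backoff: { type: "exponential", delay: 1000 }, // 1s, 2s, 4s, ...
    removeOnComplete: 1000,                        // keep a bounded history
  },
});

const deadLetterQueue = new Queue("emails-dead-letter", { connection });

const worker = new Worker(
  "emails",
  async (job) => {
    await sendEmail(job.data); // hypothetical; throws on transient failures
  },
  { connection }
);

// Once the final attempt fails, park the job for manual inspection/replay.
worker.on("failed", async (job, err) => {
  if (job && job.attemptsMade >= (job.opts.attempts ?? 1)) {
    await deadLetterQueue.add("dead", { original: job.data, error: err.message });
  }
});

async function sendEmail(data: unknown) {
  /* hypothetical mail-sending logic */
}
```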

Transactional outbox in Node (DB write + event)

The moment you split ‘write to DB’ and ‘publish to a queue’ into two independent operations, you create a place to lose data. Publish first and a DB failure means consumers act on something that never happened. Write first and a publish failure means consumers never hear about something that did. The transactional outbox avoids both: the event is inserted into an outbox table in the same transaction as the business write, and a separate relay publishes outbox rows to the queue afterwards.
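
A sketch of the write side with node-postgres; the orders and outbox tables are assumptions, and the relay that drains outbox rows to the broker runs as a separate process.

```typescript
import { Pool } from "pg";

const pool = new Pool();

export async function createOrder(customerId: string, totalCents: number) {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");

    const { rows } = await client.query(
      "INSERT INTO orders (customer_id, total_cents) VALUES ($1, $2) RETURNING id",
      [customerId, totalCents]
    );

    // Same transaction: either both the row and the event exist, or neither does.
    await client.query(
      "INSERT INTO outbox (event_type, payload) VALUES ($1, $2)",
      ["order.created", JSON.stringify({ orderId: rows[0].id, customerId, totalCents })]
    );

    await client.query("COMMIT");
    return rows[0].id as string;
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```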

Cursor-based pagination with stable ordering

Offset pagination falls apart as soon as rows are inserted or deleted between page fetches—users see duplicates or missing items. Cursor pagination fixes that with stable ordering and ‘seek’ queries. I use a compound cursor that includes both the primary key and the ordering column, so ties in the sort value still produce a deterministic order.
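
A sketch of the seek query with node-postgres, assuming a posts table ordered by created_at with id as the tie-breaker.

```typescript
import { Pool } from "pg";

const pool = new Pool();

type Cursor = { createdAt: string; id: string };

export async function listPosts(limit: number, after?: Cursor) {
  const { rows } = after
    ? await pool.query(
        `SELECT id, title, created_at
           FROM posts
          WHERE (created_at, id) < ($1, $2)   -- seek past the cursor
          ORDER BY created_at DESC, id DESC   -- id breaks timestamp ties
          LIMIT $3`,
        [after.createdAt, after.id, limit]
      )
    : await pool.query(
        `SELECT id, title, created_at
           FROM posts
          ORDER BY created_at DESC, id DESC
          LIMIT $1`,
        [limit]
      );

  const last = rows[rows.length - 1];
  return {
    items: rows,
    nextCursor: last ? { createdAt: last.created_at, id: last.id } : null,
  };
}
```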

Rate limiting by IP + user (Express)

A single abusive client can ruin your latency budget for everyone else, so I rate limit early rather than trying to ‘detect abuse’ after the outage starts. I combine an IP bucket with a user bucket: IP protects unauthenticated endpoints, user protects authenticated ones from a single account burning through everyone’s capacity.
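
A sketch of the two buckets as in-memory Express middleware; the limits and window are assumptions, and a real deployment would back the counters with Redis so they are shared across instances.

```typescript
import express from "express";

const WINDOW_MS = 60_000;
const IP_LIMIT = 100;   // protects unauthenticated endpoints
const USER_LIMIT = 300; // protects against a single hot account

const hits = new Map<string, { count: number; resetAt: number }>();

// Fixed-window counter: true while the key is under its limit.
function take(key: string, limit: number): boolean {
  const now = Date.now();
  const bucket = hits.get(key);
  if (!bucket || bucket.resetAt <= now) {
    hits.set(key, { count: 1, resetAt: now + WINDOW_MS });
    return true;
  }
  bucket.count += 1;
  return bucket.count <= limit;
}

const app = express();

app.use((req, res, next) => {
  const ipOk = take(`ip:${req.ip}`, IP_LIMIT);
  // `req.user` is a hypothetical field set by earlier auth middleware.
  const userId = (req as any).user?.id as string | undefined;
  const userOk = userId ? take(`user:${userId}`, USER_LIMIT) : true;

  if (!ipOk || !userOk) {
    res.status(429).json({ error: "rate limit exceeded" });
    return;
  }
  next();
});
```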

Redis cache-aside for expensive reads

Most ‘caching’ bugs are really invalidation bugs, so I stick to a simple cache-aside pattern with conservative TTLs and treat cache misses as normal. The big failure mode to avoid is a stampede: if many requests miss at once, you can crush your DB. For hot keys I use a short recompute lock and jittered TTLs so a burst of misses doesn’t turn into a burst of identical expensive queries.
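
A sketch with ioredis; the key names, TTLs, jitter, and lock duration are all assumptions.

```typescript
import Redis from "ioredis";

const redis = new Redis();

export async function getProduct(
  id: string,
  loadFromDb: (id: string) => Promise<object>
) {
  const key = `product:${id}`;

  // Cache hit: the normal, boring path.
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  // Stampede guard: only the caller that wins the lock writes the cache;
  // everyone else still reads the DB but doesn't race to repopulate.
  const lock = await redis.set(`${key}:lock`, "1", "EX", 10, "NX");

  const fresh = await loadFromDb(id);

  if (lock === "OK") {
    // Jitter the TTL so a batch of keys doesn't expire at the same instant.
    const ttl = 300 + Math.floor(Math.random() * 60);
    await redis.set(key, JSON.stringify(fresh), "EX", ttl);
    await redis.del(`${key}:lock`);
  }
  return fresh;
}
```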