Streaming CSV import (Node streams)

The first time a CSV import OOM’d a production process, I stopped trusting ‘just read it into memory’. Now I stream the upload to disk (or S3), stream-parse rows, and batch inserts into the DB. The win is that memory usage stays flat, even when the file is enormous.
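A minimal sketch of the parse-and-batch step, assuming the csv-parse package and a hypothetical db.insertRows bulk helper:

```js
const fs = require('node:fs');
const { parse } = require('csv-parse');

async function importCsv(path, db, batchSize = 1000) {
  // Async iteration over the parser applies backpressure automatically:
  // the file read pauses while a batch insert is awaited.
  const rows = fs.createReadStream(path).pipe(parse({ columns: true }));
  let batch = [];
  for await (const row of rows) {
    batch.push(row);
    if (batch.length >= batchSize) {
      await db.insertRows(batch); // hypothetical bulk-insert helper
      batch = [];
    }
  }
  if (batch.length) await db.insertRows(batch);
}
```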

BullMQ job idempotency via dedupe id

Retries are great, but duplicates are inevitable: workers crash, Redis reconnects, deploys happen. I prefer building idempotency into the job key itself. BullMQ supports a jobId that acts like a de-dupe key: if a job with the same id already exists, enqueueing it again is effectively a no-op.
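A sketch using BullMQ’s jobId option; the queue name and key scheme here are illustrative:

```js
const { Queue } = require('bullmq');

const emails = new Queue('emails', { connection: { host: '127.0.0.1', port: 6379 } });

// Deterministic jobId: if a job with this id is already in the queue,
// add() won't create a second one, so producer retries are safe.
async function enqueueWelcomeEmail(userId) {
  await emails.add('welcome', { userId }, { jobId: `welcome:${userId}` });
}
```

One caveat worth stating: once a job completes and is removed, the same id can be enqueued again, so the key should encode the unit of work (entity + action), not just the entity.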

GitHub Actions: cache + tests + build

CI has to be fast enough that developers don’t bypass it. I cache npm’s package store so we’re not re-downloading the world every run, and I split lint / test / build into separate steps so failures are obvious and logs are readable. The other non-negotiable is running a real production build in CI, so a broken build surfaces in the pipeline rather than during a deploy.
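A workflow sketch along those lines; the Node version and npm script names are assumptions about the project:

```yaml
name: ci
on: [push, pull_request]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm        # caches the npm store, keyed on package-lock.json
      - run: npm ci
      - run: npm run lint   # separate steps -> separate logs, obvious failures
      - run: npm test
      - run: npm run build
```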

WebSocket server with topic subscriptions (ws)

Raw WebSockets can turn into an unmaintainable mess unless you define a tiny protocol up front. I keep messages typed (even if it’s ‘JSON with a type field’) and I implement topic subscriptions so clients opt into exactly what they need. I track subscriptions per socket so everything gets cleaned up on disconnect.
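A compact version of that, assuming the ws package; the message shape and port are illustrative:

```js
const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 8080 });
const topics = new Map(); // topic -> Set of sockets

wss.on('connection', (ws) => {
  const subscriptions = new Set(); // tracked per socket for cleanup

  ws.on('message', (raw) => {
    let msg;
    try { msg = JSON.parse(raw); } catch { return ws.close(1003, 'invalid json'); }
    if (msg.type === 'subscribe' && typeof msg.topic === 'string') {
      if (!topics.has(msg.topic)) topics.set(msg.topic, new Set());
      topics.get(msg.topic).add(ws);
      subscriptions.add(msg.topic);
    }
  });

  ws.on('close', () => {
    for (const t of subscriptions) topics.get(t)?.delete(ws);
  });
});

// Fan a payload out to every socket subscribed to the topic.
function publish(topic, payload) {
  const msg = JSON.stringify({ type: 'event', topic, payload });
  for (const client of topics.get(topic) ?? []) {
    if (client.readyState === WebSocket.OPEN) client.send(msg);
  }
}
```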

Password hashing with Argon2

Bcrypt is fine, but Argon2 is the modern default with better resistance to GPU attacks. I store the full hash string (it includes parameters + salt) and keep verification in one utility so the rest of the app doesn’t grow its own auth helpers. The important part is that the encoded hash carries everything verification needs, so there is no separate salt column to manage.
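A minimal utility using the argon2 npm package, explicitly selecting the argon2id variant:

```js
const argon2 = require('argon2');

// The encoded hash embeds algorithm, cost parameters, and salt,
// e.g. $argon2id$v=19$m=65536,t=3,p=4$...
async function hashPassword(plain) {
  return argon2.hash(plain, { type: argon2.argon2id });
}

async function verifyPassword(storedHash, plain) {
  return argon2.verify(storedHash, plain); // reads params back out of the hash itself
}
```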

Multipart upload streaming (busboy)

Multipart uploads can blow up memory if you parse them naively. With busboy, I stream file data as it arrives and enforce size limits and content-type checks early. I avoid writing to disk unless I need it; for many flows I stream directly to object storage instead.
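A sketch with busboy on a plain Node HTTP server; uploadStream stands in for whatever writable stream the destination provides (disk, S3 multipart, etc.), and the CSV-only check is illustrative:

```js
const http = require('node:http');
const busboy = require('busboy');

http.createServer((req, res) => {
  const bb = busboy({
    headers: req.headers,
    limits: { files: 1, fileSize: 10 * 1024 * 1024 }, // enforce limits up front
  });

  bb.on('file', (name, file, info) => {
    if (info.mimeType !== 'text/csv') {
      res.statusCode = 415;
      return file.resume(); // drain and discard so the request can finish
    }
    file.on('limit', () => { res.statusCode = 413; }); // fileSize cap hit
    file.pipe(uploadStream(info.filename)); // hypothetical writable to object storage
  });

  bb.on('close', () => res.end());
  req.pipe(bb);
}).listen(3000);
```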

Graceful shutdown for Node HTTP servers

Deploys got a lot calmer once I treated SIGTERM as a first-class signal instead of an afterthought. In Kubernetes (and most PaaS platforms), you’re expected to stop accepting new requests quickly while finishing in-flight work. The worst failure mode is exiting immediately and severing in-flight requests, which turns every deploy into a small outage.
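The skeleton I start from; the 10-second deadline is an arbitrary safety valve:

```js
const http = require('node:http');

const server = http.createServer((req, res) => res.end('ok'));
server.listen(3000);

process.on('SIGTERM', () => {
  // close() stops accepting new connections; in-flight requests finish normally.
  server.close(() => process.exit(0));
  // Safety valve: if stragglers hold the server open, exit anyway.
  setTimeout(() => process.exit(1), 10_000).unref();
});
```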

HTTP client timeout with AbortController (fetch)

Unbounded network calls will eventually hang, and then your Node process gets stuck with slow requests chewing up the connection pool. I wrap fetch with an AbortController timeout so every outbound call has an upper bound. The key is distinguishing between a timeout and other failures, because they usually deserve different handling and retry behavior.
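A wrapper along those lines (Node 18+ global fetch); on newer Node versions AbortSignal.timeout(ms) can replace the manual timer:

```js
async function fetchWithTimeout(url, { timeoutMs = 5000, ...opts } = {}) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { ...opts, signal: controller.signal });
  } catch (err) {
    // Surface timeouts distinctly from DNS failures, resets, etc.
    if (err.name === 'AbortError') {
      throw new Error(`timed out after ${timeoutMs}ms: ${url}`);
    }
    throw err;
  } finally {
    clearTimeout(timer);
  }
}
```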

Rate limiting by IP + user (Express)

A single abusive client can ruin your latency budget for everyone else, so I rate limit early rather than trying to ‘detect abuse’ after the outage starts. I combine an IP bucket with a user bucket: IP protects unauthenticated endpoints, the user bucket protects authenticated routes where many legitimate users may share one IP.
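A sketch with express-rate-limit; req.user is assumed to be set by earlier auth middleware, and the window sizes and limits are placeholders:

```js
const express = require('express');
const { rateLimit } = require('express-rate-limit');

const app = express();

// Coarse IP bucket in front of everything, including login and signup.
app.use(rateLimit({ windowMs: 60_000, limit: 300 }));

// Tighter per-user bucket on the API; falls back to IP when anonymous.
app.use('/api', rateLimit({
  windowMs: 60_000,
  limit: 100,
  keyGenerator: (req) => req.user?.id ?? req.ip, // req.user from auth middleware (assumed)
}));
```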

Batched writes with COPY (conceptual)

Row-by-row inserts are painfully slow for big ingests. Postgres COPY is a great bulk-ingestion tool, and in Node you can stream into COPY using libraries like pg-copy-streams. The important part is validating before you stream, because once you’re inside a COPY, a single bad row aborts the whole operation.
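A sketch with pg and pg-copy-streams; the events table and CSV shape are illustrative:

```js
const { Pool } = require('pg');
const { from: copyFrom } = require('pg-copy-streams');
const { pipeline } = require('node:stream/promises');

async function bulkLoad(pool, csvStream) {
  const client = await pool.connect();
  try {
    // COPY ingests the whole stream as one statement: a bad row aborts it,
    // so validate rows upstream of csvStream.
    const sink = client.query(
      copyFrom('COPY events (id, payload) FROM STDIN WITH (FORMAT csv)')
    );
    await pipeline(csvStream, sink);
  } finally {
    client.release();
  }
}
```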

Repository pattern for DB access (small, pragmatic)

Not every codebase needs full-blown DDD, but I still want a clean seam between business logic and SQL. A tiny repository module per aggregate gives me that seam, and it makes testing easier because I can stub repository methods rather than mocking the database driver.
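What that seam looks like in practice, assuming a pg-style db.query; the table and fields are illustrative:

```js
// users-repo.js — the only module that knows this table's SQL.
function createUsersRepo(db) {
  return {
    async findById(id) {
      const { rows } = await db.query('SELECT * FROM users WHERE id = $1', [id]);
      return rows[0] ?? null;
    },
    async create({ email, name }) {
      const { rows } = await db.query(
        'INSERT INTO users (email, name) VALUES ($1, $2) RETURNING *',
        [email, name]
      );
      return rows[0];
    },
  };
}

module.exports = { createUsersRepo };
```

Business logic takes the repo as a dependency, so a test can pass { findById: async () => fakeUser } instead of spinning up Postgres.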