Dense embeddings help when lexical overlap is weak but semantic similarity matters. I use them for retrieval prototypes, clustering, and feature enrichment when transformer infrastructure is overkill. The main discipline is keeping training data clean and checking nearest neighbors for obvious failure modes.
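The nearest-neighbor check can be a few lines of numpy. A minimal sketch, with hypothetical toy vectors standing in for real model embeddings: normalize, take cosine similarity, and eyeball each item's neighbors for obvious mismatches.

```python
import numpy as np

# Hypothetical toy data: in practice these vectors would come from an
# embedding model; the values here are made up for illustration.
docs = ["refund policy", "return an item", "gpu out of memory", "cuda oom error"]
emb = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.3],
    [0.2, 0.8, 0.4],
])

# L2-normalize so a dot product equals cosine similarity.
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)

def nearest_neighbors(i, k=2):
    """Return the k most similar docs to doc i, excluding itself."""
    sims = emb @ emb[i]
    order = np.argsort(-sims)
    return [(docs[j], float(sims[j])) for j in order if j != i][:k]

# Eyeball neighbors for failure modes, e.g. unrelated topics
# ranking above true paraphrases.
for i, d in enumerate(docs):
    print(d, "->", nearest_neighbors(i))
```

With clean embeddings, "refund policy" should retrieve "return an item" before either GPU document; if it doesn't, that points at training-data or preprocessing problems worth fixing before scaling up.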