from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# A small, widely used sentiment checkpoint fine-tuned on SST-2.
model_name = 'distilbert-base-uncased-finetuned-sst-2-english'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

classifier = pipeline(
    task='text-classification',
    model=model,
    tokenizer=tokenizer,
    truncation=True,  # clip inputs that exceed the model's max length
)

texts = [
    'The checkout flow is fast and intuitive.',
    'Customer support never replied to my ticket.',
]
print(classifier(texts))
I use transformers when the text task justifies contextual modeling and the serving budget can handle it. The fastest path to value is usually starting with pretrained checkpoints, measuring latency, and then deciding whether quantization, distillation, or simpler baselines are sufficient. Fancy models still need boring operational discipline.
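The "measure latency first" step can be as simple as a timing wrapper around the pipeline call. A minimal sketch, using only the standard library; `measure_latency` is a hypothetical helper name, and the warmup runs are there so lazy initialization (model loading, first-call tracing) doesn't skew the numbers:

```python
import statistics
import time


def measure_latency(fn, batch, warmup=3, runs=20):
    """Time fn(batch) over several runs; return (p50_ms, p95_ms).

    fn is any callable, e.g. the Hugging Face pipeline above.
    Warmup calls are discarded before timing begins.
    """
    for _ in range(warmup):
        fn(batch)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(batch)
        samples.append((time.perf_counter() - start) * 1000.0)  # ms
    samples.sort()
    p50 = statistics.median(samples)
    # Nearest-rank p95 over the sorted samples.
    p95 = samples[min(len(samples) - 1, round(0.95 * (len(samples) - 1)))]
    return p50, p95
```

Something like `measure_latency(classifier, texts)` then gives a p50/p95 baseline to compare against after quantizing or distilling, so the decision is driven by numbers rather than intuition.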