from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

# X: feature matrix of shape (n_samples, n_features), defined upstream
X_scaled = StandardScaler().fit_transform(X)

# Compress to 10 components and report how much variance they retain
pca = PCA(n_components=10, random_state=42)
X_pca = pca.fit_transform(X_scaled)
print('explained variance:', pca.explained_variance_ratio_.sum())

# t-SNE scales poorly with sample count, so embed only the first 2000 rows
tsne = TSNE(n_components=2, perplexity=30, init='pca', learning_rate='auto', random_state=42)
X_tsne = tsne.fit_transform(X_scaled[:2000])
print(X_tsne[:5])
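The snippet above hard-codes `n_components=10`; a common alternative is to let the data pick the width by fitting a full PCA and reading the cumulative explained-variance curve. A minimal sketch, assuming the scikit-learn digits dataset as a stand-in for your own `X` and an arbitrary 90% variance target:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in data (assumption: replace with your own feature matrix)
X, _ = load_digits(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)

# Fit PCA with all components, then find the smallest k that
# reaches 90% cumulative explained variance
pca = PCA().fit(X_scaled)
cum = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cum, 0.90)) + 1
print(f'{k} components reach {cum[k-1]:.2%} cumulative variance')
```

Note that scikit-learn also accepts a float, e.g. `PCA(n_components=0.90)`, which does the same selection internally.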
I use dimensionality reduction both as a modeling tool and as an investigative lens. PCA is good for compression and signal inspection; t-SNE is useful when I need to see whether latent clusters or label separation exist at all. I never present those plots as proof, only as directional evidence.