


Modern machine learning models often struggle to remain reliable under challenging conditions such as distribution shifts, adversarial perturbations, or limited data. My research focuses on improving robustness and generalization by leveraging task-specific structure at inference time, without requiring additional training or data. I present methods that adapt either the inputs or the context of pretrained models to better align with the underlying task. In the visual domain, these approaches enhance robustness by transforming or projecting inputs toward meaningful class- or data-manifold representations. In the language domain, they refine prompt representations to make more effective use of few-shot examples. Together, these contributions demonstrate a unified principle: carefully adapting representations to reflect task-relevant structure can substantially improve the reliability and generalization of modern machine learning systems across both vision and language.
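
To make the inference-time adaptation principle concrete, the following is a minimal, generic sketch in Python of one instance of the idea: nudging a possibly corrupted input toward the data manifold captured by a pretrained autoencoder before classifying it, with no updates to model weights. Every module, name, and hyperparameter here (TinyAutoencoder, project_to_manifold, the step count and learning rate) is an illustrative assumption introduced for this sketch, not the specific methods summarized above.

import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Stand-in for a pretrained data-manifold model (illustrative assumption)."""
    def __init__(self, dim=32, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def project_to_manifold(x, ae, steps=10, lr=0.1):
    """Inference-time input adaptation: minimize reconstruction error so the
    sample moves toward the autoencoder's learned manifold. Only the input
    is optimized; the pretrained models stay frozen."""
    ae.requires_grad_(False)
    x_adapted = x.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([x_adapted], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((ae(x_adapted) - x_adapted) ** 2).mean()
        loss.backward()
        opt.step()
    return x_adapted.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    ae = TinyAutoencoder().eval()    # stand-in for a pretrained manifold model
    clf = nn.Linear(32, 10).eval()   # stand-in for a pretrained classifier
    x = torch.randn(4, 32)           # e.g., shifted or corrupted test inputs
    x_proj = project_to_manifold(x, ae)
    with torch.no_grad():
        logits = clf(x_proj)         # classify the adapted inputs
    print(logits.argmax(dim=1))

The design point the sketch illustrates is that adaptation happens entirely at test time: the pretrained classifier and manifold model are untouched, and only the representation of the incoming example is adjusted to reflect task-relevant structure.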