The process by which humans improve their ability to discriminate visual information through experience is known as visual perceptual learning (VPL). Visual attention allows us to prioritize relevant information for processing while ignoring irrelevant information. Both VPL and attention improve performance on perceptual tasks, but we are only beginning to understand how they interact. VPL is typically specific to the trained location and feature, yet an efficient training regime should promote transfer of learning to untrained conditions to maximize training benefits. In this talk, I will present a study that revealed the effects of feature-based attention (FBA) on VPL specificity and discuss how their underlying mechanisms are related. Training with FBA enables robust location transfer, reminiscent of FBA's global effect across the visual field. Remarkably, the training benefits persist for over one year. I will also show that fixational eye movements – microsaccades – offer an important window into VPL. Microsaccade rates are significantly reduced during the response window after training, and these learning-induced microsaccade changes are long-lasting. These results suggest that microsaccades may serve as a long-term, reliable physiological correlate of VPL. Together, these novel findings expand our understanding of FBA's global modulation from visual perception to VPL, help bridge the gap between the visual and oculomotor systems in human learning, and point to translational applications of FBA for VPL.