Retinal projections often poorly represent the structure of the physical world: well-defined boundaries within the eye may correspond to irrelevant features of the physical world, while critical features of the physical world may be nearly invisible in the retinal projection. The challenge for visual cortex is to sort these two types of features according to their utility in interpreting the scene. We describe a novel paradigm that enables selective evaluation of the relative roles played by these two feature classes in signal reconstruction from corrupted natural images. Our behavioural and EEG measurements demonstrate that this process is quickly dominated by the inferred structure of the environment, and only minimally controlled by variations in raw image content. The inferential mechanism is spatially global, and its impact on early visual cortex is fast, operating on the temporal scale of segmentation-related signals in V2 neurons. Furthermore, it retunes local visual processing for more efficient feature extraction without altering the intrinsic transduction noise. The basic properties of this process can be partially captured by a combination of small-scale circuit models and large-scale network architectures. Our results challenge compartmentalized notions of bottom-up/top-down object segmentation, and suggest instead that these two modes are best viewed as an integrated perceptual mechanism.