Since its early days, neuroimaging research has sought to localize the neural underpinnings of various cognitive functions. However, several objections have been raised against the project of seeking systematic one-to-one mappings (1-1M) between cognitive functions and neural structures. While renegotiating the taxonomy of either neural structures or cognitive functions might improve systematicity, brain structures seem inherently pluripotent (i.e., involved in multiple cognitive functions), and cognitive functions sometimes exhibit degeneracy (i.e., they may be implemented in distinct neural substrates). To address this issue, some scholars have argued for abandoning 1-1M in favor of probabilistic many-to-many mappings. Another option, however, consists in seeking systematic mappings by adding a further variable that represents additional contextual constraints. In my talk I will argue in favor of such a strategy, focusing on a particular class of contextual variables: namely, the ontogenic history of the individual brain. I will show that ontogeny tunes (and retunes) the functional role of neural structures. This has frequently been neglected because of the (often implicit) assumption of the existence of a "normal brain". This normality assumption, however, restricts the possibility of establishing mappings to only a subset of elementary and universal cognitive processes, and to a subset of populations. Finally, I will address the possible objection that doing without the normality assumption threatens to turn neuroscience into a mere description of individual brains, and counter it by appealing to the notion of a "common developmental trajectory", which allows for ontogenically constrained function-structure mappings.