How to conduct small sample studies in an era of Big Data?

15 April 2024
4:00 pm
San Francesco Complex - Sagrestia

2nd seminar


Significant concerns about the replicability of neuroimaging findings are pushing the field toward ever larger sample sizes, whether through data-pooling efforts (e.g., ENIGMA) or multi-site studies (e.g., Adolescent Brain Cognitive Development, Healthy Brain and Child Development). Although many welcome this trend, likening it to the changes that occurred in genetics research, it raises the question of how individual researchers with limited resources should conduct their research, especially if their focus is on clinical samples, which typically cannot be recruited in the hundreds or thousands. This presentation will discuss a number of strategies to consider. One is to adopt the neuroimaging protocol of a larger study (e.g., ABCD) for the new, smaller study; having the "normative" data of the larger study can benefit the newer study in many ways. Another approach employs "meta-matching," wherein a classifier trained on the larger dataset can be usefully deployed in the smaller dataset even when the outcome measures differ between the two. Other strategies include maximizing the yield of the new, smaller study: for example, using experimental manipulations that produce larger effect sizes, employing task probes with improved reliability, obtaining deeper phenotyping and denser temporal sampling through repeated scans, optimizing data yield with adaptive task-design algorithms, and improving data analysis to extract more reliable and robust measures from the neuroimaging data.


Hugh Patrick Garavan, University of Vermont