Neuroimaging produces vast amounts of complex data that are used to investigate the relations between brain structure and cognitive function. In the long term, results from this field should help to characterize, predict, and diagnose brain diseases.
Although some aspects of the acquisition and analysis of these data are gradually being standardized, the neuroimaging community still largely lacks appropriate tools to store and organize the knowledge associated with the data. As a consequence, data are rarely reused: the community wastes important resources, and only the corpus of publications gives an account of the accumulated results. Finding a mechanism to gather and update the information on brain structure and function carried by these data is an important challenge, because the images contain far more information than the condensed account given in the neuroimaging literature. Current techniques for meta-analysis based on published coordinates are not satisfactory, as abstracting brain maps into a few peak activation coordinates discards essential information; using the original statistical parametric maps instead would be more sensitive and more consistent with the current practice of researchers in this field.
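The contrast between the two approaches can be made concrete with a minimal sketch. The file names and the fixed-effects combination rule (Stouffer's method) below are illustrative assumptions, not a description of any specific pipeline; the point is only that coordinate-based abstraction keeps a handful of voxels per study, while image-based combination uses every voxel of every map.

import numpy as np
import nibabel as nib

# Hypothetical per-study z-statistic maps, registered to a common template.
z_map_paths = ["study1_zmap.nii.gz", "study2_zmap.nii.gz"]
z_maps = [nib.load(p).get_fdata() for p in z_map_paths]

# Coordinate-based abstraction: reduce each map to its peak voxel,
# discarding the rest of the statistical image.
peaks = [np.unravel_index(np.argmax(z), z.shape) for z in z_maps]

# Image-based alternative: combine the full maps voxel-wise
# (Stouffer's fixed-effects rule), preserving sub-threshold signal
# that a list of peak coordinates would lose.
stacked = np.stack(z_maps)
combined_z = stacked.sum(axis=0) / np.sqrt(len(z_maps))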
Finally, the cognitive-domain and protocol information that labels the data is often too coarse when extracted automatically, or too fine-grained when the full description from the article is included. Finding the scale at which results can usefully be accumulated remains a challenge.