Statistical normalization techniques for magnetic resonance imaging
Shinohara, Russell T.
Sweeney, Elizabeth M.
Calabresi, Peter A.
Pham, Dzung L.
Reich, Daniel S.
Crainiceanu, Ciprian M.
Citation: Shinohara, Russell T., Elizabeth M. Sweeney, Jeff Goldsmith, Navid Shiee, Farrah J. Mateen, Peter A. Calabresi, Samson Jarso, Dzung L. Pham, Daniel S. Reich, and Ciprian M. Crainiceanu. 2014. "Statistical normalization techniques for magnetic resonance imaging." NeuroImage: Clinical 6 (1): 9-19. doi:10.1016/j.nicl.2014.08.008. http://dx.doi.org/10.1016/j.nicl.2014.08.008.
Abstract: While computed tomography and other imaging techniques are measured in absolute units with physical meaning, magnetic resonance images are expressed in arbitrary units that are difficult to interpret and differ between study visits and subjects. Much work in the image processing literature on intensity normalization has focused on histogram matching and other histogram mapping techniques, with little emphasis on normalizing images to have biologically interpretable units. Furthermore, there are no formalized principles or goals for the crucial comparability of image intensities within and across subjects. To address this, we propose a set of criteria necessary for the normalization of images. We further propose simple and robust biologically motivated normalization techniques for multisequence brain imaging that have the same interpretation across acquisitions and satisfy the proposed criteria. We compare the performance of different normalization methods in thousands of images of patients with Alzheimer's disease, hundreds of patients with multiple sclerosis, and hundreds of healthy subjects obtained in several different studies at dozens of imaging centers.
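To make the normalization problem described in the abstract concrete, the sketch below shows one simple statistical approach: rescaling arbitrary MRI intensity units to z-scores computed within a tissue mask. This is an illustrative example only, not the specific method proposed in the paper; the `zscore_normalize` function, the synthetic image, and the all-ones mask are assumptions introduced here for demonstration.

```python
import numpy as np

def zscore_normalize(image, mask):
    """Rescale voxel intensities to z-scores using statistics
    computed within a mask (e.g., brain tissue).

    This maps arbitrary scanner units onto a scale with the same
    interpretation across acquisitions: 0 is the mean masked
    intensity, and units are standard deviations.
    """
    vals = image[mask]
    return (image - vals.mean()) / vals.std()

# Hypothetical example: a synthetic 3D "image" in arbitrary units
rng = np.random.default_rng(0)
image = rng.normal(loc=500.0, scale=50.0, size=(8, 8, 8))
mask = np.ones(image.shape, dtype=bool)  # trivial whole-volume mask

normalized = zscore_normalize(image, mask)
# Within the mask, intensities now have mean 0 and unit variance,
# so values are comparable across subjects and study visits.
print(normalized[mask].mean(), normalized[mask].std())
```

In practice, the choice of mask (whole brain versus a specific tissue class) determines the biological interpretation of the resulting units, which is part of what distinguishes principled normalization from pure histogram matching.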
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:13454629