A statistical model is presented that combines the registration of an atlas with the segmentation of magnetic resonance images. We use an Expectation Maximization-based algorithm to find a solution within the model, which simultaneously estimates image artifacts, anatomical labelmaps, and a structure-dependent hierarchical mapping from the atlas to the image space. The algorithm produces segmentations for brain tissues as well as their substructures. We demonstrate the approach on a set of 22 magnetic resonance images. On this set of images, the new approach performs significantly better than similar methods that sequentially apply registration and segmentation.

© 2005 Elsevier Inc. All rights reserved.

Keywords: Registration; Segmentation; Subcortical segmentation; Bayesian modeling; Expectation-Maximization
Introduction

To better understand brain diseases, many neuroscience studies focus on the anatomical differences between control and diseased subjects. To find these differences, scientists often analyze medical images for brain structures that seem to be influenced by the disease. The analysis is frequently based on segmentations of the structures of interest, which are mostly performed by human experts. However, this manual process is not only very expensive but also increases risks related to inter- and intra-observer reliability (Kikinis et al., 1992). Neuroscientists are keenly interested in automatic methods, which often rely on prior information, to perform this task (Collins et al., 1999; Leventon et al., 2000; Marroquin et al., 2003; Fischl et al., 2004; Pohl et al., 2004a; Ashburner and Friston, 2005). With notable exceptions, these methods first register the prior information, i.e., an atlas, to the medical image and then segment the medical image into anatomical structures based on that aligned information. The goal of this work is to unify this process into a single Bayesian framework in order to overcome biases caused by commitment to the initial registration.

When automatic segmentation methods are guided by prior information, they are frequently used to segment anatomical structures defined by weakly visible boundaries in medical images. For example, the intensity properties of the thalamus in T1-weighted magnetic resonance (MR) images are very similar to those of the neighboring white matter (Fig. 1). Algorithms cannot rely on the MR images alone to distinguish these two structures. However, the ventricles, the dark structures above the thalamus, are more easily identified. In order for the ventricles to guide the detection of the boundary between the thalamus and the white matter, automatic segmentation algorithms use spatial priors (Mazziotta et al., 1995; Thompson et al., 1996).
These spatial priors capture relationships between structures, such as the fact that the ventricles lie above the thalamus.

As mentioned previously, most atlas-based algorithms perform registration and segmentation sequentially (Cocosco et al., 2003; Van Leemput et al., 1999; Fischl et al., 200...
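To illustrate how a spatial prior can disambiguate two structures with nearly identical intensities, the following minimal sketch performs per-voxel maximum a posteriori labeling by combining a Gaussian intensity likelihood with an atlas prior. All intensity models, probabilities, and numbers here are hypothetical placeholders for illustration only, not the model described in this paper:

```python
import numpy as np

# Hypothetical intensity models (mean, std) for two structures whose
# T1 intensities overlap heavily, as with thalamus vs. white matter.
models = {"thalamus": (110.0, 10.0), "white_matter": (115.0, 10.0)}

def gaussian(x, mu, sigma):
    """Gaussian density, used as the intensity likelihood p(x | label)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def map_label(intensity, prior):
    """Return the label maximizing p(intensity | label) * prior(label)."""
    post = {lab: gaussian(intensity, *models[lab]) * prior[lab]
            for lab in models}
    return max(post, key=post.get)

# A voxel whose intensity is slightly closer to the white-matter mean.
voxel = 114.0

# With a flat prior, the tiny intensity difference decides the label;
# with an atlas prior that favors thalamus at this location, it flips.
uniform = {"thalamus": 0.5, "white_matter": 0.5}
atlas = {"thalamus": 0.8, "white_matter": 0.2}
print(map_label(voxel, uniform))  # white_matter
print(map_label(voxel, atlas))    # thalamus
```

Because both likelihoods are nearly equal for such voxels, even a moderately informative atlas prior dominates the posterior, which is why structures like the ventricles, once located, can anchor the thalamus/white-matter boundary.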