Tammy Riklin-Raviv (Medical Vision Group CSAIL, MIT & Surgical Planning Laboratory, Harvard Medical School)
Tuesday, 12.1.2010, 11:30
Images acquired via medical imaging modalities frequently suffer from low signal-to-noise ratio, bias fields, and partial volume effects. These artifacts, together with the naturally low contrast between the image intensities of some neighboring structures, make the extraction of regions of interest (ROIs) in clinical images a challenging problem.
Probabilistic atlases, typically generated from comprehensive sets of manually labeled examples, facilitate the analysis by providing statistical priors for tissue classification and structure segmentation. However, the limited availability of training examples that are compatible with the images to be segmented renders atlas-based approaches impractical in many cases.
In this talk I will present a generative model for joint segmentation of corresponding regions of interest in a collection of aligned images that does not require labeled training data. Instead, the evolving segmentation of the entire image set supports each of the individual segmentations. This is made possible by iteratively inferring a subset of the model parameters, called the spatial parameters, as part of the joint segmentation process. These spatial parameters are defined in the image domain and can be viewed as a latent atlas that serves as a spatial prior on the tissue labels. Our latent atlas formulation is based on probabilistic principles, but we solve it using partial differential equations (PDEs) and energy minimization criteria. We evaluate the method successfully on the segmentation of cortical and subcortical structures within different populations, and on brain tumors in a single-subject, multi-modal longitudinal experiment.
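The alternating scheme the abstract describes can be caricatured in a few lines of NumPy. This is a minimal sketch under strong assumptions: a 1-D synthetic data set, a two-class Gaussian intensity model with fixed means, and a hard voxelwise label update. The actual method uses PDE-based energy minimization rather than this toy update, and all variable names and parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "aligned" 1-D images: a bright foreground band on a noisy background.
N, D = 5, 100
true_mask = np.zeros(D)
true_mask[40:60] = 1.0
images = [true_mask + rng.normal(0.0, 0.3, D) for _ in range(N)]

# Initialize each segmentation with a crude intensity threshold.
segs = [(im > 0.5).astype(float) for im in images]

mu_fg, mu_bg, sigma = 1.0, 0.0, 0.3  # assumed class intensity parameters

atlas = np.full(D, 0.5)
for _ in range(10):
    # Latent-atlas update: the spatial prior is the voxelwise average
    # of the current segmentations across the image set.
    atlas = np.clip(np.mean(segs, axis=0), 1e-3, 1 - 1e-3)
    # Segmentation update: each image is relabeled by combining its
    # Gaussian intensity likelihood with the shared atlas prior.
    for i, im in enumerate(images):
        log_fg = -(im - mu_fg) ** 2 / (2 * sigma**2) + np.log(atlas)
        log_bg = -(im - mu_bg) ** 2 / (2 * sigma**2) + np.log(1 - atlas)
        segs[i] = (log_fg > log_bg).astype(float)

# After convergence the atlas is high inside the common foreground band
# and low elsewhere, even though no labeled training data was used.
print(atlas[40:60].mean(), np.concatenate([atlas[:40], atlas[60:]]).mean())
```

The key point the sketch illustrates is the circularity that replaces a pre-built atlas: the segmentations define the atlas, and the atlas in turn regularizes each segmentation.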