Three-dimensional alignment and merging of confocal microscopy stacks|
N. Ramesh, T. Tasdizen. In 2013 IEEE International Conference on Image Processing, IEEE, September, 2013.
We describe an efficient, robust, automated method for image alignment and merging of translated, rotated, and flipped confocal microscopy stacks. The samples are captured in both directions (top and bottom) to increase the SNR of the individual slices. We identify the overlapping region of the two stacks by using a variable-depth Maximum Intensity Projection (MIP) in the z dimension. For each depth tested, the MIP images give an estimate of the angle of rotation between the stacks and of the shifts in the x and y directions using the Fourier shift property in 2D. We then use the estimated rotation angle and x/y shifts to align the images in the z direction. A linear blending technique based on a sigmoidal function is used to combine the stacks while maximizing the information retained from each. Combining stacks acquired from both directions yields the maximum information gain.
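The x/y shift estimation via the 2D Fourier shift property, and a sigmoidal blending weight of the kind the abstract describes, can be sketched as follows. This is a generic NumPy phase-correlation illustration, not the authors' implementation; the function names and the blending parameterization are assumptions.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation aligning image b to
    image a via the Fourier shift theorem: the normalized cross-power
    spectrum of two shifted images is a pure phase ramp whose inverse
    FFT peaks at the shift. Afterwards, a ~= np.roll(b, (dy, dx))."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

def sigmoid_blend_weights(depth, overlap_start, overlap_end, steepness=1.0):
    """Sigmoidal blending weight for a slice at depth z: close to 1
    near one end of the overlap region and close to 0 near the other,
    crossing 0.5 at the overlap center (an illustrative choice)."""
    center = 0.5 * (overlap_start + overlap_end)
    return 1.0 / (1.0 + np.exp(steepness * (depth - center)))
```

Phase correlation recovers a circular shift exactly on noise-free data; in practice the rotation angle would be estimated first (e.g., by testing candidate angles on the MIP images) before applying this translation step.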
Uncertainty Visualization in HARDI based on Ensembles of ODFs|
F. Jiao, J.M. Phillips, Y. Gur, C.R. Johnson. In Proceedings of 2013 IEEE Pacific Visualization Symposium, pp. 193--200. 2013.
PubMed ID: 24466504
PubMed Central ID: PMC3898522
In this paper, we propose a new and accurate technique for uncertainty analysis and uncertainty visualization based on fiber orientation distribution function (ODF) glyphs, associated with high angular resolution diffusion imaging (HARDI). Our visualization applies volume rendering techniques to an ensemble of 3D ODF glyphs, which we call SIP functions of diffusion shapes, to capture their variability due to underlying uncertainty. This rendering elucidates the complex heteroscedastic structural variation in these shapes. Furthermore, we quantify the extent of this variation by measuring the fraction of the volume of these shapes that is consistent across all noise levels, which we call the certain volume ratio. Our uncertainty analysis and visualization framework is then applied to synthetic data, as well as to HARDI human-brain data, to study the impact of various image acquisition parameters and background noise levels on the diffusion shapes.
Constrained Spectral Clustering for Image Segmentation|
J. Sourati, D.H. Brooks, J.G. Dy, E. Erdogmus. In IEEE International Workshop on Machine Learning for Signal Processing, pp. 1--6. 2013.
Constrained spectral clustering with affinity propagation in its original form is not practical for large-scale problems such as image segmentation. In this paper we employ a novelty-selection sub-sampling strategy, together with efficient numerical eigen-decomposition methods, to make this algorithm work efficiently for images. In addition, entropy-based active learning is employed to select the queries posed to the user more wisely in an interactive image segmentation framework. We evaluate the algorithm on general and medical images to show that segmentation results improve using constrained clustering even when one works with only a subset of pixels, and that this happens more efficiently when the pixels to be labeled are selected actively.
Topology analysis of time-dependent multi-fluid data using the Reeb graph|
F. Chen, H. Obermaier, H. Hagen, B. Hamann, J. Tierny, V. Pascucci. In Computer Aided Geometric Design, Vol. 30, No. 6, pp. 557--566. 2013.
Liquid–liquid extraction is a typical multi-fluid problem in chemical engineering where two types of immiscible fluids are mixed together. Mixing of two-phase fluids results in a time-varying fluid density distribution, quantitatively indicating the presence of liquid phases. For engineers who design extraction devices, it is crucial to understand the density distribution of each fluid, particularly flow regions that have a high concentration of the dispersed phase. The propagation of regions of high density can be studied by examining the topology of isosurfaces of the density data. We present a topology-based approach to track the splitting and merging events of these regions using Reeb graphs. Time is used as the third dimension in addition to two-dimensional (2D) point-based simulation data. Due to the low time resolution of the input data set, a physics-based interpolation scheme is required in order to improve the accuracy of the proposed topology tracking method. The model used for interpolation produces a smooth time-dependent density field by applying Lagrangian-based advection to the given simulated point cloud data, conforming to the physical laws of flow evolution. Using the Reeb graph, the spatial and temporal locations of bifurcation and merging events can be readily identified, supporting in-depth analysis of the extraction process.
Keywords: Multi-phase fluid, Level set, Topology method, Point-based multi-fluid simulation
Exploring Power Behaviors and Trade-offs of In-situ Data Analytics|
M. Gamell, I. Rodero, M. Parashar, J.C. Bennett, H. Kolla, J.H. Chen, P.-T. Bremer, A. Landge, A. Gyulassy, P. McCormick, S. Pakin, V. Pascucci, S. Klasky. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Association for Computing Machinery, 2013.
As scientific applications target exascale, challenges related to data and energy are becoming dominant concerns. For example, coupled simulation workflows are increasingly adopting in-situ data processing and analysis techniques to address costs and overheads due to data movement and I/O. However, it is also critical to understand these overheads and associated trade-offs from an energy perspective. The goal of this paper is to explore data-related energy/performance trade-offs for end-to-end simulation workflows running at scale on current high-end computing systems. Specifically, this paper presents: (1) an analysis of the data-related behaviors of a combustion simulation workflow with an in-situ data analytics pipeline, running on the Titan system at ORNL; (2) a power model based on system power and data exchange patterns, which is empirically validated; and (3) the use of the model to characterize the energy behavior of the workflow and to explore energy/performance trade-offs on current as well as emerging systems.
Probabilistic Principal Geodesic Analysis|
M. Zhang, P.T. Fletcher. In Proceedings of the 2013 Conference on Neural Information Processing Systems (NIPS), pp. (accepted). 2013.
Principal geodesic analysis (PGA) is a generalization of principal component analysis (PCA) for dimensionality reduction of data on a Riemannian manifold. Currently PGA is defined as a geometric fit to the data, rather than as a probabilistic model. Inspired by probabilistic PCA, we present a latent variable model for PGA that provides a probabilistic framework for factor analysis on manifolds. To compute maximum likelihood estimates of the parameters in our model, we develop a Monte Carlo Expectation Maximization algorithm, where the expectation is approximated by Hamiltonian Monte Carlo sampling of the latent variables. We demonstrate the ability of our method to recover the ground truth parameters in simulated sphere data, as well as its effectiveness in analyzing shape variability of a corpus callosum data set from human brain images.
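The manifold machinery underlying PGA can be illustrated on the simplest curved example, the unit sphere: the exponential map shoots a geodesic along a tangent vector, and its inverse (the log map) brings data points into the tangent space where PCA-like analysis is performed. This is a standard textbook sketch for the sphere case mentioned in the abstract, not the paper's probabilistic model or its Monte Carlo EM algorithm.

```python
import numpy as np

def sphere_exp(base, v):
    """Riemannian exponential map on the unit sphere S^2: follow the
    geodesic from `base` along tangent vector `v` (v orthogonal to base)
    for arc length ||v||."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return base.copy()
    return np.cos(norm_v) * base + np.sin(norm_v) * (v / norm_v)

def sphere_log(base, p):
    """Inverse of the exponential map: the tangent vector at `base`
    pointing toward `p`, whose length is the geodesic distance."""
    cos_theta = np.clip(np.dot(base, p), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros_like(base)
    proj = p - cos_theta * base        # component of p orthogonal to base
    return theta * proj / np.linalg.norm(proj)
```

In PGA the Fréchet mean plays the role of `base`, and principal directions are found from the log-mapped data; the paper's contribution is to place a latent-variable (probabilistic) model over this construction.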
Image Segmentation with Cascaded Hierarchical Models and Logistic Disjunctive Normal Networks|
M. Seyedhosseini, M. Sajjadi, T. Tasdizen. In Proceedings of the IEEE International Conference on Computer Vision (ICCV 2013), pp. (accepted). 2013.
Contextual information plays an important role in solving vision problems such as image segmentation. However, extracting contextual information and using it in an effective way remains a difficult problem. To address this challenge, we propose a multi-resolution contextual framework, called cascaded hierarchical model (CHM), which learns contextual information in a hierarchical framework for image segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at original resolution. We repeat this procedure by cascading the hierarchical framework to improve the segmentation accuracy. Multiple classifiers are learned in the CHM; therefore, a fast and accurate classifier is required to make the training tractable. The classifier also needs to be robust against overfitting due to the large number of parameters learned during training. We introduce a novel classification scheme, called logistic disjunctive normal networks (LDNN), which consists of one adaptive layer of feature detectors implemented by logistic sigmoid functions followed by two fixed layers of logical units that compute conjunctions and disjunctions, respectively. We demonstrate that LDNN outperforms state-of-the-art classifiers and can be used in the CHM to improve object segmentation performance.
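The LDNN structure described in the abstract (a sigmoid feature layer, a fixed conjunction layer, and a fixed disjunction layer) can be sketched as a forward pass. The weight shapes and soft-logic realization (products for AND, `1 - prod(1 - .)` for OR) are one standard reading of that description, given here as an illustration rather than the paper's exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ldnn_forward(x, W, b):
    """Forward pass of a logistic disjunctive normal network sketch.
    W has shape (groups, features_per_group, dim) and b has shape
    (groups, features_per_group). Each group forms a soft conjunction
    (product) of sigmoid feature detectors; the output combines the
    groups with a soft disjunction: 1 - prod(1 - conjunction)."""
    s = sigmoid(np.einsum('gfd,d->gf', W, x) + b)   # adaptive sigmoid layer
    conj = s.prod(axis=1)                            # fixed AND within a group
    return 1.0 - np.prod(1.0 - conj)                 # fixed OR across groups
```

Only the sigmoid layer has trainable parameters; the logical layers are fixed, which keeps the parameter count low and training fast, matching the abstract's motivation.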
Modeling and Analysis of Longitudinal Multimodal Magnetic Resonance Imaging: Application to Early Brain Development|
N. Sadeghi. Note: Ph.D. Thesis, Department of Bioengineering, University of Utah, December, 2013.
Many mental illnesses are thought to have their origins in early stages of development, encouraging increased research efforts related to early neurodevelopment. Magnetic resonance imaging (MRI) has provided us with an unprecedented view of the brain in vivo. More recently, diffusion tensor imaging (DTI/DT-MRI), a magnetic resonance imaging technique, has enabled the characterization of the microstructural organization of tissue in vivo. As the brain develops, the water content in the brain decreases while protein and fat content increases due to processes such as myelination and axonal organization. Changes of signal intensity in structural MRI and diffusion parameters of DTI reflect these underlying biological changes.
Longitudinal neuroimaging studies provide a unique opportunity for understanding brain maturation by taking repeated scans over a time course within individuals. Despite the availability of detailed images of the brain, there has been little progress in accurate modeling of brain development or creating predictive models of structure that could help identify early signs of illness. We have developed methodologies for the nonlinear parametric modeling of longitudinal structural MRI and DTI changes over the neurodevelopmental period to address this gap. This research provides a normative model of early brain growth trajectory as is represented in structural MRI and DTI data, which will be crucial to understanding the timing and potential mechanisms of atypical development. Growth trajectories are described via intuitive parameters related to delay, rate of growth and expected asymptotic values, all descriptive measures that can answer clinical questions related to quantitative analysis of growth patterns. We demonstrate the potential of the framework on two clinical studies: healthy controls (singletons and twins) and children at risk of autism. Our framework is designed not only to provide qualitative comparisons, but also to give researchers and clinicians quantitative parameters and a statistical testing scheme. Moreover, the method includes modeling of growth trajectories of individuals, resulting in personalized profiles. The statistical framework also allows for prediction and prediction intervals for subject-specific growth trajectories, which will be crucial for efforts to improve diagnosis for individuals and personalized treatment.
Keywords: autism, brain development, image analysis
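The abstract describes growth trajectories parameterized by delay, rate, and asymptote. One common parametric family with exactly these interpretable parameters is the Gompertz curve, sketched below; the choice of this family and the toy grid-search fit are illustrative assumptions, not the thesis's actual estimation procedure.

```python
import numpy as np

def gompertz(t, asymptote, delay, speed):
    """Gompertz growth curve: `asymptote` is the expected mature value,
    `delay` shifts the onset of growth along the age axis, and `speed`
    controls the growth rate."""
    return asymptote * np.exp(-np.exp(-speed * (t - delay)))

def fit_gompertz(t, y, asym_grid, delay_grid, speed_grid):
    """Toy least-squares fit by exhaustive grid search; a real analysis
    would use nonlinear least squares or a mixed-effects model."""
    best, best_err = None, np.inf
    for a in asym_grid:
        for d in delay_grid:
            for s in speed_grid:
                err = np.sum((gompertz(t, a, d, s) - y) ** 2)
                if err < best_err:
                    best, best_err = (a, d, s), err
    return best
```

Fitting such a curve per subject yields the personalized profiles mentioned above: each subject is summarized by three clinically interpretable numbers rather than a raw image sequence.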
Multi-class Multi-scale Series Contextual Model for Image Segmentation|
M. Seyedhosseini, T. Tasdizen. In IEEE Transactions on Image Processing, Vol. PP, No. 99, 2013.
Contextual information has been widely used as a rich source of information to segment multiple objects in an image. A contextual model utilizes the relationships between the objects in a scene to facilitate object detection and segmentation. However, using contextual information from different objects in an effective way for object segmentation remains a difficult problem. In this paper, we introduce a novel framework, called multi-class multi-scale (MCMS) series contextual model, which uses contextual information from multiple objects and at different scales for learning discriminative models in a supervised setting. The MCMS model incorporates intra-object and inter-object information into one probabilistic framework and thus is able to capture geometrical relationships and dependencies among multiple objects in addition to local information from each single object present in an image. We demonstrate that our MCMS model improves object segmentation performance in electron microscopy images and provides a coherent segmentation of multiple objects. By speeding up the segmentation process, the proposed method will allow neurobiologists to move beyond individual specimens and analyze populations, paving the way for understanding neurodegenerative diseases at the microscopic level.
Modeling 4D changes in pathological anatomy using domain adaptation: analysis of TBI imaging using a tumor database|
Bo Wang, M. Prastawa, A. Saha, S.P. Awate, A. Irimia, M.C. Chambers, P.M. Vespa, J.D. Van Horn, V. Pascucci, G. Gerig. In Proceedings of the 2013 MICCAI-MBIA Workshop, Lecture Notes in Computer Science (LNCS), Vol. 8159, Note: Awarded Best Paper!, pp. 31--39. 2013.
Analysis of 4D medical images presenting pathology (i.e., lesions) is significantly challenging due to the presence of complex changes over time. Image analysis methods for 4D images with lesions need to account for changes in brain structures due to deformation, as well as the formation and deletion of new structures (e.g., edema, bleeding) due to the physiological processes associated with damage, intervention, and recovery. We propose a novel framework that models 4D changes in pathological anatomy across time, and provides explicit mapping from a healthy template to subjects with pathology. Moreover, our framework uses transfer learning to leverage rich information from a known source domain, where we have a collection of completely segmented images, to yield effective appearance models for the input target domain. The automatic 4D segmentation method uses a novel domain adaptation technique for generative kernel density models to transfer information between different domains, resulting in a fully automatic method that requires no user interaction. We demonstrate the effectiveness of our novel approach with the analysis of 4D images of traumatic brain injury (TBI), using a synthetic tumor database as the source domain.
Analysis of Diffusion Tensor Imaging for Subjects with Down Syndrome|
N. Sadeghi, C. Vachet, M. Prastawa, J. Korenberg, G. Gerig. In Proceedings of the 19th Annual Meeting of the Organization for Human Brain Mapping OHBM, pp. (in print). 2013.
Down syndrome (DS) is the most common chromosome abnormality in humans. It is typically associated with delayed cognitive development and physical growth. DS is also associated with Alzheimer-like dementia. In this study we analyze the white matter integrity of individuals with DS compared to controls, as reflected in the diffusion parameters derived from Diffusion Tensor Imaging. DTI provides relevant information about the underlying tissue, which correlates with cognitive function. We present a cross-sectional analysis of white matter tracts of subjects with DS compared to controls.
A longitudinal structural MRI study of change in regional contrast in Autism Spectrum Disorder|
A. Vardhan, J. Piven, M. Prastawa, G. Gerig. In Proceedings of the 19th Annual Meeting of the Organization for Human Brain Mapping OHBM, pp. (in print). 2013.
The brain undergoes tremendous changes in shape, size, structure, and chemical composition, between birth and 2 years of age [Rutherford, 2001]. Existing studies have focused on morphometric and volumetric changes to study the early developing brain. Although there have been some recent appearance studies based on intensity changes [Serag et al., 2011], these are highly dependent on the quality of normalization. The study we present here uses the changes in contrast between gray and white matter tissue intensities in structural MRI of the brain, as a measure of regional growth [Vardhan et al., 2011]. Kernel regression was used to generate continuous curves characterizing the changes in contrast with time. A statistical analysis was then performed on these curves, comparing two population groups: (i) HR+: high-risk subjects who tested positive for Autism Spectrum Disorder (ASD), and (ii) HR-: high-risk subjects who tested negative for ASD.
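The kernel regression step (a continuous contrast-versus-age curve from scattered longitudinal measurements) can be sketched as a Nadaraya-Watson estimator with a Gaussian kernel; the kernel choice and bandwidth handling here are generic assumptions, not the study's exact settings.

```python
import numpy as np

def kernel_regression(ages, values, query_ages, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    returns a smooth estimate of `values` (e.g., gray/white matter
    contrast) as a function of age, evaluated at `query_ages`."""
    diffs = query_ages[:, None] - ages[None, :]
    weights = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    return (weights * values).sum(axis=1) / weights.sum(axis=1)
```

Evaluating this on a dense grid of ages yields the continuous growth curve on which group comparisons (HR+ vs. HR-) are then performed.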
Abnormal brain synchrony in Down Syndrome|
J.S. Anderson, J.A. Nielsen, M.A. Ferguson, M.C. Burback, E.T. Cox, L. Dai, G. Gerig, J.O. Edgin, J.R. Korenberg. In NeuroImage: Clinical, Vol. 2, pp. 703--715. 2013.
Down Syndrome is the most common genetic cause for intellectual disability, yet the pathophysiology of cognitive impairment in Down Syndrome is unknown. We compared fMRI scans of 15 individuals with Down Syndrome to 14 typically developing control subjects while they viewed 50 min of cartoon video clips. There was widespread increased synchrony between brain regions, with only a small subset of strong, distant connections showing underconnectivity in Down Syndrome. Brain regions showing negative correlations were less anticorrelated and were among the most strongly affected connections in the brain. Increased correlation was observed between all of the distributed brain networks studied, with the strongest internetwork correlation in subjects with the lowest performance IQ. A functional parcellation of the brain showed simplified network structure in Down Syndrome organized by local connectivity. Despite increased interregional synchrony, intersubject correlation to the cartoon stimuli was lower in Down Syndrome, indicating that increased synchrony had a temporal pattern that was not in response to environmental stimuli, but idiosyncratic to each Down Syndrome subject. Short-range, increased synchrony was not observed in a comparison sample of 447 autism vs. 517 control subjects from the Autism Brain Imaging Data Exchange (ABIDE) collection of resting state fMRI data, and increased internetwork synchrony was only observed between the default mode and attentional networks in autism. These findings suggest immature development of connectivity in Down Syndrome with impaired ability to integrate information from distant brain regions into coherent distributed networks.
Diffusion imaging quality control via entropy of principal direction distribution|
M. Farzinfar, I. Oguz, R.G. Smith, A.R. Verde, C. Dietrich, A. Gupta, M.L. Escolar, J. Piven, S. Pujol, C. Vachet, S. Gouttard, G. Gerig, S. Dager, R.C. McKinstry, S. Paterson, A.C. Evans, M.A. Styner. In NeuroImage, Vol. 82, pp. 1--12. 2013.
Diffusion MR imaging has received increasing attention in the neuroimaging community, as it yields new insights into the microstructural organization of white matter that are not available with conventional MRI techniques. While the technology has enormous potential, diffusion MRI suffers from a unique and complex set of image quality problems, limiting the sensitivity of studies and reducing the accuracy of findings. Furthermore, the acquisition time for diffusion MRI is longer than conventional MRI due to the need for multiple acquisitions to obtain directionally encoded Diffusion Weighted Images (DWI). This leads to increased motion artifacts, reduced signal-to-noise ratio (SNR), and increased proneness to a wide variety of artifacts, including eddy-current and motion artifacts, “venetian blind” artifacts, as well as slice-wise and gradient-wise inconsistencies. Such artifacts mandate stringent Quality Control (QC) schemes in the processing of diffusion MRI data. Most existing QC procedures are conducted in the DWI domain and/or on a voxel level, but our own experiments show that these methods often do not fully detect and eliminate certain types of artifacts, often only visible when investigating groups of DWI's or a derived diffusion model, such as the most-employed diffusion tensor imaging (DTI). Here, we propose a novel regional QC measure in the DTI domain that employs the entropy of the regional distribution of the principal directions (PD). The PD entropy quantifies the scattering and spread of the principal diffusion directions and is invariant to the patient's position in the scanner. High entropy value indicates that the PDs are distributed relatively uniformly, while low entropy value indicates the presence of clusters in the PD distribution. The novel QC measure is intended to complement the existing set of QC procedures by detecting and correcting residual artifacts. 
Such residual artifacts cause a directional bias in the measured PDs and are here called dominant direction artifacts. Experiments show that our automatic method can reliably detect and potentially correct such artifacts, especially those caused by vibrations of the scanner table during the scan. The results further indicate the usefulness of this method for general quality assessment in DTI studies.
Keywords: Diffusion magnetic resonance imaging, Diffusion tensor imaging, Quality assessment, Entropy
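The proposed QC measure (entropy of the regional principal-direction distribution) can be sketched by binning unit PD vectors on the hemisphere and computing the Shannon entropy of the resulting histogram. The antipodal folding and the azimuth/inclination binning scheme below are illustrative assumptions; the paper's exact binning may differ.

```python
import numpy as np

def pd_entropy(directions, n_bins=16):
    """Shannon entropy of a set of principal diffusion directions.
    Directions are unit 3-vectors; antipodal pairs represent the same
    orientation, so everything is folded onto the z >= 0 hemisphere
    before binning by azimuth and inclination."""
    d = np.where(directions[:, 2:3] < 0, -directions, directions)
    az = np.arctan2(d[:, 1], d[:, 0])           # azimuth in [-pi, pi]
    inc = np.arccos(np.clip(d[:, 2], -1, 1))    # inclination in [0, pi/2]
    hist, _, _ = np.histogram2d(az, inc, bins=n_bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))               # nats
```

As the abstract states, a tight cluster of PDs yields low entropy and a uniform spread yields high entropy, and the measure is invariant to a global repositioning of the patient only up to the binning resolution; a finer orientation histogram reduces that sensitivity.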
Watershed Merge Forest Classification for Electron Microscopy Image Stack Segmentation|
T. Liu, M. Seyedhosseini, M. Ellisman, T. Tasdizen. In Proceedings of the 2013 International Conference on Image Processing, 2013.
Automated electron microscopy (EM) image analysis techniques can be tremendously helpful for connectomics research. In this paper, we extend our previous work and propose a fully automatic method to utilize inter-section information for intra-section neuron segmentation of EM image stacks. A watershed merge forest is built via the watershed transform with each tree representing the region merging hierarchy of one 2D section in the stack. A section classifier is learned to identify the most likely region correspondence between adjacent sections. The inter-section information from such correspondence is incorporated to update the potentials of tree nodes. We resolve the merge forest using these potentials together with consistency constraints to acquire the final segmentation of the whole stack. We demonstrate that our method leads to notable segmentation accuracy improvement in experiments on two types of EM image data sets.
Synthetic Brainbows: Imitation of a Brainbow Image|
Y. Wan, H. Otsuna, C.D. Hansen. In Computer Graphics Forum, Vol. 32, No. 3pt4, Wiley-Blackwell, pp. 471--480. jun, 2013.
Brainbow is a genetic engineering technique that randomly colorizes cells. Biological samples processed with this technique and imaged with confocal microscopy have distinctive colors for individual cells. Complex cellular structures can then be easily visualized. However, the complexity of the Brainbow technique limits its applications. In practice, most confocal microscopy scans use different fluorescence staining with typically at most three distinct cellular structures. These structures are often packed and obscure each other in rendered images, making analysis difficult. In this paper, we leverage a process known as GPU framebuffer feedback loops to synthesize Brainbow-like images. In addition, we incorporate ID shuffling and Monte-Carlo sampling into our technique, so that it can be applied to single-channel confocal microscopy data. The synthesized Brainbow images were presented to domain experts, who gave positive feedback. A user survey demonstrates that our synthetic Brainbow technique improves visualizations of volume data with complex structures for biologists.
DTI Quality Control Assessment via Error Estimation From Monte Carlo Simulations|
M. Farzinfar, Y. Li, A.R. Verde, I. Oguz, G. Gerig, M.A. Styner. In Proceedings of SPIE 8669, Medical Imaging 2013: Image Processing, Vol. 8669, 2013.
PubMed ID: 23833547
PubMed Central ID: PMC3702180
Diffusion Tensor Imaging (DTI) is currently the state-of-the-art method for characterizing the microscopic tissue structure of white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte Carlo (MC) simulation-based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC.
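The Rician noise model used in such Monte Carlo simulations arises because magnitude MR images are the modulus of a complex signal with independent Gaussian noise in the real and imaginary channels. That generative step can be sketched directly (a generic simulator, not the paper's full pipeline):

```python
import numpy as np

def add_rician_noise(signal, sigma, rng):
    """Simulate Rician-distributed magnitude MR data: add independent
    Gaussian noise of std `sigma` to the real and imaginary channels
    of a noise-free signal, then take the magnitude."""
    real = signal + rng.normal(0.0, sigma, signal.shape)
    imag = rng.normal(0.0, sigma, signal.shape)
    return np.hypot(real, imag)
```

Repeatedly re-estimating the tensor from such noisy DWI realizations, with the excluded gradient directions removed, yields the FA and PD error distributions the abstract describes.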
3D of brain shape and volume after cranial vault remodeling surgery for craniosynostosis correction in infants|
B. Paniagua, O. Emodi, J. Hill, J. Fishbaugh, L.A. Pimenta, S.R. Aylward, E. Andinet, G. Gerig, J. Gilmore, J.A. van Aalst, M. Styner. In Proceedings of SPIE 8672, Medical Imaging 2013: Biomedical Applications in Molecular, Structural, and Functional Imaging, 86720V, 2013.
The skull of young children is made up of bony plates that enable growth. Craniosynostosis is a birth defect that causes one or more sutures on an infant’s skull to close prematurely. Corrective surgery focuses on cranial and orbital rim shaping to return the skull to a more normal shape. Functional problems caused by craniosynostosis such as speech and motor delay can improve after surgical correction, but a post-surgical analysis of brain development in comparison with age-matched healthy controls is necessary to assess surgical outcome. Full brain segmentations obtained from pre- and post-operative computed tomography (CT) scans of 8 patients with single suture sagittal (n=5) and metopic (n=3), nonsyndromic craniosynostosis from 41 to 452 days-of-age were included in this study. Age-matched controls obtained via 4D acceleration-based regression of a cohort of 402 full brain segmentations from magnetic resonance images (MRI) of healthy controls were also used for comparison (ages 38 to 825 days). 3D point-based models of patient and control cohorts were obtained using the SPHARM-PDM shape analysis tool. From a full dataset of regressed shapes, 240 healthy regressed shapes between 30 and 588 days-of-age (time step = 2.34 days) were selected. Volumes and shape metrics were obtained for craniosynostosis and healthy age-matched subjects. Volumes and shape metrics in single suture craniosynostosis patients were larger than age-matched controls both pre- and post-surgery. The use of 3D shape and volumetric measurements shows that brain growth is not normal in patients with single suture craniosynostosis.
UNC-Utah NA-MIC DTI framework: atlas based fiber tract analysis with application to a study of nicotine smoking addiction|
A.R. Verde, J.-B. Berger, A. Gupta, M. Farzinfar, A. Kaiser, V.W. Chanon, C. Boettiger, H. Johnson, J. Matsui, A. Sharma, C. Goodlett, Y. Shi, H. Zhu, G. Gerig, S. Gouttard, C. Vachet, M. Styner. In Proc. SPIE 8669, Medical Imaging 2013: Image Processing, Vol. 8669, pp. 86692D. 2013.
Purpose: The UNC-Utah NA-MIC DTI framework represents a coherent, open source, atlas fiber tract based DTI analysis framework that addresses the lack of a standardized fiber tract based DTI analysis workflow in the field. Most steps utilize graphical user interfaces (GUI) to simplify interaction and provide an extensive DTI analysis framework for non-technical researchers/investigators. Data: We illustrate the use of our framework on a 54 directional DWI neuroimaging study contrasting 15 Smokers and 14 Controls. Method(s): At the heart of the framework is a set of tools anchored around the multi-purpose image analysis platform 3D-Slicer. Several workflow steps are handled via external modules called from Slicer in order to provide an integrated approach. Our workflow starts with conversion from DICOM, followed by thorough automatic and interactive quality control (QC), which is a must for a good DTI study. Our framework is centered around a DTI atlas that is either provided as a template or computed directly as an unbiased average atlas from the study data via deformable atlas building. Fiber tracts are defined via interactive tractography and clustering on that atlas. DTI fiber profiles are extracted automatically using the atlas mapping information. These tract parameter profiles are then analyzed using our statistics toolbox (FADTTS). The statistical results are then mapped back on to the fiber bundles and visualized with 3D Slicer. Results: This framework provides a coherent set of tools for DTI quality control and analysis. Conclusions: This framework will provide the field with a uniform process for DTI quality control and analysis.
Geodesic image regression with a sparse parameterization of diffeomorphisms|
J. Fishbaugh, M. Prastawa, G. Gerig, S. Durrleman. In Proceedings of the Geometric Science of Information Conference (GSI), Lecture Notes in Computer Science (LNCS), Vol. 8085, pp. 95--102. 2013.
Image regression allows for time-discrete imaging data to be modeled continuously, and is a crucial tool for conducting statistical analysis on longitudinal images. Geodesic models are particularly well suited for statistical analysis, as image evolution is fully characterized by a baseline image and initial momenta. However, existing geodesic image regression models are parameterized by a large number of initial momenta, equal to the number of image voxels. In this paper, we present a sparse geodesic image regression framework which greatly reduces the number of model parameters. We combine a control point formulation of deformations with an L1 penalty to select the most relevant subset of momenta. This way, the number of model parameters reflects the complexity of anatomical changes in time rather than the sampling of the image. We apply our method to both synthetic and real data and show that we can decrease the number of model parameters (from the number of voxels down to hundreds) with only minimal decrease in model accuracy. The reduction in model parameters has the potential to improve the power of ensuing statistical analysis, which faces the challenging problem of high dimensionality.
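The L1-driven selection of momenta is typically realized with a soft-thresholding (proximal) step: control points whose momentum magnitude falls below the penalty are zeroed out, leaving only those carrying substantial deformation. The group-wise (per-vector) form below is a generic sketch of that step alone; the full method interleaves it with geodesic shooting of the deformation.

```python
import numpy as np

def soft_threshold(momenta, penalty):
    """Proximal operator of the (group) L1 norm: shrink each control
    point's momentum vector magnitude by `penalty`, zeroing vectors
    whose magnitude is below it. momenta has shape (n_points, dim)."""
    norms = np.linalg.norm(momenta, axis=1, keepdims=True)
    scale = np.maximum(1.0 - penalty / np.maximum(norms, 1e-12), 0.0)
    return momenta * scale
```

Applying this after each gradient step drives most momenta exactly to zero, which is how the parameter count drops from the number of voxels to a few hundred active control points.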