
SCI Publications

2018


M.J.M. Cluitmans, S. Ghimire, J. Dhamala, J. Coll-Font, J.D. Tate, S. Giffard-Roisin, J. Svehlikova, O. Doessel, M.S. Guillem, D.H. Brooks, R.S. MacLeod, L. Wang. “P1125 Noninvasive localization of premature ventricular complexes: a research-community-based approach,” In EP Europace, Vol. 20, No. Supplement, Oxford University Press, March, 2018.
DOI: 10.1093/europace/euy015.611

ABSTRACT

Background: Noninvasive localization of premature ventricular complexes (PVCs) to guide ablation therapy is one of the emerging applications of electrocardiographic imaging (ECGI). Because of its increasing clinical use, it is essential to compare the many implementations of ECGI that exist to understand the specific characteristics of each approach.

Objective: Our consortium is a community of researchers aiming to collaborate in the field of ECGI, and to objectively compare and improve methods. Here, we will compare methods to localize the origin of PVCs with ECGI.

Methods: Our consortium hosts a repository of ECGI data on its website. For the current study, participants analysed simulated electrocardiograms from premature beats, freely available on that website. These PVCs were simulated to originate from eight ventricular locations and the resulting body-surface potentials were computed. These body-surface electrocardiograms (and the torso-heart geometry) were then provided to the study participants to apply their ECGI algorithms to determine the origin of the PVCs. Participants could choose freely among four different source models, i.e., representations of the bioelectric fields reconstructed from ECGI: 1) epicardial potentials (POTepi), 2) epicardial & endocardial potentials (POTepi&endo), 3) transmembrane potentials on the endocardium and epicardium (TMPepi&endo) and 4) transmembrane potentials throughout the myocardium (TMPmyo). Participants were free to employ any software implementation of ECGI and were blinded to the ground truth data.

Results: Four research groups submitted 11 entries for this study. The figure shows the localization error between the known and reconstructed origin of each PVC for each submission, categorized per source model. Each colour represents one research group and some groups submitted results using different approaches. These results demonstrate that the variation of accuracy was larger among research groups than among the source models. Most submissions achieved an error below 2 cm, but none performed with a consistent sub-centimetre accuracy.

Conclusion: This study demonstrates a successful community-based approach to study different ECGI methods for PVC localization. The goal was not to rank research groups but to compare both source models and numerical implementations. PVC localization with these methods was not as dependent on the source representation as it was on the implementation of ECGI. Consequently, ECGI validation should not be performed on generic methods, but should be specifically performed for each lab's implementation. The novelty of this study is that it achieves this in the first open, international comparison of approaches using a common set of gold standards. Continued collaborative validation is essential to understand the effect of implementation differences, in order to reach significant improvements and arrive at clinically-relevant sub-centimetre accuracy of PVC localization.



M. Cluitmans, D. H. Brooks, R. MacLeod, O. Dössel, M. S. Guillem, P. M. van Dam, J. Svehlikova, B. He, J. Sapp, L. Wang, L. Bear. “Validation and Opportunities of Electrocardiographic Imaging: From Technical Achievements to Clinical Applications,” In Frontiers in Physiology, Vol. 9, Frontiers Media SA, pp. 1305. 2018.
ISSN: 1664-042X
DOI: 10.3389/fphys.2018.01305

ABSTRACT

Electrocardiographic imaging (ECGI) reconstructs the electrical activity of the heart from a dense array of body-surface electrocardiograms and a patient-specific heart-torso geometry. Depending on how it is formulated, ECGI allows the reconstruction of the activation and recovery sequence of the heart, the origin of premature beats or tachycardia, the anchors/hotspots of re-entrant arrhythmias and other electrophysiological quantities of interest. Importantly, these quantities are directly and noninvasively reconstructed in a digitized model of the patient’s three-dimensional heart, which has led to clinical interest in ECGI’s ability to personalize diagnosis and guide therapy.
Despite considerable development over the last decades, validation of ECGI is challenging. Firstly, results depend considerably on implementation choices, which are necessary to deal with ECGI’s ill-posed character. Secondly, it is challenging to obtain (invasive) ground truth data of high quality. In this review, we discuss the current status of ECGI validation as well as the major challenges remaining for complete adoption of ECGI in clinical practice.

Specifically, showing clinical benefit is essential for the adoption of ECGI. Such benefit may lie in patient outcome improvement, workflow improvement, or cost reduction. Future studies should focus on these aspects to achieve broad adoption of ECGI, but only after the technical challenges have been solved for that specific application/pathology. We propose ‘best’ practices for technical validation and highlight collaborative efforts recently organized in this field. Continued interaction between engineers, basic scientists and physicians remains essential to find a hybrid between technical achievements, insights into pathological mechanisms, and clinical benefit, to evolve this powerful technique towards a useful role in clinical practice.



A. Erdemir, P.J. Hunter, G.A. Holzapfel, L.M. Loew, J. Middleton, C.R. Jacobs, P. Nithiarasu, R. Löhner, G. Wei, B.A. Winkelstein, V.H. Barocas, F. Guilak, J.P. Ku, J.L. Hicks, S.L. Delp, M.S. Sacks, J.A. Weiss, G.A. Ateshian, S.A. Maas, A.D. McCulloch, G.C.Y. Peng. “Perspectives on Sharing Models and Related Resources in Computational Biomechanics Research,” In Journal of Biomechanical Engineering, Vol. 140, No. 2, ASME International, pp. 024701. Jan, 2018.
DOI: 10.1115/1.4038768

ABSTRACT

The role of computational modeling for biomechanics research and related clinical care will be increasingly prominent. The biomechanics community has been developing computational models routinely for exploration of the mechanics and mechanobiology of diverse biological structures. As a result, a large array of models, data, and discipline-specific simulation software has emerged to support endeavors in computational biomechanics. Sharing computational models and related data and simulation software has first become a utilitarian interest, and now, it is a necessity. Exchange of models, in support of knowledge exchange provided by scholarly publishing, has important implications. Specifically, model sharing can facilitate assessment of reproducibility in computational biomechanics and can provide an opportunity for repurposing and reuse, and a venue for medical training. The community's desire to investigate biological and biomechanical phenomena crossing multiple systems, scales, and physical domains, also motivates sharing of modeling resources as blending of models developed by domain experts will be a required step for comprehensive simulation studies as well as the enhancement of their rigor and reproducibility. The goal of this paper is to understand current perspectives in the biomechanics community for the sharing of computational models and related resources. Opinions on opportunities, challenges, and pathways to model sharing, particularly as part of the scholarly publishing workflow, were sought. A group of journal editors and a handful of investigators active in computational biomechanics were approached to collect short opinion pieces as a part of a larger effort of the IEEE EMBS Computational Biology and the Physiome Technical Committee to address model reproducibility through publications. A synthesis of these opinion pieces indicates that the community recognizes the necessity and usefulness of model sharing. There is a strong will to facilitate model sharing, and there are corresponding initiatives by the scientific journals. Outside the publishing enterprise, infrastructure to facilitate model sharing in biomechanics exists, and simulation software developers are interested in accommodating the community's needs for sharing of modeling resources. Encouragement for the use of standardized markups, concerns related to quality assurance, acknowledgement of increased burden, and importance of stewardship of resources are noted. In the short-term, it is advisable that the community builds upon recent strategies and experiments with new pathways for continued demonstration of model sharing, its promotion, and its utility. Nonetheless, the need for a long-term strategy to unify approaches in sharing computational models and related resources is acknowledged. Development of a sustainable platform supported by a culture of open model sharing will likely evolve through continued and inclusive discussions bringing all stakeholders to the table, e.g., by possibly establishing a consortium.



M.D. Foote, B. Zimmerman, A. Sawant, S. Joshi. “Real-Time Patient-Specific Lung Radiotherapy Targeting using Deep Learning,” In 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands, 2018.

ABSTRACT

Radiation therapy has presented a need for dynamic tracking of a target tumor volume. Fiducial markers such as implanted gold seeds have been used to gate radiation delivery but the markers are invasive and gating significantly increases treatment time. Pretreatment acquisition of a 4DCT allows for the development of accurate motion estimation for treatment planning. A deep convolutional neural network and subspace motion tracking is used to recover anatomical positions from a single radiograph projection in real-time. We approximate the nonlinear inverse of a diffeomorphic transformation composed with radiographic projection as a deep network that produces subspace coordinates to define the patient-specific deformation of the lungs from a baseline anatomic position. The geometric accuracy of the subspace projections on real patient data is similar to accuracy attained by original image registration between individual respiratory-phase image volumes.
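
A minimal, hypothetical PyTorch sketch of this idea follows (not the authors' architecture or code): a small CNN maps a single radiograph projection to subspace coordinates, which are combined with precomputed deformation basis modes, e.g., from a PCA of 4DCT-derived deformation fields, to reconstruct a dense deformation.

import torch
import torch.nn as nn

class ProjectionToSubspace(nn.Module):
    """Maps a 2D radiograph projection to coordinates in a low-dimensional deformation subspace."""
    def __init__(self, n_modes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_modes)

    def forward(self, x):                      # x: (batch, 1, H, W) projection image
        z = self.features(x).flatten(1)        # (batch, 32) pooled features
        return self.head(z)                    # (batch, n_modes) subspace coordinates

def reconstruct_deformation(coords, phi_mean, basis):
    # phi = phi_mean + sum_i coords_i * basis_i, with basis modes learned offline from 4DCT
    # coords: (n_modes,), phi_mean: (3, D, H, W), basis: (n_modes, 3, D, H, W)
    return phi_mean + torch.einsum('m,mcdhw->cdhw', coords, basis)

net = ProjectionToSubspace()
coords = net(torch.rand(1, 1, 128, 128))[0]    # predicted coordinates for one projection
print(coords.shape)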



S. Guler, M. Dannhauer, B. Roig-Solvas, A. Gkogkidis, R. MacLeod, T. Ball, J. G. Ojemann, D. H. Brooks. “Computationally optimized ECoG stimulation with local safety constraints,” In NeuroImage, Vol. 173, Elsevier BV, pp. 35--48. June, 2018.
DOI: 10.1016/j.neuroimage.2018.01.088

ABSTRACT

Direct stimulation of the cortical surface is used clinically for cortical mapping and modulation of local activity. Future applications of cortical modulation and brain-computer interfaces may also use cortical stimulation methods. One common method to deliver current is through electrocorticography (ECoG) stimulation in which a dense array of electrodes are placed subdurally or epidurally to stimulate the cortex. However, proximity to cortical tissue limits the amount of current that can be delivered safely. It may be desirable to deliver higher current to a specific local region of interest (ROI) while limiting current to other local areas more stringently than is guaranteed by global safety limits. Two commonly used global safety constraints bound the total injected current and individual electrode currents. However, these two sets of constraints may not be sufficient to prevent high current density locally (hot-spots). In this work, we propose an efficient approach that prevents current density hot-spots in the entire brain while optimizing ECoG stimulus patterns for targeted stimulation. Specifically, we maximize the current along a particular desired directional field in the ROI while respecting three safety constraints: one on the total injected current, one on individual electrode currents, and the third on the local current density magnitude in the brain. This third set of constraints creates a computational barrier due to the huge number of constraints needed to bound the current density at every point in the entire brain. We overcome this barrier by adopting an efficient two-step approach. In the first step, the proposed method identifies the safe brain region, which cannot contain any hot-spots solely based on the global bounds on total injected current and individual electrode currents. In the second step, the proposed algorithm iteratively adjusts the stimulus pattern to arrive at a solution that exhibits no hot-spots in the remaining brain. We report on simulations on a realistic finite element (FE) head model with five anatomical ROIs and two desired directional fields. We also report on the effect of ROI depth and desired directional field on the focality of the stimulation. Finally, we provide an analysis of optimization runtime as a function of different safety and modeling parameters. Our results suggest that optimized stimulus patterns tend to differ from those used in clinical practice.
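
To make the constrained formulation concrete, the following small CVXPY sketch is purely illustrative: the lead-field matrices, bounds, and sizes are invented, and the paper's two-step scheme for handling the huge number of local constraints on a realistic finite element head model is not reproduced. It maximizes the directional current density in an ROI under the three classes of safety constraints.

import numpy as np
import cvxpy as cp

n_elec, n_roi, n_nodes = 32, 50, 200
rng = np.random.default_rng(0)
A_roi = rng.standard_normal((3 * n_roi, n_elec))     # hypothetical map: electrode currents -> J in ROI
A_brain = rng.standard_normal((3 * n_nodes, n_elec)) # hypothetical map: electrode currents -> J elsewhere
d = np.tile([0.0, 0.0, 1.0], n_roi) / n_roi          # desired directional field in the ROI

I = cp.Variable(n_elec)                              # per-electrode currents
objective = cp.Maximize(d @ (A_roi @ I))             # mean current along the desired direction
constraints = [cp.sum(I) == 0,                       # injected current returns through the array
               cp.norm1(I) <= 2.0,                   # total injected current <= 1.0 (currents sum to zero)
               cp.abs(I) <= 0.5]                      # bound on individual electrode currents
# Local safety: bound the current-density magnitude at every (sampled) brain node.
constraints += [cp.norm(A_brain[3*k:3*k+3] @ I) <= 0.4 for k in range(n_nodes)]
cp.Problem(objective, constraints).solve()
print("optimized stimulus pattern:", np.round(I.value, 3))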



L. Guo, A. Narayan, T. Zhou. “A gradient enhanced ℓ1-minimization for sparse approximation of polynomial chaos expansions,” In Journal of Computational Physics, Vol. 367, Elsevier BV, pp. 49--64. Aug, 2018.

ABSTRACT

We investigate a gradient-enhanced ℓ1-minimization for constructing sparse polynomial chaos expansions. In addition to function evaluations, measurements of the function gradient are also included to accelerate the identification of expansion coefficients. By designing appropriate preconditioners for the measurement matrix, we show that gradient-enhanced ℓ1-minimization leads to stable and accurate coefficient recovery. The framework for designing preconditioners is quite general and applies to the recovery of functions whose domain is bounded or unbounded. Comparisons between the gradient-enhanced approach and the standard ℓ1-minimization are also presented, and numerical examples suggest that the inclusion of derivative information can guarantee sparse recovery at a reduced computational cost.
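
As a rough one-dimensional illustration of the idea (without the preconditioning the paper designs and analyzes), the following Python sketch stacks function and derivative measurements of a Legendre basis and recovers a sparse coefficient vector by ℓ1-minimization; all sizes and the test coefficients are hypothetical.

import numpy as np
import cvxpy as cp
from numpy.polynomial import legendre as L

rng = np.random.default_rng(1)
N, m = 40, 12                        # basis size, number of sample points
x = rng.uniform(-1, 1, m)            # sample locations

V = L.legvander(x, N - 1)                                                        # function-value rows
dV = np.stack([L.legval(x, L.legder(np.eye(N)[k])) for k in range(N)], axis=1)   # derivative rows

c_true = np.zeros(N); c_true[[1, 4, 7]] = [1.0, -0.5, 0.25]          # sparse ground-truth coefficients
A = np.vstack([V, dV])                                               # gradient-enhanced system
b = A @ c_true                                                       # data: f and f' at the samples

c = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm1(c)), [cp.norm(A @ c - b) <= 1e-6]).solve()
print("recovered support:", np.flatnonzero(np.abs(c.value) > 1e-4))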



L. Guo, A. Narayan, L. Yan, T. Zhou. “Weighted approximate Fekete points: sampling for least-squares polynomial approximation,” In SIAM Journal on Scientific Computing, Vol. 40, No. 1, SIAM, pp. A366--A387. Jan, 2018.
DOI: 10.1137/17m1140960

ABSTRACT

We propose and analyze a weighted greedy scheme for computing deterministic sample configurations in multidimensional space for performing least-squares polynomial approximations on $L^2$ spaces weighted by a probability density function. Our procedure is a particular weighted version of the approximate Fekete points method, with the weight function chosen as the (inverse) Christoffel function. Our procedure has theoretical advantages: when linear systems with optimal condition number exist, the procedure finds them. In the one-dimensional setting with any density function, our greedy procedure almost always generates optimally conditioned linear systems. Our method also has practical advantages: our procedure is impartial to the compactness of the domain of approximation and uses only pivoted linear algebraic routines. We show through numerous examples that our sampling design outperforms competing randomized and deterministic designs when the domain is both low and high dimensional.
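
The greedy selection reduces to pivoted linear algebra. A simplified one-dimensional Python sketch (our illustration, not the paper's code; the weighting below is a Christoffel-type choice made for the example) selects weighted approximate Fekete points from a candidate grid via QR with column pivoting.

import numpy as np
from numpy.polynomial import legendre as L
from scipy.linalg import qr

deg = 9
M = deg + 1                                   # number of points to select
cand = np.linspace(-1, 1, 2000)               # dense candidate grid on [-1, 1]
V = L.legvander(cand, deg)                    # Legendre Vandermonde, (n_cand, deg+1)
Vn = V * np.sqrt((2 * np.arange(deg + 1) + 1) / 2.0)    # orthonormalize columns in L^2
K = np.sum(Vn**2, axis=1)                     # inverse Christoffel function at candidates
W = (deg + 1) / K                             # Christoffel-type weight (illustrative choice)
A = np.sqrt(W)[:, None] * Vn                  # weighted Vandermonde rows
_, _, piv = qr(A.T, pivoting=True, mode='economic')     # greedy row selection (Fekete-style)
print("selected points:", np.sort(cand[piv[:M]]))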



M. Hajij, B. Wang, C. Scheidegger, P. Rosen. “Visual Detection of Structural Changes in Time-Varying Graphs Using Persistent Homology,” In 2018 IEEE Pacific Visualization Symposium (PacificVis), IEEE, April, 2018.
DOI: 10.1109/pacificvis.2018.00024

ABSTRACT

Topological data analysis is an emerging area in exploratory data analysis and data mining. Its main tool, persistent homology, has become a popular technique to study the structure of complex, high-dimensional data. In this paper, we propose a novel method using persistent homology to quantify structural changes in time-varying graphs. Specifically, we transform each instance of the time-varying graph into a metric space, extract topological features using persistent homology, and compare those features over time. We provide a visualization that assists in time-varying graph exploration and helps to identify patterns of behavior within the data. To validate our approach, we conduct several case studies on real-world datasets and show how our method can find cyclic patterns, deviations from those patterns, and one-time events in time-varying graphs. We also examine whether a persistence-based similarity measure satisfies a set of well-established, desirable properties for graph metrics.
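
A stripped-down version of such a pipeline fits in a few lines of Python. The example below (our illustration, using the ripser and persim packages rather than the authors' implementation) maps each graph instance to a metric space via shortest-path distances, computes one-dimensional persistence diagrams, and measures structural change between consecutive time steps with the bottleneck distance.

import networkx as nx
import numpy as np
from ripser import ripser
from persim import bottleneck

def diagram(G):
    nodes = list(G.nodes)
    sp = dict(nx.all_pairs_shortest_path_length(G))
    n = len(nodes)
    D = np.array([[sp[u].get(v, n) for v in nodes] for u in nodes], dtype=float)
    return ripser(D, distance_matrix=True, maxdim=1)['dgms'][1]   # 1-dimensional features

G1 = nx.cycle_graph(30)                       # time step 1: one large loop
G2 = nx.cycle_graph(30)                       # time step 2: unchanged
G3 = nx.cycle_graph(30); G3.add_edge(0, 15)   # time step 3: a chord splits the loop
dgms = [diagram(G) for G in (G1, G2, G3)]
# zero for the unchanged step, larger for the structural change
print([bottleneck(dgms[t], dgms[t + 1]) for t in range(2)])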



M. Hajij, B. Wang, P. Rosen. “MOG: Mapper on Graphs for Relationship Preserving Clustering,” In CoRR, 2018.

ABSTRACT

The interconnected nature of graphs often results in difficult-to-interpret clutter. Typical techniques focus either on decluttering by clustering nodes with similar properties or on grouping edges with similar relationships. We propose using mapper, a powerful topological data analysis tool, to summarize the structure of a graph in a way that both clusters data with similar properties and preserves relationships. Typically, mapper operates on a given dataset by utilizing a scalar function defined on every point in the data and a cover for the scalar function's codomain. The output of mapper is a graph that summarizes the shape of the space. In this paper, we outline how to use this mapper construction on an input graph, outline three filter functions that capture important structures of the input graph, and provide an interface for interactively modifying the cover. To validate our approach, we conduct several case studies on synthetic and real-world datasets and demonstrate how our method can give meaningful summaries for graphs with various complexities.
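
A simplified mapper-on-graphs construction can be sketched with NetworkX (our toy variant, not the MOG implementation; the filter function here is the average geodesic distance, one of several natural choices).

import networkx as nx
import numpy as np
from itertools import combinations

def mapper_on_graph(G, n_intervals=4, overlap=0.3):
    sp = dict(nx.all_pairs_shortest_path_length(G))
    f = {v: np.mean(list(sp[v].values())) for v in G}          # scalar filter function on nodes
    lo, hi = min(f.values()), max(f.values())
    step = (hi - lo) / n_intervals
    clusters = []
    for i in range(n_intervals):                               # overlapping interval cover of the codomain
        a = lo + i * step - overlap * step
        b = lo + (i + 1) * step + overlap * step
        nodes = [v for v in G if a <= f[v] <= b]
        # clusters = connected components of the subgraph induced by the preimage
        clusters += [set(c) for c in nx.connected_components(G.subgraph(nodes))]
    M = nx.Graph()
    M.add_nodes_from(range(len(clusters)))
    for i, j in combinations(range(len(clusters)), 2):
        if clusters[i] & clusters[j]:                          # clusters sharing vertices get an edge
            M.add_edge(i, j)
    return M, clusters

G = nx.barbell_graph(10, 5)          # two dense communities joined by a path
M, clusters = mapper_on_graph(G)
print(M.number_of_nodes(), "mapper nodes,", M.number_of_edges(), "edges")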



J. Hampton, H.R. Fairbanks, A. Narayan, A. Doostan. “Practical error bounds for a non-intrusive bi-fidelity approach to parametric/stochastic model reduction,” In Journal of Computational Physics, Vol. 368, Elsevier BV, pp. 315--332. September, 2018.
DOI: 10.1016/j.jcp.2018.04.015

ABSTRACT

For practical model-based demands, such as design space exploration and uncertainty quantification (UQ), a high-fidelity model that produces accurate outputs often has high computational cost, while a low-fidelity model with less accurate outputs has low computational cost. It is often possible to construct a bi-fidelity model having accuracy comparable with the high-fidelity model and computational cost comparable with the low-fidelity model. This work presents the construction and analysis of a non-intrusive (i.e., sample-based) bi-fidelity model that relies on the low-rank structure of the map between model parameters/uncertain inputs and the solution of interest, if it exists. Specifically, we derive a novel, pragmatic estimate for the error committed by this bi-fidelity model. We show that this error bound can be used to determine if a given pair of low- and high-fidelity models will lead to an accurate bi-fidelity approximation. The cost of this error bound is relatively small and depends on the solution rank. The value of this error estimate is demonstrated using two example problems in the context of UQ, involving linear and non-linear partial differential equations.
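
The non-intrusive construction itself is short enough to sketch. The toy Python example below (our notation and stand-in models; the paper's contribution is the practical error estimate, which is not reproduced here) selects informative parameter samples from cheap low-fidelity snapshots via pivoted QR, runs the expensive model only at those samples, and reuses the low-fidelity interpolation coefficients with the high-fidelity snapshots.

import numpy as np
from scipy.linalg import qr, lstsq

rng = np.random.default_rng(0)
params = rng.uniform(-1, 1, (200, 2))                 # candidate parameter samples
x = np.linspace(0, 1, 100)

def low_fidelity(p):   # cheap, less accurate stand-in model
    return np.sin(np.pi * x * (1 + 0.3 * p[0])) + 0.3 * p[1]
def high_fidelity(p):  # expensive, accurate stand-in model
    return np.sin(np.pi * x * (1 + 0.3 * p[0])) + 0.3 * p[1] * np.cos(2 * np.pi * x)

UL = np.stack([low_fidelity(p) for p in params], axis=1)    # low-fidelity snapshot matrix
r = 5
_, _, piv = qr(UL, pivoting=True, mode='economic')          # pick r informative parameter samples
sel = piv[:r]
UH_sel = np.stack([high_fidelity(params[i]) for i in sel], axis=1)   # only r expensive runs

def bifidelity(p):
    c, *_ = lstsq(UL[:, sel], low_fidelity(p))              # coefficients in the low-fidelity basis
    return UH_sel @ c                                        # reuse them with the high-fidelity basis

p_test = np.array([0.2, -0.7])
print("bi-fidelity error at test parameter:",
      np.linalg.norm(bifidelity(p_test) - high_fidelity(p_test)))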



Y. He, M. Razi, C. Forestiere, L. Dal Negro, R.M. Kirby. “Uncertainty quantification guided robust design for nanoparticles' morphology,” In Computer Methods in Applied Mechanics and Engineering, Elsevier BV, pp. 578--593. July, 2018.
DOI: 10.1016/j.cma.2018.03.027

ABSTRACT

The automatic inverse design of three-dimensional plasmonic nanoparticles enables scientists and engineers to explore a wide design space and to maximize a device's performance. However, due to the large uncertainty in the nanofabrication process, we may not be able to obtain a deterministic value of the objective, and the objective may vary dramatically with respect to a small variation in uncertain parameters. Therefore, we take into account the uncertainty in simulations and adopt a classical robust design model for a robust design. In addition, we propose an efficient numerical procedure for the robust design to reduce the computational cost of the process caused by the consideration of the uncertainty. Specifically, we use a global sensitivity analysis method to identify the important random variables and consider the non-important ones as deterministic, and consequently reduce the dimension of the stochastic space. In addition, we apply the generalized polynomial chaos expansion method for constructing computationally cheaper surrogate models to approximate and replace the full simulations. This efficient robust design procedure is performed by varying the particles' material among the most commonly used plasmonic materials such as gold, silver, and aluminum, to obtain different robust optimal shapes for the best enhancement of electric fields.
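
The surrogate step can be illustrated with a small generalized polynomial chaos example in Python (a hypothetical two-parameter objective fit by least squares; in the paper this is preceded by a global sensitivity analysis that removes unimportant random variables).

import numpy as np
from numpy.polynomial import legendre as L
from itertools import product

def objective(u):                       # stand-in for a full electromagnetic simulation
    return np.exp(-u[..., 0]**2) * np.cos(2.0 * u[..., 1])

deg = 4
multi_idx = [(i, j) for i, j in product(range(deg + 1), repeat=2) if i + j <= deg]
rng = np.random.default_rng(0)
U = rng.uniform(-1, 1, (300, 2))        # samples of the (reduced) uncertain parameters

def basis_matrix(Z):                    # tensorized Legendre basis, total degree <= deg
    return np.stack([L.legval(Z[:, 0], np.eye(deg + 1)[i]) *
                     L.legval(Z[:, 1], np.eye(deg + 1)[j]) for i, j in multi_idx], axis=1)

coef, *_ = np.linalg.lstsq(basis_matrix(U), objective(U), rcond=None)

# The cheap surrogate can now replace the full simulation inside robust-design loops.
U_test = rng.uniform(-1, 1, (5, 2))
print(np.abs(basis_matrix(U_test) @ coef - objective(U_test)))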



A. Jallepalli, J. Docampo-Sánchez, J.K. Ryan, R. Haimes, R.M. Kirby. “On the treatment of field quantities and elemental continuity in FEM solutions,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 24, No. 1, IEEE, pp. 903--912. Jan, 2018.
DOI: 10.1109/tvcg.2017.2744058

ABSTRACT

As the finite element method (FEM) and the finite volume method (FVM), both traditional and high-order variants, continue their proliferation into various applied engineering disciplines, it is important that the visualization techniques and corresponding data analysis tools that act on the results produced by these methods faithfully represent the underlying data. To state this in another way: the interpretation of data generated by simulation needs to be consistent with the numerical schemes that underpin the specific solver technology. As the verifiable visualization literature has demonstrated: visual artifacts produced by the introduction of either explicit or implicit data transformations, such as data resampling, can sometimes distort or even obfuscate key scientific features in the data. In this paper, we focus on the handling of elemental continuity, which is often only C0 continuous or piecewise discontinuous, when visualizing primary or derived fields from FEM or FVM simulations. We demonstrate that traditional data handling and visualization of these fields introduce visual errors. In addition, we show how the use of the recently proposed line-SIAC filter provides a way of handling elemental continuity issues in an accuracy-conserving manner with the added benefit of casting the data in a smooth context even if the representation is element discontinuous.



A. Janson, C. Butson. “Targeting Neuronal Fiber Tracts for Deep Brain Stimulation Therapy Using Interactive, Patient-Specific Models,” In Journal of Visualized Experiments, No. 138, MyJove Corporation, Aug, 2018.
DOI: 10.3791/57292

ABSTRACT

Deep brain stimulation (DBS), which involves insertion of an electrode to deliver stimulation to a localized brain region, is an established therapy for movement disorders and is being applied to a growing number of disorders. Computational modeling has been successfully used to predict the clinical effects of DBS; however, there is a need for novel modeling techniques to keep pace with the growing complexity of DBS devices. These models also need to generate predictions quickly and accurately. The goal of this project is to develop an image processing pipeline to incorporate structural magnetic resonance imaging (MRI) and diffusion weighted imaging (DWI) into an interactive, patient specific model to simulate the effects of DBS. A virtual DBS lead can be placed inside of the patient model, along with active contacts and stimulation settings, where changes in lead position or orientation generate a new finite element mesh and solution of the bioelectric field problem in near real-time, a timespan of approximately 10 seconds. This system also enables the simulation of multiple leads in close proximity to allow for current steering by varying anodes and cathodes on different leads. The techniques presented in this paper reduce the burden of generating and using computational models while providing meaningful feedback about the effects of electrode position, electrode design, and stimulation configurations to researchers or clinicians who may not be modeling experts.



M. Javanmardi, T. Tasdizen. “Domain adaptation for biomedical image segmentation using adversarial training,” In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, pp. 554-558. April, 2018.
DOI: 10.1109/isbi.2018.8363637

ABSTRACT

Many biomedical image analysis applications require segmentation. Convolutional neural networks (CNN) have become a promising approach to segment biomedical images; however, the accuracy of these methods is highly dependent on the training data. We focus on biomedical image segmentation in the context where there is variation between source and target datasets and ground truth for the target dataset is very limited or non-existent. We use an adversarial training approach to train CNNs to achieve good accuracy on the target domain. We use the DRIVE and STARE eye vasculature segmentation datasets and show that our approach can significantly improve results where we only use labels of one domain in training and test on the other domain. We also show improvements on membrane detection between the MICCAI 2016 CREMI challenge and ISBI 2013 EM segmentation challenge datasets.
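
The adversarial training loop can be summarized schematically. The PyTorch sketch below uses tiny stand-in networks and is not the paper's architecture: a segmentation network is trained with a supervised loss on the labeled source domain, while a discriminator pushes its predictions on the unlabeled target domain to be indistinguishable from source-domain predictions.

import torch
import torch.nn as nn

segnet = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 1, 3, padding=1))          # outputs segmentation logits
disc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
bce = nn.BCEWithLogitsLoss()
opt_seg = torch.optim.Adam(segnet.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)

def train_step(x_src, y_src, x_tgt, lam=0.1):
    # 1) discriminator: tell source-domain predictions from target-domain predictions
    with torch.no_grad():
        p_src, p_tgt = segnet(x_src), segnet(x_tgt)
    d_loss = bce(disc(p_src), torch.ones(x_src.size(0), 1)) + \
             bce(disc(p_tgt), torch.zeros(x_tgt.size(0), 1))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()
    # 2) segmenter: supervised loss on source + adversarial loss to fool the discriminator on target
    seg_loss = bce(segnet(x_src), y_src) + \
               lam * bce(disc(segnet(x_tgt)), torch.ones(x_tgt.size(0), 1))
    opt_seg.zero_grad(); seg_loss.backward(); opt_seg.step()

train_step(torch.rand(2, 1, 64, 64), torch.randint(0, 2, (2, 1, 64, 64)).float(),
           torch.rand(2, 1, 64, 64))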



A.L. Kapron, S.K. Aoki, J.A. Weiss, A.J. Krych, T.G. Maak. “Isolated focal cartilage and labral defects in patients with femoroacetabular impingement syndrome may represent new, unique injury patterns,” In Knee Surgery, Sports Traumatology, Arthroscopy, Springer Nature, Feb, 2018.
DOI: 10.1007/s00167-018-4861-2

ABSTRACT

Purpose

Develop a framework to quantify the size, location and severity of femoral and acetabular-sided cartilage and labral damage observed in patients undergoing hip arthroscopy, and generate a database of individual defect parameters to facilitate future research and treatment efforts.

Methods

The size, location, and severity of cartilage and labral damage were prospectively collected using a custom, standardized post-operative template for 100 consecutive patients with femoroacetabular impingement syndrome. Chondrolabral junction damage, isolated intrasubstance labral damage, isolated acetabular cartilage damage and femoral cartilage damage were quantified and recorded using a combination of Beck and ICRS criteria. Radiographic measurements including alpha angle, head–neck offset, lateral centre edge angle and acetabular index were calculated and compared to the aforementioned chondral data using a multivariable logistic regression model and adjusted odds ratios. Reliability among measurements was assessed using the kappa statistic, and intraclass coefficients were used to evaluate continuous variables.

Results

Damage to the acetabular cartilage originating at the chondrolabral junction was the most common finding, occurring in 97 hips (97%), and was usually accompanied by labral damage in 65 hips (65%). The width (p = 0.003) and clock-face length (p = 0.016) of the damaged region both increased with alpha angle on anteroposterior films. Ten percent of hips had femoral cartilage damage, while only 2 hips (2%) had isolated defects to either the acetabular cartilage or labrum. The adjusted odds of severe cartilage (p = 0.022) and labral damage (p = 0.046) increased with radiographic cam deformity but were not related to radiographic measures of acetabular coverage.

Conclusions

Damage at the chondrolabral junction was very common in this hip arthroscopy cohort, while isolated defects to the acetabular cartilage or labrum were rare. These data demonstrate that the severity of cam morphology, quantified through radiographic measurements, is a primary predictor of the location and severity of chondral and labral damage, and that focal chondral defects may represent a unique subset of patients that deserves further study.



V. Keshavarzzadeh, R.M. Kirby, A. Narayan. “Numerical integration in multiple dimensions with designed quadrature,” In CoRR, 2018.

ABSTRACT

We present a systematic computational framework for generating positive quadrature rules in multiple dimensions on general geometries. A direct moment-matching formulation that enforces exact integration on polynomial subspaces yields nonlinear conditions and geometric constraints on nodes and weights. We use penalty methods to address the geometric constraints, and subsequently solve a quadratic minimization problem via the Gauss-Newton method. Our analysis provides guidance on requisite sizes of quadrature rules for a given polynomial subspace, and furnishes useful user-end stability bounds on error in the quadrature rule in the case when the polynomial moment conditions are violated by a small amount due to, e.g., finite precision limitations or stagnation of the optimization procedure. We present several numerical examples investigating optimal low-degree quadrature rules, Lebesgue constants, and 100-dimensional quadrature. Our capstone examples compare our quadrature approach to popular alternatives, such as sparse grids and quasi-Monte Carlo methods, for problems in linear elasticity and topology optimization.
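
A toy instance of the moment-matching formulation can be set up directly in Python. The sketch below is our simplification (a 4-node rule for total degree 3 on [-1,1]^2 with the uniform weight, solved with a generic least-squares routine instead of the paper's penalized Gauss-Newton scheme) and shows the residual system on nodes and weights.

import numpy as np
from itertools import product
from scipy.optimize import least_squares

degree, n_nodes, dim = 3, 4, 2
idx = [a for a in product(range(degree + 1), repeat=dim) if sum(a) <= degree]

def mono_moment(a):                        # exact integral of x^a1 * y^a2 over [-1,1]^2
    return np.prod([2.0 / (k + 1) if k % 2 == 0 else 0.0 for k in a])

def residual(z):                           # moment-matching residuals in nodes and weights
    pts = z[:n_nodes * dim].reshape(n_nodes, dim)
    w = z[n_nodes * dim:]
    return np.array([w @ np.prod(pts ** np.array(a), axis=1) - mono_moment(a)
                     for a in idx])

rng = np.random.default_rng(3)
g = 1 / np.sqrt(3.0)                       # initialize near a 2x2 tensor Gauss rule
pts0 = np.array([[-g, -g], [-g, g], [g, -g], [g, g]]) + 0.05 * rng.standard_normal((4, 2))
z0 = np.concatenate([pts0.ravel(), np.ones(n_nodes)])
sol = least_squares(residual, z0)
print("moment-matching residual norm:", np.linalg.norm(sol.fun))
print("weights:", sol.x[n_nodes * dim:])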



K. Knudson, B. Wang. “Discrete Stratified Morse Theory: A User's Guide,” In CoRR, 2018.

ABSTRACT

Inspired by the works of Forman on discrete Morse theory, which is a combinatorial adaptation to cell complexes of classical Morse theory on manifolds, we introduce a discrete analogue of the stratified Morse theory of Goresky and MacPherson (1988). We describe the basics of this theory and prove fundamental theorems relating the topology of a general simplicial complex with the critical simplices of a discrete stratified Morse function on the complex. We also provide an algorithm that constructs a discrete stratified Morse function out of an arbitrary function defined on a finite simplicial complex; this is different from simply constructing a discrete Morse function on such a complex. We borrow Forman's idea of a "user's guide," where we give simple examples to convey the utility of our theory.



L. Kuhnel, T. Fletcher, S. Joshi, S. Sommer. “Latent Space Non-Linear Statistics,” In CoRR, 2018.

ABSTRACT

Given data, deep generative models, such as variational autoencoders (VAE) and generative adversarial networks (GAN), train a lower-dimensional latent representation of the data space. The linear Euclidean geometry of data space pulls back to a nonlinear Riemannian geometry on the latent space. The latent space thus provides a low-dimensional nonlinear representation of data and classical linear statistical techniques are no longer applicable. In this paper we show how statistics of data in their latent space representation can be performed using techniques from the field of nonlinear manifold statistics. Nonlinear manifold statistics provide generalizations of Euclidean statistical notions including means, principal component analysis, and maximum likelihood fits of parametric probability distributions. We develop new techniques for maximum likelihood inference in latent space, and address the computational complexity of using geometric algorithms with high-dimensional data by training a separate neural network to approximate the Riemannian metric and cometric tensor capturing the shape of the learned data manifold.
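
The central geometric object, the pullback metric on latent space, is simple to write down. The PyTorch sketch below (our illustration with an untrained stand-in decoder, not the authors' code) evaluates G(z) = J(z)^T J(z) from the decoder Jacobian; geodesics, Fréchet means, and other manifold statistics are then computed with respect to this metric, which the paper approximates with a separate network for efficiency.

import torch

# Stand-in trained decoder g: latent space (dim 2) -> data space (dim 784).
decoder = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, 784))

def metric_tensor(z):
    J = torch.autograd.functional.jacobian(decoder, z)       # (784, 2) Jacobian of the decoder at z
    return J.T @ J                                            # (2, 2) pullback Riemannian metric

z = torch.zeros(2)
G = metric_tensor(z)
dz = torch.tensor([0.1, 0.0])                                 # a small step in latent space
print("metric at z=0:", G)
print("Riemannian step length:", torch.sqrt(dz @ G @ dz))     # sqrt(dz^T G dz)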



S. Kumar, A. Humphrey, W. Usher, S. Petruzza, B. Peterson, J. A. Schmidt, D. Harris, B. Isaac, J. Thornock, T. Harman, V. Pascucci, M. Berzins. “Scalable Data Management of the Uintah Simulation Framework for Next-Generation Engineering Problems with Radiation,” In Supercomputing Frontiers, Springer International Publishing, pp. 219--240. 2018.
ISBN: 978-3-319-69953-0
DOI: 10.1007/978-3-319-69953-0_13

ABSTRACT

The need to scale next-generation industrial engineering problems to the largest computational platforms presents unique challenges. This paper focuses on data management related problems faced by the Uintah simulation framework at a production scale of 260K processes. Uintah provides a highly scalable asynchronous many-task runtime system, which in this work is used for the modeling of a 1000 megawatt electric (MWe) ultra-supercritical (USC) coal boiler. At 260K processes, we faced both parallel I/O and visualization related challenges, e.g., the default file-per-process I/O approach of Uintah did not scale on Mira. In this paper we present a simple to implement, restructuring based parallel I/O technique. We impose a restructuring step that alters the distribution of data among processes. The goal is to distribute the dataset such that each process holds a larger chunk of data, which is then written to a file independently. This approach finds a middle ground between two of the most common parallel I/O schemes--file per process I/O and shared file I/O--in terms of both the total number of generated files, and the extent of communication involved during the data aggregation phase. To address scalability issues when visualizing the simulation data, we developed a lightweight renderer using OSPRay, which allows scientists to visualize the data interactively at high quality and make production movies. Finally, this work presents a highly efficient and scalable radiation model based on the sweeping method, which significantly outperforms previous approaches in Uintah, like discrete ordinates. The integrated approach allowed the USC boiler problem to run on 260K CPU cores on Mira.
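
Independent of Uintah's implementation, the restructuring idea can be conveyed with a small mpi4py sketch (hypothetical patch sizes and file names): groups of ranks aggregate their patches to one aggregator rank, which writes a single larger file, landing between file-per-process and fully shared-file I/O.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
ranks_per_file = 4                                   # aggregation factor (tunable)

patch = np.full(1000, rank, dtype=np.float64)        # this rank's piece of the dataset
group = rank // ranks_per_file
sub = comm.Split(color=group, key=rank)              # one sub-communicator per output file
gathered = sub.gather(patch, root=0)                 # restructure: move patches to the aggregator
if sub.Get_rank() == 0:
    np.concatenate(gathered).tofile(f"chunk_{group:05d}.bin")   # one larger file per group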



B. Kundu, A. A. Brock, D. J. Englot, C. R. Butson, J. D. Rolston. “Deep brain stimulation for the treatment of disorders of consciousness and cognition in traumatic brain injury patients: a review,” In Neurosurgical Focus, Vol. 45, No. 2, Journal of Neurosurgery Publishing Group (JNSPG), pp. E14. Aug, 2018.
DOI: 10.3171/2018.5.focus18168

ABSTRACT

Traumatic brain injury (TBI) is a looming epidemic, growing most rapidly in the elderly population. Some of the most devastating sequelae of TBI are related to depressed levels of consciousness (e.g., coma, minimally conscious state) or deficits in executive function. To date, pharmacological and rehabilitative therapies to treat these sequelae are limited. Deep brain stimulation (DBS) has been used to treat a number of pathologies, including Parkinson disease, essential tremor, and epilepsy. Animal and clinical research shows that targets addressing depressed levels of consciousness include components of the ascending reticular activating system and areas of the thalamus. Targets for improving executive function are more varied and include areas that modulate attention and memory, such as the frontal and prefrontal cortex, fornix, nucleus accumbens, internal capsule, thalamus, and some brainstem nuclei. The authors review the literature addressing the use of DBS to treat higher-order cognitive dysfunction and disorders of consciousness in TBI patients, while also offering suggestions on directions for future research.