Developing software tools for science has always been a central vision of the SCI Institute.
  • PIs: Chris Johnson, Valerio Pascucci
  • Advanced graphics and visualization capabilities made available to the rendering, scientific visualization and virtual design communities

Past Intel Centers

Intel PCC: Modernizing Scientific Visualization and Computation on Many-core Architectures

  • PIs: Valerio Pascucci, Martin Berzins
  • Applying OSPRay to visualization and HPC production in practice (e.g., Uintah, VisIt)
  • Visualization analysis research: IO, topology, multifield/multidimensional (ViSUS)
  • Staging Intel resources for both the Vis Center and IPCC
  • Preparing for exascale on DOE A21 via early science program
  • Optimizing next-gen Navy weather codes for Intel KNL and beyond

Intel Visualization Center

  • PIs: Chris Johnson, Ingo Wald (Intel)
  • Large-scale vis and HPC technology on CPU/Phi hardware (OSPRay)

Current and Past Projects

CPU Ray Tracing of Tree-Based Adaptive Mesh Refinement Data

  • A novel reconstruction strategy for cell-centered AMR data, GTI, that enables artifact-free volume and isosurface rendering
  • An efficient high-fidelity visualization solution for both AMR and other multiresolution grid datasets on multicore CPUs, supporting empty-space skipping, adaptive sampling, and isosurfacing
  • Integration of our method into the OSPRay ray tracing library and open-source release to make it widely available to domain scientists
  • The paper was published at EuroVis 2020
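As a rough illustration of the empty-space skipping mentioned above, the sketch below is a simplified 1D analogue (not the GTI method itself; all names are hypothetical). Bricks whose precomputed maximum falls below the sampling threshold are skipped in one jump instead of being sampled per voxel:

```python
def march_with_space_skipping(volume, brick_size, threshold, step=1.0):
    """volume: 1D list of samples; returns the indices actually sampled."""
    n = len(volume)
    # Precompute per-brick maxima (the space-skipping acceleration data).
    brick_max = [max(volume[i:i + brick_size]) for i in range(0, n, brick_size)]
    sampled = []
    t = 0.0
    while t < n:
        brick = int(t) // brick_size
        if brick_max[brick] < threshold:
            # Empty brick: jump straight to the start of the next brick.
            t = (brick + 1) * brick_size
            continue
        sampled.append(int(t))
        t += step
    return sampled
```

In a real renderer the same idea applies per ray in 3D, and the step size can additionally adapt to the local refinement level.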



Hybrid Isosurface Ray Tracing of Block-Structured (BS) AMR Data in OSPRay

  • Proposed a novel BS-AMR reconstruction strategy, the octant method, which is locally rectilinear, adaptive, and continuous even across level boundaries, and applicable to both isosurface and direct volume rendering
  • Provided an efficient hybrid implicit isosurface ray-tracing approach that combines ideas from isosurface extraction and implicit isosurface ray-tracing, applicable to both non-AMR data and (using our octant method) BS-AMR data.
  • Extended OSPRay to support interactive high-fidelity rendering of BS-AMR data
  • The paper was presented at IEEE VIS 2018


In Situ Visualization of Molecular Dynamics Simulations

  • SENSEI: LBL, ANL, Georgia Tech, VisIt, and Kitware collaboration to enable portable in situ analysis
  • Now released on Github, presented at ISAV18
  • In Transit coupling via libIS, informing design of SENSEI's in transit API
  • Working on merging libIS adaptor and SENSEI execution application into SENSEI
  • OSPRay in transit viewer using MPI-Distributed Device

Preparing for Exascale

  • Collaborating with GE through the DOE/NNSA PSAAP II Initiative to support design and evaluation of an existing ultra-supercritical clean-coal boiler
  • Adopted Kokkos within Uintah to prepare for large-scale combustion simulations under the A21 Exascale Early Science Program, using Kokkos rather than direct OpenMP to improve portability to future architectures
  • Developed a portability layer and hybrid parallelism approach within Uintah to interface to Kokkos::OpenMP and Kokkos::CUDA
  • Demonstrated good strong-scaling with MPI+Kokkos to: 442,368 threads across 1,728 KNLs on TACC Stampede 2, as well as 131,072 threads across 512 KNLs on ALCF Theta, and 64 GPUs on ORNL Titan
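A portability layer of the kind described above lets application loops be written once against a single parallel-for abstraction, with the backend selected per build or run. The toy sketch below shows only the shape of such a layer (all names are hypothetical; "serial" and "threads" stand in for backends such as Kokkos::OpenMP and Kokkos::CUDA):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for(n, body, backend="serial", workers=4):
    """Run body(i) for i in [0, n); the backend tag picks the execution space."""
    if backend == "serial":
        for i in range(n):
            body(i)
    elif backend == "threads":
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(body, range(n)))
    else:
        raise ValueError(f"unknown backend: {backend}")
```

The payoff is that adding a new architecture means adding one backend branch, not rewriting every loop.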

Performance Portability Across Diverse Systems

  • Extended Uintah's MPI+Kokkos infrastructure to improve node utilization
  • Working to identify loop characteristics that scale well with Kokkos on Intel architectures
  • Preparing for large-scale runs with MPI+Kokkos on NERSC Cori, TACC Frontera (pending access), and LLNL Lassen

Optimizing Numerical Weather Prediction Codes

  • Performance optimization of physics routines
  • Positivity-preserving mapping in physics-dynamics coupling
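Positivity-preserving mappings come in many flavors; one of the simplest clips negative values and rescales to conserve the total. The sketch below is purely illustrative (not necessarily the scheme used in the Navy weather codes):

```python
def clip_and_conserve(values):
    """Clip negatives to zero, then rescale so the total (the 'mass')
    is unchanged - one simple positivity-preserving fixer."""
    total = sum(values)
    clipped = [max(v, 0.0) for v in values]
    pos = sum(clipped)
    if pos == 0.0 or total <= 0.0:
        return clipped
    scale = total / pos
    return [v * scale for v in clipped]
```

Such fixers matter in physics-dynamics coupling because interpolation can produce small negative values for quantities (e.g., moisture) that must stay non-negative.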

OSPRay Distributed FrameBuffer

  • Have made significant improvements to OSPRay's distributed rendering scalability on Xeon and Xeon Phi
  • Combination of optimizations to communication pattern, code, data transferred and data compression
  • Provide better scalability with compute or memory capacity to end users, releasing in OSPRay 1.8
4096×1024 image of a 1 TB volume plus 4.35 billion transparent triangles, rendered at 5 FPS on 64 Stampede2 KNL nodes
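A core idea in distributed framebuffers is that each rank owns the final compositing of a subset of image tiles, so partial results are routed to the owning rank. One possible tile-to-rank assignment is sketched below (a simple round-robin scheme with hypothetical names; the actual OSPRay mapping may differ):

```python
def tiles_for_rank(rank, num_ranks, width, height, tile=64):
    """Return the (x, y) origins of the image tiles this rank composites."""
    tiles_per_row = (width + tile - 1) // tile
    tiles_per_col = (height + tile - 1) // tile
    owned = []
    for ty in range(tiles_per_col):
        for tx in range(tiles_per_row):
            if (ty * tiles_per_row + tx) % num_ranks == rank:
                owned.append((tx * tile, ty * tile))
    return owned
```

Interleaving tiles across ranks this way balances compositing work even when the image's cost varies spatially.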

OSPRay MPI-Distributed Device

  • Enable MPI-parallel applications to use OSPRay for scalable rendering with the Distributed FrameBuffer
  • Releasing soon in OSPRay 1.8
  • Imposes no requirements on data types/geometry, supports existing OSPRay extension modules
  • Enables direct OSPRay integration into parallel applications like ParaView, VisIt, VL3 (ANL), or in situ via ParaView Catalyst, VisIt LibSim

Ray Tracing Generalized Stream Lines in OSPRay

  • High-performance, high-fidelity technique for interactively ray tracing generalized stream-line primitives
  • Supports fixed and varying radii, bifurcations (e.g., in neuron morphologies), and correct transparency
Semi-transparent visualization of DTI data (218k links); flow past a torus (fixed-radius pathlines, ~6.5M links)
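A common geometric building block for tube-like line primitives is the point-to-segment distance: a sample point lies inside a fixed-radius link exactly when its distance to the segment is at most the radius. An illustrative sketch (not the paper's actual intersection kernel):

```python
def point_segment_distance(p, a, b):
    """Distance from point p to segment ab (all 3D tuples). A stream-line
    'link' of radius r contains p iff this distance is <= r."""
    ab = tuple(b[i] - a[i] for i in range(3))
    ap = tuple(p[i] - a[i] for i in range(3))
    denom = sum(c * c for c in ab)
    # Clamp the projection parameter so the closest point stays on the segment.
    t = 0.0 if denom == 0.0 else max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = tuple(a[i] + t * ab[i] for i in range(3))
    return sum((p[i] - closest[i]) ** 2 for i in range(3)) ** 0.5
```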

Bricktree for Large-scale Volumetric Data Visualization

  • Interactive visualization solution for large-scale volumes in OSPRay
  • Quickly loads progressively higher resolutions of data, reducing user wait times
  • Bricktree, a low-overhead hierarchical structure that encodes a large volume in a multi-resolution representation
  • Rendered via OSPRay module
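The progressive-loading idea, rendering a coarse representation first while finer data streams in, can be pictured with a simple averaging pyramid. The 1D toy below is far simpler than the actual Bricktree encoding and assumes a power-of-two length:

```python
def build_levels(volume):
    """Build a multi-resolution pyramid of a 1D volume by 2:1 averaging;
    coarse levels can be rendered first while finer data streams in."""
    levels = [list(volume)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([(prev[i] + prev[i + 1]) / 2.0 for i in range(0, len(prev) - 1, 2)])
    return levels  # levels[0] is the finest, levels[-1] the coarsest
```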


Display Wall Rendering with OSPRay

  • Software infrastructure that allows parallel renderers (OSPRay) to render to large-tiled display clusters.
  • Decouples the rendering cluster and display cluster, providing a "service" that treats the display wall as a single virtual screen, and a client-side library that allows an MPI-parallel renderer to connect and send pixel data to the display wall
  • Exploring lightweight, inexpensive and easy to deploy options via Intel NUC + remote rendering cluster
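Treating the wall as a single virtual screen means routing each rendered pixel rectangle to the panels it overlaps. A minimal sketch of that routing follows (hypothetical names; the actual display-wall service also handles networking and frame synchronization):

```python
def route_pixels(rect, disp_w, disp_h, cols, rows):
    """Split a global-screen rect (x, y, w, h) into per-panel pieces for a
    cols x rows tiled wall of disp_w x disp_h panels. Returns a dict mapping
    (col, row) -> (local_x, local_y, w, h) in that panel's coordinates."""
    x, y, w, h = rect
    pieces = {}
    for r in range(rows):
        for c in range(cols):
            dx0, dy0 = c * disp_w, r * disp_h
            # Intersect the rect with this panel's region of the virtual screen.
            ix0, iy0 = max(x, dx0), max(y, dy0)
            ix1, iy1 = min(x + w, dx0 + disp_w), min(y + h, dy0 + disp_h)
            if ix0 < ix1 and iy0 < iy1:
                pieces[(c, r)] = (ix0 - dx0, iy0 - dy0, ix1 - ix0, iy1 - iy0)
    return pieces
```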
