SDAV Publications

2017


Rob Latham, Matthieu Dorier, Robert Ross. “Get out of the way! Applying compression to internal data structures,” In PDSW-DISCS 2016: 1st Joint International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems, held in conjunction with SC16, 2017.

ABSTRACT

As the amount of memory per core decreases in post-petascale machines, the memory footprint of any libraries and middleware used by HPC applications must be reduced. While scientific data can contain a great deal of entropy and require specialized compression techniques, the descriptions of scientific data layouts, as opposed to their contents, turn out to be highly compressible. In this paper we present two approaches to compressing scientific data layout descriptions. We also describe two data structures for managing the compressed data. We incorporated our approach into the ROMIO MPI-IO implementation to reduce memory consumption, observing an 89x reduction in memory overhead with a 25% increase in CPU overhead.
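
The abstract does not reproduce the encodings themselves; as a rough standalone illustration of why layout descriptions (as opposed to contents) compress so well, the sketch below delta-encodes and then run-length-encodes a regular strided offset list. This is a minimal toy under assumed inputs, not ROMIO's actual data structures.

```c
/* Minimal sketch (not ROMIO's actual code): a strided file layout,
 * described as absolute offsets, collapses to almost nothing once
 * delta-encoded and run-length encoded, because the stride repeats. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    enum { N = 1000000 };
    long *offsets = malloc(N * sizeof *offsets);
    if (!offsets)
        return 1;

    /* Hypothetical layout: one element every 1024 bytes. */
    for (long i = 0; i < N; i++)
        offsets[i] = i * 1024;

    /* Delta-encode (store differences between consecutive offsets),
     * then run-length encode the deltas as (delta, count) pairs. */
    long runs = 0;
    long prev_delta = -1;
    for (long i = 1; i < N; i++) {
        long delta = offsets[i] - offsets[i - 1];
        if (delta != prev_delta) {
            runs++;             /* a new (delta, count) pair starts */
            prev_delta = delta;
        }
    }

    printf("raw description : %zu bytes\n", N * sizeof *offsets);
    printf("rle(delta)      : %zu bytes (%ld runs)\n",
           runs * 2 * sizeof(long), runs);
    free(offsets);
    return 0;
}
```

For this perfectly regular layout the eight-megabyte offset list collapses to a single run; irregular layouts compress less, which is the tradeoff the two approaches in the paper navigate.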



W. Widanagamaachchi, A. Jacques, B. Wang, E. Crosman, P.-T. Bremer, V. Pascucci, J. Horel. “Exploring the Evolution of Pressure-Perturbations to Understand Atmospheric Phenomena,” In IEEE Pacific Visualization Symposium (PacificVis), 2017.


2016


Andrew C. Bauer, Hasan Abbasi, James Ahrens, Hank Childs, Berk Geveci, Scott Klasky, Kenneth Moreland, Patrick O'Leary, Venkatram Vishwanath, Brad Whitlock, E. Wes Bethel. “In Situ Methods, Infrastructures, and Applications on High Performance Computing Platforms,” In Computer Graphics Forum, Vol. 35, No. 3, pp. 577--597. June, 2016.
DOI: 10.1111/cgf.12930

ABSTRACT

The considerable interest in the high performance computing (HPC) community in analyzing and visualizing data without first writing it to disk, i.e., in situ processing, is due to several factors. First is an I/O cost savings, where data is analyzed/visualized while being generated, without first storing it to a filesystem. Second is the potential for increased accuracy, where fine temporal sampling of transient analysis might expose complex behavior missed by coarse temporal sampling. Third is the ability to use all available resources, CPUs and accelerators, in the computation of analysis products. This STAR paper brings together researchers, developers, and practitioners using in situ methods in extreme-scale HPC with the goal of presenting existing methods, infrastructures, and a range of computational science and engineering applications using in situ analysis and visualization.



Kevin Bensema, Luke J. Gosink, Harald Obermaier, Kenneth I. Joy. “Modality-Driven Classification and Visualization of Ensemble Variance,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 22, No. 10, 2016.



Harsh Bhatia, Attila G. Gyulassy, Valerio Pascucci, Martina Bremer, Mitchell T. Ong, Vincenzo Lordi, Erik W. Draeger, John E. Pask, Peer-Timo Bremer. “Interactive Exploration of Atomic Trajectories Through Relative-Angle Distribution and Associated Uncertainties,” In 2016 IEEE Pacific Visualization Symposium (PacificVis), pp. 120-127. April, 2016.



Ayan Biswas, Richard Strelitz, Jonathan Woodring, Chun-Ming Chen, Han-Wei Shen. “A Scalable Streamline Generation Algorithm Via Flux-Based Isocontour Extraction,” In Eurographics Symposium on Parallel Graphics and Visualization (EGPGV16), June, 2016.



Ebru Bozdag, Daniel Peter, Matthieu Lefebvre, Dimitri Komatitsch, Jeroen Tromp, Judith Hill, Norbert Podhorszki, David Pugmire. “Global adjoint tomography: first-generation model,” In Geophysical Journal International, Vol. 207, No. 3, Oxford University Press, pp. 1739-1766. Dec, 2016.
DOI: 10.1093/gji/ggw356

ABSTRACT

We present the first-generation global tomographic model constructed based on adjoint tomography, an iterative full-waveform inversion technique. Synthetic seismograms were calculated using GPU-accelerated spectral-element simulations of global seismic wave propagation, accommodating effects due to 3-D anelastic crust and mantle structure, topography and bathymetry, the ocean load, ellipticity, rotation, and self-gravitation. Fréchet derivatives were calculated in 3-D anelastic models based on an adjoint-state method. The simulations were performed on the Cray XK7 named ‘Titan’, a computer with 18,688 GPU accelerators housed at Oak Ridge National Laboratory. The transversely isotropic global model is the result of 15 tomographic iterations, which systematically reduced differences between observed and simulated three-component seismograms. Our starting model combined 3-D mantle model S362ANI with 3-D crustal model Crust2.0. We simultaneously inverted for structure in the crust and mantle, thereby eliminating the need for widely used ‘crustal corrections’. We used data from 253 earthquakes in the magnitude range 5.8 ≤ Mw ≤ 7.0. We started inversions by combining ∼30 s body-wave data with ∼60 s surface-wave data. The shortest period of the surface waves was gradually decreased, and in the last three iterations we combined ∼17 s body waves with ∼45 s surface waves. We started using 180 min long seismograms after the 12th iteration and assimilated minor- and major-arc body and surface waves. The 15th iteration model features enhancements of well-known slabs, an enhanced image of the Samoa/Tahiti plume, as well as various other plumes and hotspots, such as Caroline, Galapagos, Yellowstone and Erebus. Furthermore, we see clear improvements in slab resolution along the Hellenic and Japan Arcs, as well as subduction east of the Scotia Plate, which does not exist in the starting model. Point-spread function tests demonstrate that we are approaching the resolution of continental-scale studies in some areas, for example, underneath Yellowstone. This is a consequence of our multiscale smoothing strategy, in which we define our smoothing operator as a function of the approximate Hessian kernel, thereby smoothing gradients less wherever we have good ray coverage, such as underneath North America.
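
For orientation, the iterative scheme the abstract outlines can be written in a generic schematic form. This is textbook adjoint-tomography notation under our own symbol choices, not the paper's exact formulation:

```latex
% Schematic adjoint-tomography iteration (generic form, not the
% paper's exact notation).
\begin{align*}
  \chi(\mathbf{m}) &= \frac{1}{2} \sum_{s,r} \int
      \left\| \mathbf{d}^{\mathrm{obs}}_{s,r}(t)
            - \mathbf{d}^{\mathrm{syn}}_{s,r}(t;\mathbf{m}) \right\|^2 dt
      && \text{waveform misfit over sources $s$, receivers $r$} \\
  \mathbf{g}_k &= \nabla_{\mathbf{m}}\, \chi(\mathbf{m}_k)
      && \text{gradient from one forward + one adjoint simulation} \\
  \mathbf{m}_{k+1} &= \mathbf{m}_k - \alpha_k\, S(\tilde{H})\, \mathbf{g}_k
      && \text{smoothed descent step, } k = 1, \dots, 15
\end{align*}
```

Here $S(\tilde{H})$ stands for the multiscale smoothing operator described in the abstract: its width is chosen as a function of the approximate Hessian $\tilde{H}$, so gradients are smoothed less where ray coverage is good.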



P.-T. Bremer, A. Gruber, J. Bennett, A. Gyulassy, H. Kolla, J. Chen, R.W. Grout. “Identifying turbulent structures through topological segmentation,” In Communications in Applied Mathematics and Computational Science, Vol. 11, No. 1, pp. 37-53. 2016.



P.-T. Bremer. “ADAPT - Adaptive Thresholds for Feature Extraction,” In Topology-Based Methods in Visualization, Springer, 2016.



Hamish Carr, Gunther Weber, Christopher Sewell, James Ahrens. “Parallel Peak Pruning for Scalable SMP Contour Tree Computation,” In Proceedings of the IEEE Symposium on Large Data Analysis and Visualization (LDAV), Baltimore, Maryland, Note: Best Paper Award. The results reported in this paper stem from the PISTON/VTK-m work established by SDAV; the specific work for this paper was funded under the ASCR XVIS project. October, 2016.



Chun-Ming Chen, Soumya Dutta, Xiaotong Liu, Gregory Heinlein, Han-Wei Shen, Jen-Ping Chen. “Visualization and Analysis of Rotating Stall for Transonic Jet Engine Simulation,” In IEEE SciVis 2015, also in IEEE Transactions on Visualization and Computer Graphics (TVCG), Vol. 22, No. 1, 2016.



Jong Youl Choi, Tahsin Kurc, Jeremy Logan, Matthew Wolf, Eric Suchyta, James Kress, David Pugmire, Norbert Podhorszki, Eun-Kyu Byun, Mark Ainsworth, Manish Parashar, Scott Klasky. “Stream processing for near real-time scientific data analysis,” In IEEE New York Scientific Data Summit (NYSDS), IEEE, pp. 1-8. August, 2016.
DOI: 10.1109/NYSDS.2016.7747804

ABSTRACT

The demand for near real-time analysis of streaming data is increasing rapidly in scientific projects. This trend is driven by the fact that it is expensive and time-consuming to design and execute complex experiments and simulations. During an experiment, the research team and the team at the experiment facility will want to analyze data as it is generated, interpret it, and collaboratively make decisions to modify the experiment parameters or abort the experiment, in order to prevent events that may damage experimental instruments or to avoid wasting resources if there is a problem. The increasing velocity and volume of streaming data and the multi-institutional nature of large-scale scientific projects present challenges to near real-time analysis of streaming data. In this work we develop a framework to address these challenges. This framework provides an interface for applications to define and interact with named, self-describing streams, takes advantage of advanced network technologies, and implements support for the reduction and compression of data at the source. We describe this framework and demonstrate its application in three scientific applications.
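
The abstract names two concrete ingredients: named, self-describing streams and reduction at the source. The sketch below illustrates both ideas in plain C; every type and function name here is hypothetical and does not come from the paper's framework.

```c
/* Hypothetical sketch of a named, self-describing stream record with
 * source-side reduction. All names are illustrative, not the paper's
 * actual API. */
#include <stdio.h>

#define MAX_NAME 64

typedef struct {
    char   name[MAX_NAME];   /* stream name, e.g. "diag/temperature"  */
    char   dtype[8];         /* self-description: element type        */
    size_t count;            /* self-description: element count       */
    double data[1024];       /* payload (fixed-size for the sketch)   */
} StreamRecord;

/* Source-side reduction: keep every `stride`-th sample before the
 * record leaves the experiment facility. */
static size_t reduce_decimate(StreamRecord *rec, size_t stride)
{
    size_t kept = 0;
    for (size_t i = 0; i < rec->count; i += stride)
        rec->data[kept++] = rec->data[i];
    rec->count = kept;
    return kept;
}

int main(void)
{
    StreamRecord rec;
    snprintf(rec.name, sizeof rec.name, "diag/temperature");
    snprintf(rec.dtype, sizeof rec.dtype, "f64");
    rec.count = 1024;
    for (size_t i = 0; i < rec.count; i++)
        rec.data[i] = (double)i;         /* synthetic payload */

    size_t before = rec.count;
    size_t after  = reduce_decimate(&rec, 4);  /* 4x reduction at source */
    printf("stream %s dtype=%s elements: %zu -> %zu\n",
           rec.name, rec.dtype, before, after);
    return 0;
}
```

The self-describing header (name, type, count) is what lets a downstream consumer interpret the payload without out-of-band coordination, which matters in the multi-institutional setting the abstract describes.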



Dharshi Devendran, Suren Byna, Bin Dong, Brian van Straalen, Hans Johansen, Noel Keen, Nagiza Samatova. “Collective I/O Optimizations for Adaptive Mesh Refinement Data Writes on Lustre File System,” In Cray User Group (CUG), May, 2016.



Bin Dong, Suren Byna, Kesheng Wu. “SDS-Sort: Scalable Dynamic Skew-aware Parallel Sorting,” In The ACM International Symposium on High-Performance Parallel and Distributed Computing (HPDC), July, 2016.



Soumya Dutta, Han-Wei Shen. “Distribution Driven Extraction and Tracking of Features for Time-varying Data Analysis,” In IEEE SciVis 2015, also in IEEE Transactions on Visualization and Computer Graphics, Vol. 22, No. 1, 2016.



Dianwei Han, Ankit Agrawal, Wei-keng Liao, Alok Choudhary. “A Novel Scalable DBSCAN Algorithm with Spark,” In the 5th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics, held in conjunction with the International Parallel & Distributed Processing Symposium, Chicago, May, 2016.



Chien-Hsin Hsueh, Jacqueline Chu, Kwan-Liu Ma, Joyce Ma, Jennifer Frazier. “Fostering Comparisons: Designing an Interactive Exhibit that Visualizes Marine Animal Behaviors,” In Proceedings of PacificVis 2016, 2016.



Qiao Kang, Wei-keng Liao, Ankit Agrawal, Alok Choudhary. “A Filtering-based Clustering Algorithm for Improving Spatio-temporal Kriging Interpolation Accuracy,” In the 25th ACM International Conference on Information and Knowledge Management, Indianapolis, Indiana, October, 2016.



James Kress, Randy Michael Churchill, Scott Klasky, Mark Kim, Hank Childs, David Pugmire. “Preparing for In Situ Processing on Upcoming Leading-edge Supercomputers,” In Supercomputing Frontiers and Innovations, Vol. 3, No. 4, pp. 49-65. 2016.
DOI: 10.14529/jsfi160404

ABSTRACT

High performance computing applications are producing increasingly large amounts of data and placing enormous stress on current capabilities for traditional post-hoc visualization techniques. Because of the growing compute and I/O imbalance, data reductions, including in situ visualization, are required. These reduced data are used for analysis and visualization in a variety of different ways. Many of the visualization and analysis requirements are known a priori, but when they are not, scientists depend on the reduced data to accurately represent the simulation in post hoc analysis. The contribution of this paper is a description of the directions we are pursuing to help a large-scale fusion simulation code succeed on the next generation of supercomputers. These directions include the role of in situ processing for performing data reductions, as well as the tradeoffs between data size and data integrity within the context of complex operations in a typical scientific workflow.
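
As a rough illustration of the size-versus-integrity tradeoff the abstract raises, the standalone sketch below subsamples a synthetic field at increasing reduction factors and reports the worst-case error of reconstructing the dropped samples. It is a toy under assumed data, not code from the paper or from the fusion simulation.

```c
/* Illustrative size-vs-integrity tradeoff: subsample a field "in situ",
 * then measure the worst-case error of linearly reconstructing the
 * dropped samples. Synthetic data; not code from the paper. */
#include <math.h>
#include <stdio.h>

#define N 4096

int main(void)
{
    double field[N];
    for (int i = 0; i < N; i++)               /* synthetic field */
        field[i] = sin(0.01 * i) + 0.1 * sin(0.5 * i);

    for (int stride = 2; stride <= 16; stride *= 2) {
        double max_err = 0.0;
        /* Reconstruct skipped samples by linear interpolation
         * between the retained ones. */
        for (int i = 0; i + stride < N; i += stride) {
            for (int j = 1; j < stride; j++) {
                double t = (double)j / stride;
                double approx = (1.0 - t) * field[i] + t * field[i + stride];
                double err = fabs(approx - field[i + j]);
                if (err > max_err) max_err = err;
            }
        }
        printf("reduction %2dx  ->  max reconstruction error %.4f\n",
               stride, max_err);
    }
    return 0;
}
```

The point of the exercise: each doubling of the reduction factor shrinks the stored data but grows the worst-case error, and whether that error is acceptable depends on the post hoc analyses, which is exactly why the paper surveys them up front.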



James Kress, David Pugmire, Scott Klasky, Hank Childs. “Visualization and analysis requirements for in situ processing for a large-scale fusion simulation code,” In ISAV '16: Proceedings of the 2nd Workshop on In Situ Infrastructures for Enabling Extreme-scale Analysis and Visualization, pp. 45-50. 2016.
DOI: 10.1109/ISAV.2016.14

ABSTRACT

In situ techniques have become a very active research area since they have been shown to be an effective way to combat the issues associated with the ever-growing gap between computation and I/O bandwidth. In order to take full advantage of in situ techniques with a large-scale simulation code, it is critical to understand the breadth and depth of its analysis requirements. In this paper, we present the results of a survey of members of the XGC1 fusion simulation code team, conducted to gather their requirements for analysis and visualization. We look at these requirements from the perspective of in situ processing and present a list of XGC1 analysis tasks performed by its physicists, engineers, and visualization specialists. This analysis of the specific needs and use cases of a single code is important in understanding the nature of the needs that simulations have in terms of data movement and usage for visualization and analysis, now and in the future.