Matthew Larsen, Cyrus Harrison, James Kress, David Pugmire, Jeremy S. Meredith, Hank Childs.
Performance Modeling of In Situ Rendering, In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC16), Salt Lake City, Utah, pp. 24:1--24:12. November, 2016.
Matthew Larsen, Kenneth Moreland, Chris R. Johnson, Hank Childs. Optimizing Multi-Image Sort-Last Parallel Rendering, In Proceedings of IEEE Symposium on Large Data Analysis and Visualization (LDAV), Baltimore, MD, pp. 37--46. October, 2016.
Sunwoo Lee, Wei-keng Liao, Ankit Agrawal, Nikos Hardavellas, Alok Choudhary. Evaluation of K-Means Data Clustering Algorithm on Intel Xeon Phi, In the Workshop on Advances in Software and Hardware for Big Data to Knowledge Discovery, held in conjunction with the IEEE BigData Conference, Washington, D.C., December, 2016.
Xiaotong Liu, Han-Wei Shen. Association Analysis for Visual Exploration of Multivariate Scientific Data Sets, In IEEE SciVis 2015, also in IEEE Transactions on Visualization and Computer Graphics (TVCG), Vol. 22, No. 1, 2016.
S. Liu, P.-T. Bremer, J. Thiagarajan, B. Wang, B. Summa, V. Pascucci. Grassmannian Atlas: A General Framework for Exploring Linear Projections of High-Dimensional Data, In Computer Graphics Forum, 2016.
In nuclear engineering, understanding the safety margins of a nuclear reactor via simulation is of paramount importance for predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of sensitivity information that inherently lack visual encodings, offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to nuclear scientists. Our framework is being deployed into RAVEN (Reactor Analysis and Virtual Control Environment), a multipurpose probabilistic risk assessment and uncertainty quantification framework. We evaluate our framework using a simulation dataset studying nuclear fuel performance.
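The partition-then-fit idea behind Morse-Smale regression can be illustrated with a drastically simplified one-dimensional sketch (this is not the paper's Morse-Smale complex computation or RAVEN code; the function names and sample data are invented for the example): split samples at local extrema of the response so that each piece is monotonic, then fit an interpretable linear model per piece, with each slope summarizing the input's influence on that region.

```python
# Toy 1-D illustration of partition-based regression: each monotonic
# segment of the response gets its own linear model.
def monotonic_segments(y):
    """Return indices bounding maximal monotonic runs of y (local extrema split them)."""
    breaks = [0]
    for i in range(1, len(y) - 1):
        # A sign change in consecutive differences marks a local extremum.
        if (y[i] - y[i - 1]) * (y[i + 1] - y[i]) < 0:
            breaks.append(i)
    breaks.append(len(y) - 1)
    return breaks

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one segment."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def segment_sensitivities(xs, ys):
    """Fit one linear model per monotonic segment; return ((x_lo, x_hi), slope) pairs."""
    b = monotonic_segments(ys)
    result = []
    for lo, hi in zip(b[:-1], b[1:]):
        slope, _ = fit_line(xs[lo:hi + 1], ys[lo:hi + 1])
        result.append(((xs[lo], xs[hi]), slope))
    return result
```

For a tent-shaped response rising then falling, this yields two segments with slopes of opposite sign, the kind of per-region summary the visual design then communicates.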
Changsung Moon, Dakota Medd, Paul Jones, Steve Harenberg, William Oxbury, Nagiza F. Samatova. Online Prediction of User Actions through an Ensemble Vote from Vector Representation and Frequency Analysis Models, In SIAM International Conference on Data Mining (SDM), May, 2016.
The history of interactions between a user and a piece of technology can be represented as a sequence of actions. The ability to predict a user's next action is useful to many applications. For example, a user interface that can anticipate the actions of a user is able to provide a more positive experience through just-in-time recommendations and proactively allocating or caching resources. Existing sequence prediction techniques have failed to address some of the challenges associated with this task, such as predicting an action that has never appeared for a given context. Techniques for an analogous task in the field of Natural Language Processing (NLP) avoid this issue; however, applying these NLP techniques directly to user action prediction would result in the loss of action frequency and action order, both of which are critically important. Therefore, we propose a method that unifies ideas from NLP with the task of sequence prediction. Our method, Frequency Vector (FVEC) prediction, is an online algorithm that predicts the top-N most likely next actions by combining scores from two models: a frequency analysis model and a vector representation model. In the frequency model, the score of an action is calculated based on how often the action has occurred immediately after a given context. In the vector representation model, a vector for each action is learned, and a score for an action is calculated based on the similarity of its vector to the mean of the vectors for each action in a given context. Evaluations of FVEC on three real-world datasets resulted in a consistently higher prediction accuracy (and lower standard deviation) than all tested sequence prediction algorithms.
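The two-model scoring combination described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' FVEC implementation: the action names, the hand-assigned vectors, and the equal-weight mixing parameter `alpha` are invented for the example (the paper learns its action vectors with an NLP-style model rather than taking them as input).

```python
import math
from collections import Counter

def frequency_scores(history, context, actions):
    """Score each action by how often it followed `context` in `history`."""
    counts = Counter()
    n = len(context)
    for i in range(len(history) - n):
        if tuple(history[i:i + n]) == tuple(context):
            counts[history[i + n]] += 1
    total = sum(counts.values()) or 1
    return {a: counts[a] / total for a in actions}

def vector_scores(vectors, context, actions):
    """Score each action by cosine similarity of its vector to the mean context vector."""
    dim = len(next(iter(vectors.values())))
    mean = [sum(vectors[c][d] for c in context) / len(context) for d in range(dim)]
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0
    return {a: cos(vectors[a], mean) for a in actions}

def predict_top_n(history, context, vectors, n=3, alpha=0.5):
    """Blend the two scores and return the top-N candidate next actions."""
    actions = sorted(set(history))
    f = frequency_scores(history, context, actions)
    v = vector_scores(vectors, context, actions)
    combined = {a: alpha * f[a] + (1 - alpha) * v[a] for a in actions}
    return sorted(actions, key=lambda a: -combined[a])[:n]
```

The vector model is what lets an action score well even if it never followed this exact context before, which is the cold-context gap the frequency model alone cannot cover.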
Kenneth Moreland. The Tensions of In Situ Visualization, In IEEE Computer Graphics and Applications, Vol. 36, No. 2, pp. 5--9. March/April, 2016.
In situ visualization is the coupling of visualization software with a simulation or other data producer to process the data "in memory" before the data are offloaded to a storage system. Although in situ visualization provides superior analysis, it carries implementation tradeoffs arising from conflicts with traditionally expected requirements; these numerous conflicting requirements create tensions that force difficult design decisions. This article examines the most prevalent tensions of in situ visualization.
Kenneth Moreland, Christopher Sewell, William Usher, Li-ta Lo, Jeremy Meredith, David Pugmire, James Kress, Hendrik Schroots, Kwan-Liu Ma, Hank Childs, Matthew Larsen, Chun-Ming Chen, Robert Maynard, Berk Geveci.
VTK-m: Accelerating the Visualization Toolkit for Massively Threaded Architectures, In IEEE Computer Graphics and Applications, Vol. 36, No. 3, pp. 48--58. May/June, 2016.
One of the most critical challenges for high-performance computing (HPC) scientific visualization is execution on massively threaded processors. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Our current production scientific visualization software is not designed for these new types of architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architectures.
Kenneth Moreland. Why We Use Bad Color Maps and What You Can Do About It, In Proceedings of Human Vision and Electronic Imaging (HVEI), February, 2016.
We know the rainbow color map is terrible, and it is emphatically reviled by the visualization community, yet its use persists. Why do we continue to use this perceptual encoding with so many known flaws? Instead of focusing on why we should not use rainbow colors, this position statement explores the rationale for why we pick these colors despite their flaws. Often the decision is influenced by a lack of knowledge, but even experts who know better sometimes choose poorly. A larger issue is the expedience that the rainbow color map has inadvertently come to embody. Knowing why the rainbow color map is used will help us move away from it. Education is good, but clearly not sufficient. We gain traction by making sensible color alternatives more convenient. It is not feasible to force a color map on users. If we supplant the rainbow color map as the common standard, we will find that even those wedded to it will migrate away.
Chris Muelder, Biao Zhu, Wei Chen, Hongxin Zhang, Kwan-Liu Ma. Visual Analysis of Cloud Computing Performance Using Behavioral Lines, In Proceedings of PacificVis 2016 (to appear), 2016.
Tyson Neuroth, Franz Sauer, Weixing Wang, Stephane Ethier, Choong-Seock Chang, Kwan-Liu Ma. Scalable Visualization of Time-varying Multi-parameter Distributions Using Spatially Organized Histograms, In IEEE Transactions on Visualization and Computer Graphics, Vol. PP, No. 99, 2016.
Harald Obermaier, Kevin Bensema, Kenneth I. Joy. Visual Trends Analysis in Time-Varying Ensembles, In IEEE Transactions on Visualization and Computer Graphics, Vol. 22, No. 10, 2016.
Diana Palsetia, William Hendrix, Sunwoo Lee, Ankit Agrawal, Wei-keng Liao, Alok Choudhary. Parallel Community Detection Algorithm Using a Data Partitioning Strategy with Pairwise Subdomain Duplication, In the 31st International Supercomputing Conference, Frankfurt, Germany, June, 2016.
Annie Preston, Ramyar Ghods, Jinrong Xie, Franz Sauer, Nick Leaf, Kwan-Liu Ma, Esteban Rangel, Eve Kovacs, Katrin Heitmann, Salman Habib. An Integrated Visualization System for Interactive Analysis of Large, Heterogeneous Cosmology Data, In Proceedings of PacificVis 2016 (to appear), 2016.
David Pugmire, James Kress, Hank Childs, Matthew Wolf, Greg Eisenhauer, Randy Churchill, Tahsin Kurc, Jong Choi, Scott Klasky, Kesheng Wu, Alex Sim, Junmin Gu. Visualization and Analysis for Near-Real-Time Decision Making in Distributed Workflows, In High Performance Data Analysis and Visualization (HPDAV) 2016, held in conjunction with IPDPS 2016, Chicago, May, 2016.
Roberto Sisneros, David Pugmire. Tuned to Terrible: A Study of Parallel Particle Advection State of the Practice, In High Performance Data Analysis and Visualization (HPDAV) 2016, held in conjunction with IPDPS 2016, Chicago, May, 2016.
Dave Pugmire, Jeremy Meredith, Scott Klasky, Jong Choi, Norbert Podhorszki, James Kress, Hank Childs. Visualization Plugins using VTKm for In-Transit Visualization with ADIOS, In Supercomputing Frontiers 2016, Singapore, March, 2016.
Stephen Ranshous, Steve Harenberg, Kshitij Sharma, Nagiza F. Samatova. A Scalable Approach for Outlier Detection in Edge Streams Using Sketch-based Approximations, In SIAM International Conference on Data Mining (SDM), May, 2016.