2017 IEEE Pacific Visualization Symposium (PacificVis)
DOI: 10.1109/pacificvis.2017.8031585

Homogeneity guided probabilistic data summaries for analysis and visualization of large-scale data sets

Cited by 25 publications (17 citation statements)
References 34 publications
“…This technique is called regular sampling. Regular sampling does not consider any data properties while selecting samples and due to the regular nature of sample selection, it produces artifacts and discontinuities during sample-based visual analysis [ 17 , 22 ].…”
Section: Methods
confidence: 99%
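The regular sampling the excerpt describes can be sketched as a fixed-stride selection over a gridded field. The stride, field, and function name below are illustrative assumptions for this sketch, not details from the cited paper:

```python
import numpy as np

def regular_sample(field, stride):
    """Keep every `stride`-th value along each axis of a gridded field.

    Regular sampling ignores data properties entirely: the same grid
    positions are kept regardless of local variation, which is why
    sample-based reconstructions can show artifacts and discontinuities
    near sharp features.
    """
    return field[tuple(slice(None, None, stride) for _ in range(field.ndim))]

# Illustrative 2D field: a sharp circular feature on a smooth background.
y, x = np.mgrid[0:64, 0:64]
field = ((x - 32) ** 2 + (y - 32) ** 2 < 100).astype(float)

samples = regular_sample(field, 4)  # keeps a 16x16 subset of the 64x64 grid
```

A data-aware sampler would instead weight selection toward high-variation regions; the fixed stride here is exactly what the excerpt says causes artifacts.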
“…As a result, a primary requirement of such summarization process will be to preserve the salient regions (i.e., regions where multiple variables demonstrate specific trends) of the data with high fidelity such that query-driven multivariate analysis and visualization can be done efficiently using the reduced subset of the data. Scientists in the past have proposed several data reduction approaches such as distribution-based [ 6 , 10 , 15 , 16 , 17 ], image-based [ 3 , 5 ], wavelet compression based [ 18 , 19 , 20 ], data compression by functional approximations [ 21 ] etc. However, a majority of these techniques perform data reduction on each variable individually and overlook the relationship among variables.…”
Section: Introduction
confidence: 99%
“…In situ analysis algorithms may transform data into reduced representations or surrogate models in order to mitigate large data size, high dimensionality, or long computation times. Low-rank approximation (Austin et al., 2016), statistical summarization (Biswas et al., 2018; Dutta et al., 2017; Hazarika et al., 2018; Lawrence et al., 2017; Lohrmann et al., 2017; Thompson et al., 2011), topological segmentation (Gyulassy et al., 2012, 2019; Landge et al., 2014; Weber, 2013, 2014), wavelet transformation (Li et al., 2017; Salloum et al., 2018), lossy compression (Brislawn et al., 2012; Di and Cappello, 2016; Lindstrom, 2014), geometric modeling (Nashed et al., 2019; Peterka et al., 2018), and feature detection (Guo et al., 2017) may be used to generate reduced or surrogate models.…”
Section: Analysis Algorithms
confidence: 99%
“…Research is required to modify existing post hoc algorithms and develop new in situ algorithms to satisfy the needs of modern use cases on emerging system architectures that can feature massive scale, many cores, deep memory hierarchies, or embedded lightweight edge devices. Examples of such algorithms include reduced representations and low-rank approximations (Austin et al., 2016), statistical (Biswas et al., 2018; Dutta et al., 2017; Hazarika et al., 2018; Thompson et al., 2011), topological (Gyulassy et al., 2012, 2019; Landge et al., 2014; Weber, 2013, 2014), wavelets (Li et al., 2017; Salloum et al., 2018), compression (Brislawn et al., 2012; Di and Cappello, 2016; Lindstrom, 2014), and feature detection (Guo et al., 2017) methods. Surrogate models and multifidelity models can be geometric (Nashed et al., 2019; Peterka et al., 2018), statistical (Lawrence et al., 2017; Lohrmann et al., 2017), or neural network (He et al., 2019).…”
Section: In Situ Algorithms
confidence: 99%
“…To deal with large amounts of data, recent approaches employ probabilistic data summaries [7,8,12,45] to represent blocks of data as probability distributions. These approaches have been mostly limited to univariate, volumetric data.…”
Section: Introduction
confidence: 99%
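The block-wise probabilistic summarization the excerpt describes — representing each block of a volume by a probability distribution — can be sketched with per-block normalized histograms. The block size, bin count, and function name are illustrative assumptions, not the cited papers' specific parameterization:

```python
import numpy as np

def summarize_blocks(volume, block=8, bins=16):
    """Replace each block x block x block region of a 3D scalar volume
    with a normalized histogram, i.e., an empirical probability
    distribution over the field's value range.

    Returns an array of shape (nx, ny, nz, bins) of per-block histograms
    plus the shared bin edges. Each block stores `bins` values instead of
    block**3 raw values, which is the source of the data reduction.
    """
    edges = np.histogram_bin_edges(volume, bins=bins)  # shared bins for all blocks
    bx, by, bz = (s // block for s in volume.shape)
    summary = np.empty((bx, by, bz, bins))
    for i in range(bx):
        for j in range(by):
            for k in range(bz):
                chunk = volume[i * block:(i + 1) * block,
                               j * block:(j + 1) * block,
                               k * block:(k + 1) * block]
                hist, _ = np.histogram(chunk, bins=edges)
                summary[i, j, k] = hist / hist.sum()  # counts -> probabilities
    return summary, edges

rng = np.random.default_rng(0)
vol = rng.normal(size=(32, 32, 32))       # stand-in for a simulation field
summary, edges = summarize_blocks(vol)    # 4x4x4 blocks, 16 bins each
```

Post hoc analysis then samples or queries these per-block distributions instead of the raw values; extending this idea beyond a single variable is exactly the multivariate gap the surrounding excerpts point out.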