2021 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm51629.2021.00088

A Statistically-Guided Deep Network Transformation and Moderation Framework for Data with Spatial Heterogeneity


Cited by 23 publications (15 citation statements)
References 17 publications
“…Due to the increased resemblance between neighboring data samples, spatial autocorrelation violates the independence assumption. On the other hand, because the data-generating processes frequently change across space, spatial heterogeneity violates the identical-distribution assumption [109]. To overcome spatial autocorrelation and spatial heterogeneity issues, flood and non-flood points were chosen iteratively, and the set of points for which the p-value of Moran’s I for all factors was extremely close to zero and below the threshold (i.e., 0.05) was chosen to create the flood inventory map.…”
Section: Discussion
confidence: 99%
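
The Moran's I screening described in this statement can be sketched as follows. This is a minimal illustration, not the cited study's code: the k-nearest-neighbor weighting, the permutation test, and the function names are assumptions made for the sketch.

import numpy as np

# Minimal sketch of a Moran's I permutation test for one conditioning factor.
# The k-nearest-neighbor spatial weights are an illustrative choice.
def knn_weights(coords, k=8):
    """Row-standardized k-nearest-neighbor weight matrix."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    w = np.zeros_like(d)
    rows = np.arange(len(coords))[:, None]
    w[rows, np.argsort(d, axis=1)[:, :k]] = 1.0
    return w / w.sum(axis=1, keepdims=True)

def morans_i(x, w):
    """Global Moran's I: (n / S0) * (z' W z) / (z' z)."""
    z = x - x.mean()
    return len(x) / w.sum() * (z @ w @ z) / (z @ z)

def morans_p(x, w, n_perm=999, seed=0):
    """Upper-tailed permutation p-value for Moran's I."""
    rng = np.random.default_rng(seed)
    obs = morans_i(x, w)
    perm = np.array([morans_i(rng.permutation(x), w) for _ in range(n_perm)])
    return obs, (1 + np.sum(perm >= obs)) / (1 + n_perm)

A candidate set of flood and non-flood points would then be kept or resampled depending on whether the resulting p-values satisfy the 0.05 criterion described in the statement above.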
“…S* can be efficiently solved using the Linear-Time Subset Scanning (LTSS) property [Neill, 2012; Xie et al., 2021c] combined with coordinate ascent. More solution details and results on regression are available in [Xie et al., 2021a]. - Active significance testing with learning: Once the optimal S* is identified, the current node H^i_j will be temporarily split into two children H^{i+1}_{j1} and H^{i+1}_{j2}, where one child corresponds to S* and the other to the rest of the space in H^i_j.…”
Section: Statistically-guided Transformation
confidence: 99%
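
How the LTSS property keeps the search for S* linear can be sketched generically as follows. This uses the expectation-based Poisson scan statistic purely as an illustrative score; the actual score function and the coordinate-ascent loop of the cited framework are not reproduced here.

import numpy as np

def poisson_score(counts, baselines):
    """Expectation-based Poisson scan statistic for a candidate subset."""
    c, b = counts.sum(), baselines.sum()
    return c * np.log(c / b) + b - c if c > b else 0.0

def ltss_scan(counts, baselines):
    """LTSS: for scores with the LTSS property, the optimal subset is a prefix
    of the elements sorted by priority c_i / b_i, so only n subsets need to be
    scored instead of 2^n."""
    order = np.argsort(-counts / baselines)
    best_score, best_subset = 0.0, order[:0]
    for j in range(1, len(order) + 1):
        score = poisson_score(counts[order[:j]], baselines[order[:j]])
        if score > best_score:
            best_score, best_subset = score, order[:j]
    return best_subset, best_score

In the cited framework, such a scan is interleaved with model updates (coordinate ascent), and the resulting S* then feeds the significance test that decides whether H^i_j is split.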
“…Specifically, we carry out two training scenarios with and without the split (Fig. 2) and perform an upper-tailed dependent T-test on the losses from the two scenarios [Xie et al., 2021a]. For the split scenario, a new network branch will be created (e.g., adding a copy of the last L layers to the network) to allow private parameters (Fig.…”
Section: Statistically-guided Transformation
confidence: 99%
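
The paired test in this statement can be sketched as below. The per-sample loss arrays, the direction of the alternative (splitting reduces the loss), and the 0.05 level are assumptions for illustration, not details taken from the cited papers.

import numpy as np
from scipy import stats

def upper_tailed_paired_ttest(loss_no_split, loss_split):
    """Upper-tailed dependent (paired) t-test on losses from the two scenarios.
    H1: mean(loss_no_split - loss_split) > 0, i.e., the split helps."""
    d = np.asarray(loss_no_split) - np.asarray(loss_split)
    n = d.size
    t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    p_value = stats.t.sf(t_stat, df=n - 1)  # upper tail
    return t_stat, p_value

# A split (and its new network branch with private parameters) would be kept
# only if the improvement is significant, e.g., p_value < 0.05 (assumed level).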
“…Previously, we developed a spatial-variability-aware neural network (SVANN) [5, 6] that uses location-dependent weights and found that it could better model variability than one-size-fits-all (OSFA) approaches. Other methods [7, 8] learn space partitions of the heterogeneous data and develop models tailored to the resulting homogeneous regions. However, these approaches either require dense training data for each spatial domain or struggle when significant variability exists within a domain.…”
Section: Introduction
confidence: 99%
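
A location-dependent-weight layer in the spirit of SVANN can be sketched as follows. This is a minimal PyTorch illustration under the assumption that space has already been partitioned into regions with integer ids; it is not the published SVANN architecture, and the layer sizes and names are hypothetical.

import torch
import torch.nn as nn

class LocationDependentLinear(nn.Module):
    """Linear layer whose weights depend on the spatial region of each sample;
    an OSFA (one-size-fits-all) model would share a single weight set instead."""
    def __init__(self, in_dim, out_dim, n_regions):
        super().__init__()
        self.region_layers = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(n_regions)
        )

    def forward(self, x, region_ids):
        # region_ids: integer id of the spatial partition each sample falls in.
        out = torch.empty(x.size(0), self.region_layers[0].out_features,
                          device=x.device)
        for r, layer in enumerate(self.region_layers):
            mask = region_ids == r
            if mask.any():
                out[mask] = layer(x[mask])
        return out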