OCEANS 2015 - Genova (2015)
DOI: 10.1109/oceans-genova.2015.7271512
Estimation of oceanographic profiles and climatological regions in the Barents Sea

Cited by 1 publication (2 citation statements) | References 13 publications
“…Unsupervised classification methods, that is, methods that do not know a priori what the properties of these groups might be, have proven adept at identifying coherent spatial structures within climate data, even when no spatial information is supplied to the algorithm. In studies of ocean and atmospheric data, two commonly used unsupervised classification methods are k-means (Solidoro et al, 2007; Hjelmervik and Hjelmervik, 2013; 2014; Hjelmervik et al, 2015; Sonnewald et al, 2019; Houghton and Wilson, 2020; Yuchechen et al, 2020; Liu et al, 2021) and Gaussian mixture modeling (GMM) (Hannachi and O’Neill, 2001; Hannachi, 2007; Tandeo et al, 2014; Maze et al, 2017a; Jones et al, 2019; Crawford, 2020; Sugiura, 2021; Zhao et al, 2021; Fahrin et al, 2022). K-means attempts to find coherent groups by “cutting” the abstract feature space using hyperplanes, whereas GMM attempts to represent the underlying covariance structure in abstract feature space using a linear combination of multi-dimensional Gaussian functions.…”
Section: Introduction (mentioning)
confidence: 99%
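To make the contrast described in the statement above concrete, the following is a minimal sketch of k-means versus Gaussian mixture modeling for profile classification using scikit-learn. The synthetic temperature and salinity feature vectors, the water-mass labels in the comments, and the choice of three classes are illustrative assumptions, not values taken from the cited papers.

```python
# Minimal sketch contrasting k-means and Gaussian mixture modeling (GMM)
# for unsupervised classification of oceanographic profiles.
# All data below are synthetic; the number of classes (3) is an assumption.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "profiles": each row is a feature vector, e.g. temperature and
# salinity at two depths, stacked into one vector per profile.
profiles = np.vstack([
    rng.normal(loc=[4.0, 35.0, 2.0, 34.9], scale=0.3, size=(200, 4)),   # warm, saline water
    rng.normal(loc=[0.0, 34.5, -1.5, 34.8], scale=0.3, size=(200, 4)),  # cold, fresher water
    rng.normal(loc=[2.0, 34.0, 1.0, 34.2], scale=0.3, size=(200, 4)),   # intermediate water
])

# k-means: partitions feature space with hyperplane boundaries between
# centroids (hard assignments, implicitly spherical clusters).
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)

# GMM: models the data as a weighted sum of multivariate Gaussians, so each
# class carries its own mean and covariance (soft assignments are available).
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(profiles)
gmm_labels = gmm.predict(profiles)

print("k-means class sizes:", np.bincount(kmeans_labels))
print("GMM class sizes:    ", np.bincount(gmm_labels))
```

On well-separated, roughly spherical clusters like these, the two methods give similar partitions; the covariance term in the GMM matters most when classes are elongated or overlapping in feature space.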
“…The most commonly used statistical criterion is the minimum in the Bayesian information criterion (BIC; Schwarz, 1978), used in Fahrin et al (2022), Sugiura (2021), Zhao et al (2021), Jones et al (2019), Maze et al (2017b), Hjelmervik and Hjelmervik (2013, 2014), Hjelmervik et al (2015), and Sonnewald et al (2019). The BIC is comprised of two terms: a term that rewards the statistical likelihood of the model and a term that penalizes overfitting.…”
Section: Introduction (mentioning)
confidence: 99%
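As a hedged illustration of the selection criterion described in this statement, the sketch below fits Gaussian mixture models with a range of class counts and keeps the count that minimizes the BIC, BIC = k ln(n) - 2 ln(L_hat), where k is the number of free parameters, n the sample size, and L_hat the maximized likelihood. The synthetic data and the candidate range of 1 to 8 classes are assumptions made for the example, not values from the cited studies.

```python
# Minimal sketch: choose the number of GMM classes by minimizing the
# Bayesian information criterion (BIC). Lower BIC means a better trade-off
# between model likelihood (fit) and the penalty on parameter count (overfitting).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic two-feature data drawn from two underlying classes.
profiles = np.vstack([
    rng.normal(loc=[4.0, 35.0], scale=0.3, size=(300, 2)),
    rng.normal(loc=[0.0, 34.5], scale=0.3, size=(300, 2)),
])

bic_scores = []
for n_components in range(1, 9):
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(profiles)
    bic_scores.append(gmm.bic(profiles))

best = int(np.argmin(bic_scores)) + 1  # +1 because the range starts at 1 class
print("BIC by number of classes:", np.round(bic_scores, 1))
print("Selected number of classes:", best)
```

With cleanly separated synthetic data the minimum falls at the true class count; on real hydrographic profiles the BIC curve is often flatter, which is why the cited studies treat the minimum as a guide rather than an absolute answer.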