2013
DOI: 10.1016/j.jmva.2011.05.015

The multilinear normal distribution: Introduction and some basic properties

Abstract: In this paper, the multilinear normal distribution is introduced as an extension of the matrix-variate normal distribution. Basic properties such as marginal and conditional distributions, moments, and the characteristic function are also presented. A trilinear example is used to explain the general results at a simpler level. The estimation of parameters using a flip-flop algorithm is also briefly discussed.
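A compact way to state the object the abstract refers to (in notation of our own choosing, which may differ from the paper's ordering conventions): a k-way array X follows the multilinear (array) normal distribution with Kronecker-separable covariance exactly when its vectorisation is multivariate normal.

```latex
% Hedged sketch of a common definition of the multilinear (tensor) normal
% distribution; the paper's own notation and factor ordering may differ.
\[
  \mathcal{X} \sim \mathcal{N}_{p_1 \times \cdots \times p_k}
      (\mathcal{M};\, \Sigma_1, \ldots, \Sigma_k)
  \iff
  \operatorname{vec}(\mathcal{X}) \sim
      \mathcal{N}_{p_1 \cdots p_k}\bigl(\operatorname{vec}(\mathcal{M}),\;
      \Sigma_k \otimes \cdots \otimes \Sigma_1\bigr),
\]
% where each \Sigma_i is a p_i x p_i positive definite matrix describing the
% covariance along the i-th mode. For k = 2 this reduces to the matrix-variate
% normal distribution; k = 3 is the trilinear case used as the paper's running example.
```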

Cited by 52 publications (45 citation statements); references 31 publications (38 reference statements).
“…One of the most important applications of multiway tensor analysis and multilinear distributions is magnetic resonance imaging (MRI) (we refer to [46] and the references therein). For multiway arrays, we often use multilinear (array or tensor) normal distributions that correspond to the multivariate normal (Gaussian) distributions in (110) and (111) with common means µ_1 = µ_2 and separable (Kronecker structured) covariance matrices: P…”
Section: Multiway Divergences For Multivariate Normal Distributions (mentioning)
confidence: 99%
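To make the "separable (Kronecker structured) covariance" in the excerpt above concrete, here is a minimal NumPy sketch of our own (not code from the cited works): it builds the Kronecker-product covariance of a small 3-way array-normal variate and draws a sample by reshaping a multivariate normal vector. The function name `sample_array_normal` is hypothetical.

```python
import numpy as np

def sample_array_normal(mean, sigmas, rng=np.random.default_rng(0)):
    """Draw one p1 x ... x pk array with separable (Kronecker) covariance.

    mean   : array of shape (p1, ..., pk)
    sigmas : list [Sigma_1, ..., Sigma_k]; Sigma_i is p_i x p_i positive definite
    """
    dims = [s.shape[0] for s in sigmas]
    # NumPy's ravel() is row-major (C-order), so the covariance of the flattened
    # array is Sigma_1 ⊗ Sigma_2 ⊗ ... ⊗ Sigma_k for this flattening convention
    # (the reverse of the column-stacking vec convention).
    cov = sigmas[0]
    for s in sigmas[1:]:
        cov = np.kron(cov, s)
    vec = rng.multivariate_normal(mean.ravel(), cov)
    return vec.reshape(dims)

# Trilinear (3-way) example with modest dimensions.
p = (2, 3, 4)
sigmas = [np.eye(d) + 0.3 * np.ones((d, d)) for d in p]   # toy separable factors
X = sample_array_normal(np.zeros(p), sigmas)
print(X.shape)   # (2, 3, 4)
```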
“…The computations required are typically not too onerous, since, for example, the Hessian matrix is (v + 1) × (v + 1) (i.e., of order log n by log n), but there is quite complicated non-linearity involved in the definition of the QMLE, and so it is not so easy to analyse from a theoretical point of view. See Singull et al. (2012) and Ohlson et al. (2013) for discussion of estimation algorithms in the case where the data are multi-array and v is of low dimension.…”
Section: The Quasi-maximum Likelihood Estimator (mentioning)
confidence: 99%
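The flip-flop-style algorithms referenced here (and in the paper's abstract) alternate between the Kronecker factors, updating one while the others are held fixed. A minimal sketch for the two-factor (matrix-variate) case, assuming N i.i.d. zero-mean p×q observations, might look as follows; this is our own illustration of the general idea, not the exact algorithm of Singull et al. (2012) or Ohlson et al. (2013), and the function name `flip_flop_matrix_normal` is hypothetical.

```python
import numpy as np

def flip_flop_matrix_normal(X, n_iter=50):
    """Alternating MLE-style updates for a Kronecker-separable covariance.

    X : array of shape (N, p, q); each observation is assumed zero-mean with
        Cov(vec X_n) = Sigma_col ⊗ Sigma_row, a scale being identified only
        for the product (hence the normalisation below).
    """
    N, p, q = X.shape
    sigma_row, sigma_col = np.eye(p), np.eye(q)
    for _ in range(n_iter):
        # Update the row covariance given the column covariance:
        # Sigma_row = (1 / (N q)) * sum_n X_n Sigma_col^{-1} X_n^T
        inv_col = np.linalg.inv(sigma_col)
        sigma_row = np.einsum('nij,jk,nlk->il', X, inv_col, X) / (N * q)
        # Update the column covariance given the row covariance:
        # Sigma_col = (1 / (N p)) * sum_n X_n^T Sigma_row^{-1} X_n
        inv_row = np.linalg.inv(sigma_row)
        sigma_col = np.einsum('nji,jk,nkl->il', X, inv_row, X) / (N * p)
        # Fix the scale indeterminacy: rescale so that trace(Sigma_col) = q.
        scale = np.trace(sigma_col) / q
        sigma_col /= scale
        sigma_row *= scale
    return sigma_row, sigma_col
```

Each sweep is a block update of one factor with the other held fixed; since only the product Σ_col ⊗ Σ_row carries the overall scale, one factor is rescaled after every sweep to keep the two factors individually interpretable.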
“…There is also a growing Bayesian and Frequentist literature on multiway array or tensor datasets, where this structure is commonly employed. See for example Akdemir and Gupta (2011), Allen (2012), Browne, MacCallum, Kim, Andersen, and Glaser (2002), Cohen, Usevich, and Comon (2016), Constantinou, Kokoszka, and Reimherr (2015), Dobra (2014), Fosdick and Hoff (2014), Gerard and Hoff (2015), Hoff (2011), Hoff (2015), Hoff (2016), Krijnen (2004), Leiva and Roy (2014), Leng and Tang (2012), Li and Zhang (2016), Manceur and Dutilleul (2013), Ning and Liu (2013), Ohlson, Ahmad, and von Rosen (2013), Singull, Ahmad, and von Rosen (2012), Volfovsky and Hoff (2014), Volfovsky and Hoff (2015), and Yin and Li (2012). In both these (apparently separate) literatures the dimension n is fixed and typically there are a small number of products, each of whose dimension is of fixed but perhaps moderate size.…”
Section: Introduction (mentioning)
confidence: 99%
“…For example, Basser and Pajevic (2003) argued for the need to go from the vectorial treatment of some complex data sets to tensor treatment in order to avoid wrong or inefficient conclusions. The Bayesian and the likelihood-based approaches are the most used techniques to obtain estimators of unknown parameters in the tensor normal model; see for example Hoff (2011) and Ohlson et al. (2013). For the third-order tensor normal distribution the estimators can be found using the MLE-3D algorithm by Manceur and Dutilleul (2013) or similar algorithms such as the one proposed by Singull et al. (2012).…”
Section: It Follows That (mentioning)
confidence: 99%
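For completeness, the two-factor sketch above extends to third-order (and higher) arrays by cycling over the modes with mode-k matricisation. The following is our own rough illustration of that alternating scheme under the same zero-mean, i.i.d. assumptions; it is not a reproduction of the MLE-3D algorithm of Manceur and Dutilleul (2013), and the function name is hypothetical.

```python
import numpy as np

def flip_flop_tensor_normal(X, n_iter=50):
    """Alternating updates for a K-mode separable covariance (assumes K >= 2).

    X : array of shape (N, p_1, ..., p_K); each observation is assumed zero-mean
        with Cov(X[i_1,...,i_K], X[j_1,...,j_K]) = Sigma_1[i_1,j_1] * ... * Sigma_K[i_K,j_K].
    """
    N, *dims = X.shape
    K = len(dims)
    sigmas = [np.eye(p) for p in dims]
    for _ in range(n_iter):
        for k in range(K):
            # Inverse Kronecker product of the other factors, ordered to match a
            # C-order mode-k unfolding (remaining modes keep their original order).
            others = [sigmas[m] for m in range(K) if m != k]
            w_inv = np.linalg.inv(others[0])
            for s in others[1:]:
                w_inv = np.kron(w_inv, np.linalg.inv(s))
            # Mode-k unfolding of every observation: shape (N, p_k, prod of the rest).
            Xk = np.moveaxis(X, k + 1, 1).reshape(N, dims[k], -1)
            # Sigma_k = (1 / (N * prod_{m != k} p_m)) * sum_n Xk_n W^{-1} Xk_n^T
            sigmas[k] = np.einsum('nij,jl,nml->im', Xk, w_inv, Xk) / (N * w_inv.shape[0])
    # Note: only the full Kronecker product is identified; individual factors are
    # determined up to positive scalings (left unnormalised in this sketch).
    return sigmas
```

Only the overall Kronecker product of the factors is identified, so in practice one would normalise the individual factors after each sweep, as in the two-factor sketch above.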