2017
DOI: 10.1017/s026646661700007x
Identification of Joint Distributions in Dependent Factor Models

Abstract: This paper studies linear factor models that have arbitrarily dependent factors. Assuming that the coefficients are known and that their matrix representation satisfies rank conditions, we identify the nonparametric joint distribution of the unobserved factors using first- and then second-order partial derivatives of the log characteristic function of the observed variables. In conjunction with these identification strategies, the mean and variance of the vector of factors are identified. The main result provide…
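The identification strategy sketched in the abstract rests on a standard fact: derivatives of the log characteristic function at the origin encode moments. As a minimal numerical illustration (not the paper's estimator; the Gaussian example and finite-difference scheme are assumptions made here), the first and second derivatives of the log CF recover the mean and variance:

```python
import numpy as np

# For a random variable Y, the log characteristic function
# psi(t) = log E[exp(i t Y)] satisfies psi'(0) = i*E[Y] and
# psi''(0) = -Var(Y).  We check this for Y ~ N(mu, sigma^2),
# whose log CF is known in closed form:
# psi(t) = i*mu*t - 0.5*sigma^2*t^2.
mu, sigma = 1.5, 2.0

def log_cf(t):
    return 1j * mu * t - 0.5 * sigma**2 * t**2

h = 1e-4
# central finite differences at t = 0
d1 = (log_cf(h) - log_cf(-h)) / (2 * h)                  # ~ i*mu
d2 = (log_cf(h) - 2 * log_cf(0.0) + log_cf(-h)) / h**2   # ~ -sigma^2

mean_hat = d1.imag   # recovers the mean, 1.5
var_hat = -d2.real   # recovers the variance, 4.0
print(mean_hat, var_hat)
```

Because the Gaussian log CF is quadratic in t, the central differences here are exact up to floating-point error; for an empirical CF the same derivatives would be estimated from data.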

Cited by 10 publications (3 citation statements)
References 51 publications
“…a) at most one source distribution is Gaussian (see e.g. Comon, 1994; Hyvarinen et al., 2001; Eriksson and Koivunen, 2004, Th 5, iii; Chan et al., 2006; Ben‐Moshe, 2018); b) the moments exist up to order three, four or six (see, e.g., Comon and De Lathauwer, 2010; Erickson et al., 2014); c) the characteristic functions of sources have no exponential factor (Eriksson and Koivunen, 2004, Th 5, ii, iv); d) restrictions on the support or exclusion restrictions are imposed (Williams, 2020; Ben‐Moshe, 2021). Moreover, the number of sources can be assumed upper bounded (De Lathauwer, 2008; Bonhomme and Robin, 2010; Zinde‐Walsh, 2013, Section 4.2.2.3), or Bayesian approaches can be used (Klepper and Leamer, 1984; Leonard, 2011).…”
Section: Identification in a Multi‐variate System
“…One may proceed further to the estimation of the density function itself in Step 4. Step 4: If ε1,t is a continuous variable, its density can be obtained through a kernel-regularized Fourier transform, so as to have a well-posed inverse problem (Ben‐Moshe, 2018, eq. 22 and p. 150):…”
Section: Non‐parametric Estimation of Distributions of Sources
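The kernel-regularized Fourier inversion mentioned in this citation can be sketched generically (this is not Ben‐Moshe's eq. 22; the spectral-cutoff kernel, sample distribution, and grid choices below are illustrative assumptions): invert the empirical characteristic function while damping high frequencies so the inverse problem stays well-posed.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(size=5000)  # draws whose density we pretend not to know

def density_estimate(x, sample, T=4.0, n_grid=401):
    """Invert the empirical characteristic function with a spectral
    cutoff at |t| <= T -- the simplest regularizing kernel -- which
    keeps the deconvolution-type inverse problem well-posed."""
    t = np.linspace(-T, T, n_grid)
    ecf = np.exp(1j * np.outer(t, sample)).mean(axis=1)  # empirical CF
    dt = t[1] - t[0]
    integrand = np.exp(-1j * t * x) * ecf
    return float(np.real(integrand).sum() * dt / (2 * np.pi))

est = density_estimate(0.0, sample)
# the true N(0,1) density at 0 is 1/sqrt(2*pi) ~ 0.3989; est should be close
print(est)
```

Without the cutoff T, the empirical CF's noise at high frequencies would be amplified by the inversion; the cutoff trades a small smoothing bias for stability, which is the point of the regularization step in the quoted passage.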
“…Lastly, note that the methods introduced in this paper naturally deliver counterparts to the Mallows algorithm for other models beyond deconvolution, such as general linear independent factor models.⁸ Strictly speaking, Mallows (2007) […] for all i = 1, ..., N at the end of step s, and then applies the random permutation σ^(s+1) to the new X^(s+1) values. This difference with the algorithm outlined here turns out to be immaterial, since the composition of σ^(s+1) and σ^(s) is also a random permutation of {1, ..., N}.…”
Section: Comparison to Mallows (2007)
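The footnote's closing claim — that composing a fresh uniform random permutation with any fixed permutation again yields a uniform random permutation — can be verified by brute force for a toy size (this check is an illustration written here, not the Mallows or deconvolution code itself):

```python
import itertools

N = 3
fixed = (1, 2, 0)  # an arbitrary sigma_s
counts = {}
# Enumerate every equally likely fresh permutation sigma_{s+1}
# and tally the composition sigma_{s+1} o sigma_s.
for fresh in itertools.permutations(range(N)):
    composed = tuple(fresh[fixed[i]] for i in range(N))
    counts[composed] = counts.get(composed, 0) + 1

# Every permutation of {0, 1, 2} appears exactly once, i.e. the
# composition is again uniform over all N! permutations.
print(sorted(counts.values()))  # [1, 1, 1, 1, 1, 1]
```

The same argument holds for any N: composition with a fixed bijection is itself a bijection on the permutation group, so uniformity is preserved — which is why the two variants of the algorithm coincide.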