2020
DOI: 10.31234/osf.io/zg84s
Preprint

Lack of theory building and testing impedes progress in the factor and network literature

Abstract: The last decade has brought reforms to improve methodological practices, with the goal to increase the reliability and replicability of effects. However, explanations of effects remain scarce, and a growing chorus of scholars argues that the replicability crisis has distracted from a crisis of theory. In the same decade, the empirical literature using factor and network models has grown rapidly. I discuss three ways in which this literature falls short of theory building and testing. First, statistical and the…

Cited by 113 publications (145 citation statements)
References 115 publications (175 reference statements)
“…Failures to replicate can not only be detected but also explained, and perhaps even drive theory creation as opposed to just theory rejection. Thus, building a theory explicitly as laid out in Figure 2, even if based on some hypo- and p-hacking, means that once a phenomenon is detected we ascend the path and spend time formalising our account (e.g., Fried, 2020). Using the procedure described by our path model, which asks for formalisation using specifications and implementations (or indeed anything more meta than an individual study; see Head et al., 2015), "sins" out of individual scientists' control, such as questionable research practices (QRPs; see John, Loewenstein, & Prelec, 2012) committed by other labs, or publication bias committed by the system as a whole, can be both discovered and controlled for in many cases.…”

Section: What Our Path Function Model Offers
Confidence: 99%
“…The network models served not only as a reanalysis of the data and component structure assessed by Freed et al. (2017), but also demonstrated how these techniques can be used to study higher-order cognitive abilities and the processes underlying them. As such, rather than proposing a common cause underlying measures of the same construct, as is the case with reflective LVMs, network theory suggests that interactions between individual processes are the driving force behind associations among parallel measures (Fried, 2020; Epskamp, Borsboom, & Fried, 2017; Epskamp & Fried, 2018). This is consistent with POT, which proposes that the positive manifold can be attributed to a sampling of shared domain-general processes (Conway & Kovacs, 2013; Kovacs & Conway, 2016).…”

Section: Discussion
Confidence: 85%
“…However, the primary issue with… A second issue was the choice of Freed et al. (2017) to employ PCA. This analysis is not an appropriate statistical technique for making theoretical arguments or proposing theoretical solutions to a dataset (Tabachnick & Fidell, 2013), and a problematic measurement model can lead to serious issues with eventual model interpretation (Fried, 2020; Rhemtulla et al., 2019). In fact, PCA reduces the dimensionality of a dataset with no regard to the underlying latent structure of the variables (Osborne, 2014).…”

Section: Discussion
Confidence: 99%