Confidence intervals (CIs) are an alternative to null hypothesis significance testing (NHST) (Nickerson 2000; Wood 2005); both techniques essentially convey information about how certain we are of an estimate. A CI consists of two values, a lower and an upper confidence limit, associated with a confidence level, typically 95%. Valid CIs should satisfy two important conditions (DiCiccio and Efron 1996). First, a CI should contain the population value of the parameter under estimation with the stated degree of confidence over a large number of repeated samples. For example, a 95% CI should contain the population value of a parameter 95% of the time, with the true value of the parameter falling outside the interval in only 5% of cases. Second, in those cases where the true value of the parameter falls beyond the boundaries of the interval, it should do so in a balanced way. Using the 95% CI again as an example, the population value should be higher than the upper boundary in 2.5% of the samples and lower than the lower boundary in 2.5% of the samples.

We illustrate these two properties of CIs in Figure A1. The figure shows 250 correlation estimates drawn from three different populations (where the true value of the correlation is 0, 0.3, or 0.6, respectively), ordered from smallest to largest, together with their 95% CIs. In this scenario, the population value falls outside the CI only about 5% of the time and does so in a balanced way, such that the population value lies above the CI 2.5% of the time and below the CI 2.5% of the time. The figure also shows that both the variance of the estimates and the width of the CIs depend on the population value of the correlation: when the population correlation is zero, the difference between the largest and smallest estimate is close to 0.5, but when the population value is 0.6 this difference decreases to about 0.35. Similarly, the CIs are narrower for larger estimates. This is an important feature of CIs that unfortunately complicates their calculation, as we discuss later.

The CI of a correlation has a valid closed-form solution, but estimating CIs for more complex scenarios is a non-trivial problem. The most straightforward way to estimate CIs is to use a known theoretical distribution; we refer to these as parametric approaches. When the distribution of the estimates is not known, as is the case with estimates obtained from PLSc, CIs based on bootstrapping provide an attractive alternative (Wood 2005). In these approaches, which we refer to as empirical, the endpoints of the CIs are not taken from a known statistical distribution; rather, the values are obtained from the empirically approximated bootstrap distribution. Bootstrapping means that we draw a large number of samples from our original data and calculate the statistic of interest for each sample. The samples are drawn with replacement, which means that each observation in the original sample can be included in each bootstrap sample multiple times. Bootstrapping is particularly useful when working with statistics whose sampling distributions are not known.
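The resampling procedure just described is straightforward to sketch in code. The following is a minimal illustration of a percentile bootstrap CI, not the implementation used in the paper: it assumes a simple Pearson correlation as the statistic of interest, and the function name, number of bootstrap samples, and simulated data are all hypothetical.

```python
import numpy as np

def percentile_bootstrap_ci(x, y, n_boot=5000, level=0.95, seed=0):
    """Percentile bootstrap CI for the Pearson correlation of x and y."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample rows with replacement
        stats[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    alpha = 1 - level
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Example: simulated data with a true correlation of about 0.3
rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = 0.3 * x + rng.normal(scale=np.sqrt(1 - 0.3**2), size=100)
print(percentile_bootstrap_ci(x, y))
```

Note that the endpoints come directly from the quantiles of the empirical bootstrap distribution, rather than from any theoretical reference distribution, which is what distinguishes this empirical approach from the parametric ones described above.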
Lack of careful consideration of common method effects in empirical research can have several negative consequences for the interpretation of research outcomes, such as biased estimates of the validity and reliability of the measures employed, as well as bias in the estimates of the relationships between the constructs of interest, which in turn can affect hypothesis testing. Taken together, these consequences make it very difficult to interpret results that are affected by substantial common method effects. The literature offers several preventive, detective, and corrective techniques that can be employed to assuage concerns about the possibility of common method effects underlying observed results. Among these, the most popular has been Harman's single-factor test. Though researchers have argued against its effectiveness in the past, the technique has remained very popular in the discipline. Moreover, there is a dearth of empirical evidence on the actual effectiveness of the technique, which we sought to remedy with this research. Our results, based on extensive Monte Carlo simulations, indicate that the approach shows limited effectiveness in detecting the presence of common method effects and may thus be providing a false sense of security to researchers. We therefore argue against the continued use of the technique and provide evidence to support our position.
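For readers unfamiliar with the technique, Harman's single-factor test is conventionally operationalized by entering all study items into a single unrotated factor analysis and checking whether one factor accounts for the majority (often more than 50%) of the total variance. The sketch below illustrates that heuristic using the first principal component of the item correlation matrix as a common stand-in for the unrotated first factor; the function name and simulated data are hypothetical, and this is not the simulation code used in the study.

```python
import numpy as np

def harman_single_factor_share(items):
    """Proportion of total variance captured by the first principal
    component of the item correlation matrix. The conventional heuristic
    flags common method variance as a concern when a single factor
    accounts for the majority (> 50%) of the variance."""
    corr = np.corrcoef(items, rowvar=False)   # items: (n_obs, n_items)
    eigenvalues = np.linalg.eigvalsh(corr)    # sorted ascending
    return eigenvalues[-1] / eigenvalues.sum()

# Example with simulated survey responses (200 respondents, 12 items)
rng = np.random.default_rng(1)
items = rng.normal(size=(200, 12))
share = harman_single_factor_share(items)
print(f"First factor explains {share:.1%} of total variance")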
Partial least squares (PLS) is one of the most popular analytical techniques employed in the information systems field. In recent years, researchers have begun to revisit commonly used rules of thumb about the minimum sample sizes required to obtain reliable estimates for the parameters of interest in structural research models. Of particular importance in this regard is the a priori assessment of statistical power, which provides valuable information for the design and planning of research studies. Though the importance of conducting such analyses has been recognized for quite some time, a review of the empirical research employing PLS indicates that they are not regularly conducted or reported. One likely reason is the lack of software support for these analyses in popular PLS packages. In this tutorial, we address this issue by providing the steps and code necessary to easily conduct such analyses. We also provide guidance on reporting the results.
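As a rough illustration of the kind of a priori power analysis the tutorial covers, the sketch below estimates power by Monte Carlo simulation for a single standardized path. It is a simplified stand-in rather than the tutorial's actual code, and the function name, effect size, and sample size are illustrative only.

```python
import numpy as np
from scipy.stats import t as t_dist

def simulated_power(beta, n, n_sims=2000, alpha=0.05, seed=0):
    """Monte Carlo power for detecting a standardized effect of size
    beta with sample size n, using a two-tailed t test on the
    correlation (equivalent to a standardized simple-regression slope)."""
    rng = np.random.default_rng(seed)
    t_crit = t_dist.ppf(1 - alpha / 2, df=n - 2)
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(scale=np.sqrt(1 - beta**2), size=n)
        r = np.corrcoef(x, y)[0, 1]
        t_stat = r * np.sqrt((n - 2) / (1 - r**2))
        hits += abs(t_stat) > t_crit
    return hits / n_sims

# Example: power to detect a path of 0.3 with a sample of 100
print(simulated_power(beta=0.3, n=100))
```

Run a priori, this kind of simulation answers the planning question directly: if the power estimate falls below the conventional 0.80 target, the sample size can be increased before data collection begins.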
Transforming variables before analysis or applying a transformation as part of a generalized linear model are common practices in organizational research. Several methodological articles addressing the topic, either directly or indirectly, have been published in the recent past. In this article, we point out a few misconceptions about transformations and propose a set of eight simple guidelines for addressing them. Our main argument is that transformations should not be chosen based on the nature or distribution of individual variables, but rather based on the functional form of the relationship between two or more variables that is expected from theory or discovered empirically. Building on a systematic review of six leading management journals, we point to several ways in which the specification and interpretation of nonlinear models can be improved.
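To make the argument concrete, the sketch below shows a case where the transformation follows from the functional form of the relationship rather than from the distribution of either variable: a power-law relationship becomes linear after a log-log transformation, and the regression slope recovers the exponent. The data and parameter values are simulated purely for illustration.

```python
import numpy as np

# A power-law relationship, y = a * x^b * error, looks nonlinear in the
# raw metric but is linear after taking logs of both variables. The
# transformation is chosen because of this functional form, not because
# of the shape of either variable's marginal distribution.
rng = np.random.default_rng(7)
x = rng.uniform(1, 10, size=500)
y = 2.0 * x ** 0.5 * rng.lognormal(sigma=0.1, size=500)

# OLS fit of log(y) on log(x); the slope estimates the exponent b.
slope, intercept = np.polyfit(np.log(x), np.log(y), deg=1)
print(f"estimated exponent b = {slope:.2f} (true value 0.5)")
print(f"estimated scale a = {np.exp(intercept):.2f} (true value 2.0)")
```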
Researchers in a number of disciplines, including Information Systems, have argued that much of past research may have incorrectly specified the relationship between latent variables and indicators as reflective when an understanding of a construct and its measures indicates that a formative specification would have been warranted. Coupled with the posited severe biasing effects of construct misspecification on structural parameters, these two assertions would lead to the conclusion that an important portion of our literature is largely invalid. While we do not delve into the issue of when one specification should be employed over the other, our work contends that construct misspecification, with one particular exception, does not lead to severely biased estimates. We argue, and show through extensive simulations, that a lack of attention to the metric in which relationships are expressed is responsible for the current belief in the negative effects of misspecification.