The factor analysis literature includes a range of recommendations regarding the minimum sample size necessary to obtain factor solutions that are adequately stable and that correspond closely to population factors. A fundamental misconception about this issue is that the minimum sample size, or the minimum ratio of sample size to the number of variables, is invariant across studies. In fact, necessary sample size is dependent on several aspects of any given study, including the level of communality of the variables and the level of overdetermination of the factors. The authors present a theoretical and mathematical framework that provides a basis for understanding and predicting these effects. The hypothesized effects are verified by a sampling study using artificial data. Results demonstrate the lack of validity of common rules of thumb and provide a basis for establishing guidelines for sample size in factor analysis.

In the factor analysis literature, much attention has been given to the issue of sample size. It is widely understood that the use of larger samples in applications of factor analysis tends to provide results such that sample factor loadings are more precise estimates of population loadings and are also more stable, or less variable, across repeated sampling. Despite general agreement on this matter, there is considerable divergence of opinion and evidence about the question of how large a sample is necessary to adequately achieve these objectives. Recommendations and findings about this issue are diverse and often contradictory. The objectives of this article are to provide a
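To make the sampling-study logic concrete, the following minimal Python sketch draws samples of increasing size from an assumed population factor model (3 factors, 12 variables, uniform communalities, simple structure) and measures how closely the sample loadings recover the population loadings. The model, sample sizes, and congruence criterion are illustrative assumptions, not the design actually used in the article's study.

```python
# Minimal Monte Carlo sketch of how sample size affects recovery of
# population factor loadings. The population model here is an illustrative
# assumption, not the article's simulation design.
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

def make_population_loadings(n_vars=12, n_factors=3, communality=0.6):
    """Simple structure: each variable loads on exactly one factor."""
    loadings = np.zeros((n_vars, n_factors))
    for j in range(n_vars):
        loadings[j, j % n_factors] = np.sqrt(communality)
    return loadings

def sample_data(loadings, n):
    """Draw n observations from the common factor model."""
    uniqueness = 1.0 - (loadings ** 2).sum(axis=1)
    factors = rng.standard_normal((n, loadings.shape[1]))
    errors = rng.standard_normal((n, loadings.shape[0])) * np.sqrt(uniqueness)
    return factors @ loadings.T + errors

def congruence(a, b):
    """Tucker's coefficient of congruence, per factor."""
    num = np.abs(np.sum(a * b, axis=0))
    den = np.sqrt(np.sum(a ** 2, axis=0) * np.sum(b ** 2, axis=0))
    return num / den

pop = make_population_loadings()
for n in (50, 100, 200, 400):
    x = sample_data(pop, n)
    fa = FactorAnalysis(n_components=3, random_state=0).fit(x)
    # Sample loadings are identified only up to rotation/sign, so align
    # them to the population loadings before assessing recovery.
    rotation, _ = orthogonal_procrustes(fa.components_.T, pop)
    aligned = fa.components_.T @ rotation
    print(n, congruence(aligned, pop).mean().round(3))
```

Under this kind of setup, mean congruence typically rises toward 1.0 as the sample size grows, and the rate of improvement depends on the assumed communality and the variables-per-factor ratio, which is the dependence the article formalizes.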
The authors examine the practice of dichotomization of quantitative measures, wherein relationships among variables are examined after 1 or more variables have been converted to dichotomous variables by splitting the sample at some point on the scale(s) of measurement. A common form of dichotomization is the median split, where the independent variable is split at the median to form high and low groups, which are then compared with respect to their means on the dependent variable. The consequences of dichotomization for measurement and statistical analyses are illustrated and discussed. The use of dichotomization in practice is described, and justifications that are offered for such usage are examined. The authors present the case that dichotomization is rarely defensible and often will yield misleading results.

We consider here some simple statistical procedures for studying relationships of one or more independent variables to one dependent variable, where all variables are quantitative in nature and are measured on meaningful numerical scales. Such measures are often referred to as individual-differences measures, meaning that observed values of such measures are interpretable as reflecting individual differences on the attribute of interest. It is of course straightforward to analyze such data using correlational methods. In the case of a single independent variable, one can use simple linear regression and/or obtain a simple correlation coefficient. In the case of multiple independent variables, one can use multiple regression, possibly including interaction terms. Such methods are routinely used in practice.

However, another approach to analysis of such data is also rather widely used. Considering the case of one independent variable, many investigators begin by converting that variable into a dichotomous variable by splitting the scale at some point and designating individuals above and below that point as defining two separate groups. One common approach is to split the scale at the sample median, thereby defining high and low groups on the variable in question; this approach is referred to as a median split. Alternatively, the scale may be split at some other point based on the data (e.g., 1 standard deviation above the mean) or at a fixed point on the scale designated a priori. Researchers may dichotomize independent variables for many reasons: for example, because they believe there exist distinct groups of individuals or because they believe analyses or presentation of results will be simplified. After such dichotomization, the independent variable is treated as a categorical variable and statistical tests then are carried out to determine whether there is a significant difference in the mean of the dependent variable for the two groups represented by the dichotomized independent variable. When there are two independent variables, researchers often dichotomize both and then analyze effects on the dependent variable using analysis of variance (ANOVA). There is a considerable methodological literature exam...
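As a concrete illustration of what a median split does to such an analysis, the sketch below simulates a quantitative predictor and outcome with a known linear relationship and then analyzes the same data twice, once with the continuous predictor and once after dichotomizing at the sample median. The data-generating values are illustrative assumptions, not taken from the article.

```python
# Illustrative sketch (not from the article) of the information loss from a
# median split: the same simulated data are analyzed with the continuous
# predictor and with a high/low dichotomy formed at the sample median.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
x = rng.standard_normal(n)                 # quantitative independent variable
y = 0.4 * x + rng.standard_normal(n)       # dependent variable, true slope 0.4

# Analysis 1: keep x continuous (simple correlation / regression)
r_continuous, p_continuous = stats.pearsonr(x, y)

# Analysis 2: median split, then compare the two group means with a t test
is_high = x >= np.median(x)
t, p_split = stats.ttest_ind(y[is_high], y[~is_high])

# Point-biserial correlation for the dichotomized predictor, for comparison
r_dichotomized, _ = stats.pearsonr(is_high.astype(float), y)

print(f"continuous r   = {r_continuous:.3f} (p = {p_continuous:.4f})")
print(f"dichotomized r = {r_dichotomized:.3f} (t-test p = {p_split:.4f})")
# For a median split of a normally distributed predictor under a linear
# model, the correlation is attenuated by roughly sqrt(2/pi), i.e. to
# about 80% of its continuous value, with a corresponding loss of power.
```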
Background: The coronavirus disease 2019 (COVID-19) outbreak originating in Wuhan, Hubei province, China, coincided with chunyun, the period of mass migration for the annual Spring Festival. To contain its spread, China adopted unprecedented nationwide interventions on January 23, 2020. These policies included large-scale quarantine, strict controls on travel, and extensive monitoring of suspected cases. However, it is unknown whether these policies have had an impact on the epidemic. We sought to show how these control measures impacted the containment of the epidemic. Methods: We integrated population migration data before and after January 23 and the most up-to-date COVID-19 epidemiological data into the Susceptible-Exposed-Infectious-Removed (SEIR) model to derive the epidemic curve. We also used an artificial intelligence (AI) approach, trained on the 2003 SARS data, to predict the epidemic. Results: We found that the epidemic in China should peak by late February, showing a gradual decline by the end of April. A five-day delay in implementation would have increased the epidemic size in mainland China three-fold. Lifting the Hubei quarantine would lead to a second epidemic peak in Hubei province in mid-March and extend the epidemic to late April, a result corroborated by the machine learning prediction. Conclusions: Our dynamic SEIR model was effective in predicting the COVID-19 epidemic peaks and sizes. The implementation of control measures on January 23, 2020 was indispensable in reducing the eventual COVID-19 epidemic size.
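For readers unfamiliar with the model class, the following is a minimal SEIR sketch of the kind of compartmental model used in the study. All parameter values (population size, transmission rate, incubation and infectious periods, initial conditions) are placeholders rather than the fitted values from the paper, and the actual analysis additionally incorporated migration data and time-varying interventions.

```python
# Minimal SEIR sketch; parameter values are placeholders, not the paper's
# fitted estimates.
import numpy as np
from scipy.integrate import solve_ivp

N = 11_000_000        # population (placeholder)
beta = 0.6            # transmission rate; interventions act by lowering beta
sigma = 1 / 5.2       # 1 / incubation period (days^-1)
gamma = 1 / 7.0       # 1 / infectious period (days^-1)

def seir(t, y):
    s, e, i, r = y
    ds = -beta * s * i / N
    de = beta * s * i / N - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return [ds, de, di, dr]

y0 = [N - 100, 0, 100, 0]                    # start with 100 infectious cases
sol = solve_ivp(seir, (0, 180), y0, t_eval=np.arange(0, 181))

peak_day = sol.t[np.argmax(sol.y[2])]
print(f"epidemic peak (infectious compartment) around day {peak_day:.0f}")
# Control measures such as the January 23 quarantine are typically modeled
# by reducing beta from that date onward and comparing the resulting curves,
# e.g. to estimate the cost of a five-day delay in implementation.
```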
The question of how one should decide among competing explanations of data is at the heart of the scientific enterprise. Computational models of cognition are increasingly being advanced as explanations of behavior. The success of this line of inquiry depends on the development of robust methods to guide the evaluation and selection of these models. This article introduces a method of selecting among mathematical models of cognition known as minimum description length, which provides an intuitive and theoretically well-grounded understanding of why one model should be chosen. A central but elusive concept in model selection, complexity, can also be derived with the method. The adequacy of the method is demonstrated in 3 areas of cognitive modeling: psychophysics, information integration, and categorization.

How should one choose among competing theoretical explanations of data? This question is at the heart of the scientific enterprise, regardless of whether verbal models are being tested in an experimental setting or computational models are being evaluated in simulations. A number of criteria have been proposed to assist in this endeavor, summarized nicely by Jacobs and Grainger (1994). They include (a) plausibility (are the assumptions of the model biologically and psychologically plausible?); (b) explanatory adequacy (is the theoretical explanation reasonable and consistent with what is known?); (c) interpretability (do the model and its parts, e.g., parameters, make sense? are they understandable?); (d) descriptive adequacy (does the model provide a good description of the observed data?); (e) generalizability (does the model predict well the characteristics of data that will be observed in the future?); and (f) complexity (does the model capture the phenomenon in the least complex, i.e., simplest, possible manner?).

The relative importance of these criteria may vary with the types of models being compared. For example, verbal models are likely to be scrutinized on the first three criteria just as much as the last three to thoroughly evaluate the soundness of the models and their assumptions. Computational models, on the other hand, may have already satisfied the first three criteria to a certain level of acceptability earlier in their evolution, leaving the last three criteria to be the primary ones on which they are evaluated. This emphasis on the latter three can be seen in the development of quantitative methods designed to compare models on these criteria. These methods are the topic of this article.

In the last two decades, interest in mathematical models of cognition and other psychological processes has increased tremendously. We view this as a positive sign for the discipline, for it suggests that this method of inquiry holds considerable promise. Among other things, a mathematical instantiation of a theory provides a test bed in which researchers can examine the detailed interactions of a model's parts with a level of precision that is not possible with verbal models. Furthermore, through systematic eval...
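As a rough illustration of description-length-based model selection, the sketch below fits two classic psychophysical models to simulated magnitude-estimation data and compares them with a simple two-part code (a fit term plus a parameter-count penalty). This BIC-style approximation omits the Fisher-information complexity term developed in the article, and the data and models are illustrative assumptions rather than the article's examples.

```python
# Toy sketch of selecting between two psychophysical models by a
# description-length-style score. Uses a crude two-part code, not the
# article's full MDL machinery; data and models are illustrative.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
stimulus = np.linspace(1, 100, 20)
# Simulated magnitude estimates generated from a power law plus noise
response = 2.0 * stimulus ** 0.5 + rng.normal(0, 1.0, stimulus.size)

def stevens(x, a, b):        # power law: a * x^b
    return a * x ** b

def fechner(x, a, b):        # logarithmic law: a * ln(x) + b
    return a * np.log(x) + b

def description_length(model, k):
    params, _ = curve_fit(model, stimulus, response, p0=np.ones(k))
    resid = response - model(stimulus, *params)
    n = stimulus.size
    neg_log_lik = 0.5 * n * np.log(np.mean(resid ** 2))  # Gaussian errors, up to a constant
    complexity = 0.5 * k * np.log(n)                      # parameter-count penalty
    return neg_log_lik + complexity

for name, model in [("Stevens (power)", stevens), ("Fechner (log)", fechner)]:
    print(f"{name:16s} DL ~ {description_length(model, 2):.2f}")
# The model with the smaller description length is preferred. Because both
# models here have the same number of parameters, this crude penalty cannot
# distinguish their functional-form flexibility; capturing that difference
# is precisely what the article's complexity measure is designed to do.
```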
SUMMARY Interactions between tumorigenic cells and their surrounding microenvironment are critical for tumor progression yet remain incompletely understood. Germline mutations in the NF1 tumor suppressor gene cause neurofibromatosis type 1 (NF1), a common genetic disorder characterized by complex tumors called neurofibromas. Genetic studies indicate that biallelic loss of Nf1 is required in the tumorigenic cell of origin in the embryonic Schwann cell lineage. However, in the physiologic state, Schwann cell loss of heterozygosity is not sufficient for neurofibroma formation and Nf1 haploinsufficiency in at least one additional nonneoplastic lineage is required for tumor progression. Here, we establish that Nf1 heterozygosity of bone marrow-derived cells in the tumor microenvironment is sufficient to allow neurofibroma progression in the context of Schwann cell Nf1 deficiency. Further, genetic or pharmacologic attenuation of c-kit signaling in Nf1+/− hematopoietic cells diminishes neurofibroma initiation and progression. Finally, these studies implicate mast cells as critical mediators of tumor initiation.
The majority of lung adenocarcinoma patients with epidermal growth factor receptor (EGFR)-mutated or EML4-ALK rearrangement-positive tumors are sensitive to tyrosine kinase inhibitors. However, both primary and acquired resistance to these therapies in a significant number of those patients remains a major clinical problem. The specific molecular mechanisms associated with tyrosine kinase inhibitor resistance are not fully understood. Clinicopathological observations suggest that molecular alterations involving so-called 'driver mutations' could be used as markers that aid in the selection of patients most likely to benefit from targeted therapies. In this review, we summarize recent developments involving the specific molecular mechanisms and markers that have been associated with primary and acquired resistance to EGFR-targeted therapy in lung adenocarcinomas. Understanding these mechanisms may provide new treatment avenues and improve current treatment algorithms.
Renal cell carcinoma (RCC) associated with Xp11.2 translocation is uncommon, characterized by several different translocations involving the TFE3 gene. We assessed the utility of break-apart fluorescence in situ hybridization (FISH) in establishing the diagnosis for suspected or unclassified cases with negative or equivocal TFE3 immunostaining by analyzing 24 renal cancers with break-apart TFE3 FISH and comparing the molecular findings with the results of TFE3 and cathepsin K immunostaining in the same tumors. Ten tumors were originally diagnosed as Xp11.2 RCC on the basis of positive TFE3 immunostaining, and 14 were originally considered unclassified RCCs with negative or equivocal TFE3 staining, but with a range of features suspicious for Xp11.2 RCC. Seventeen cases showed TFE3 rearrangement associated with Xp11.2 translocation by FISH, including all 13 tumors with moderate or strong TFE3 (n=10) or cathepsin K (n=7) immunoreactivity. FISH-positive cases showed negative or equivocal immunoreactivity for TFE3 or cathepsin K in 7 and 10 tumors, respectively (both markers in 3). None had positive immunohistochemistry but negative FISH. Morphologic features were typical for Xp11.2 RCC in 10/17 tumors. Unusual features included 1 melanotic Xp11.2 renal cancer, 1 tumor with mixed features of Xp11.2 RCC and clear cell RCC, and other tumors mimicking clear cell RCC, multilocular cystic RCC, or high-grade urothelial carcinoma. Morphology mimicking high-grade urothelial carcinoma has not been previously reported in these tumors. Psammoma bodies, hyalinized stroma, and intracellular pigment were preferentially identified in FISH-positive cases compared with FISH-negative cases. Our results support the clinical application of a TFE3 break-apart FISH assay for diagnosis and confirmation of Xp11.2 RCC and further expand the histopathologic spectrum of these neoplasms to include tumors with unusual features. A renal tumor with pathologic or clinical features highly suggestive of translocation-associated RCC but exhibiting negative or equivocal TFE3 immunostaining should be evaluated by TFE3 FISH assay to fully assess this possibility.
Activatable theranostic nanomedicines involved in photothermal therapy (PTT) have received considerable attention as promising alternatives to traditional therapies in the clinic. However, theranostic nanomedicines widely suffer from instability and complicated nanostructures, which hamper potential clinical applications. Herein, we demonstrated a terrylenediimide (TDI)-poly(acrylic acid) (TPA)-based nanomedicine (TNM) platform used as an intrinsic theranostic agent. As an exploratory paradigm in seeking biomedical applications, TDI was modified with poly(acrylic acid)s (PAAs), resulting in eight-armed, star-like TPAs composed of an outer hydrophilic PAA corona and an inner hydrophobic TDI core. TNMs were readily fabricated via spontaneous self-assembly. Without any additional vehicle or cargo, the as-prepared TNMs possessed a robust nanostructure and a high photothermal conversion efficiency of approximately 41%. The intrinsic theranostic properties of TNMs for use in photoacoustic (PA) imaging by a multispectral optoacoustic tomography system and in mediating photoinduced tumor ablation were explored in depth. Our results suggested that the TNMs could be successfully exploited as intrinsic theranostic agents for PA imaging-guided efficient tumor PTT. Thus, these TNMs hold great potential for (pre)clinical translational development.