Previous results, based on inhibition of fertilization by an anti–α6 integrin mAb (GoH3), suggest that the α6β1 integrin on mouse eggs functions as the receptor for sperm (Almeida, E.A., A.P. Huovila, A.E. Sutherland, L.E. Stephens, P.G. Calarco, L.M. Shaw, A.M. Mercurio, A. Sonnenberg, P. Primakoff, D.G. Myles, and J.M. White. 1995. Cell. 81:1095–1104). Because the egg surface tetraspanin CD9 is essential for gamete fusion (Kaji, K., S. Oda, T. Shikano, T. Ohnuki, Y. Uematsu, J. Sakagami, N. Tada, S. Miyazaki, and A. Kudo. 2000. Nat. Genet. 24:279–282; Le Naour, F., E. Rubinstein, C. Jasmin, M. Prenant, and C. Boucheix. 2000. Science. 287:319–321; Miyado, K., G. Yamada, S. Yamada, H. Hasuwa, Y. Nakamura, F. Ryu, K. Suzuki, K. Kosai, K. Inoue, A. Ogura, M. Okabe, and E. Mekada. 2000. Science. 287:321–324) and CD9 is known to associate with integrins, recent models of gamete fusion have posited that egg CD9 acts in association with α6β1 in fusion (Chen, M.S., K.S. Tung, S.A. Coonrod, Y. Takahashi, D. Bigler, A. Chang, Y. Yamashita, P.W. Kincade, J.C. Herr, and J.M. White. 1999. Proc. Natl. Acad. Sci. USA. 96:11830–11835; Kaji, K., S. Oda, T. Shikano, T. Ohnuki, Y. Uematsu, J. Sakagami, N. Tada, S. Miyazaki, and A. Kudo. 2000. Nat. Genet. 24:279–282; Le Naour, F., E. Rubinstein, C. Jasmin, M. Prenant, and C. Boucheix. 2000. Science. 287:319–321; Miyado, K., G. Yamada, S. Yamada, H. Hasuwa, Y. Nakamura, F. Ryu, K. Suzuki, K. Kosai, K. Inoue, A. Ogura, M. Okabe, and E. Mekada. 2000. Science. 287:321–324). Using eggs from cultured ovaries of mice lacking the α6 integrin subunit, we found that the fertilization rate, fertilization index, and sperm binding were not impaired compared with wild-type or heterozygous controls. Furthermore, a reexamination of antibody inhibition, using an assay that better simulates in vivo fertilization conditions, revealed no inhibition of fusion by the GoH3 mAb.
We also found that an anti-CD9 mAb completely blocks sperm fusion with either wild-type eggs or eggs lacking α6β1. Based on these results, we conclude that the α6β1 integrin is not essential for sperm–egg fusion, and we suggest a new model in which CD9 acts by itself, or interacts with egg protein(s) other than α6β1, to function in sperm–egg fusion.
When evaluating cognitive models based on fits to observed data (or, really, any model that has free parameters), parameter estimation is critically important. Traditional techniques like hill climbing by minimizing or maximizing a fit statistic often result in point estimates. Bayesian approaches instead estimate parameters as posterior probability distributions, and thus naturally account for the uncertainty associated with parameter estimation; Bayesian approaches also offer powerful and principled methods for model comparison. Although software applications such as WinBUGS (Lunn, Thomas, Best, & Spiegelhalter, Statistics and Computing, 10, 325–337, 2000) and JAGS (Plummer, 2003) provide “turnkey”-style packages for Bayesian inference, they can be inefficient when dealing with models whose parameters are correlated, which is often the case for cognitive models, and they can impose significant technical barriers to adding custom distributions, which is often necessary when implementing cognitive models within a Bayesian framework. A recently developed software package called Stan (Stan Development Team, 2015) can solve both problems, as well as provide a turnkey solution to Bayesian inference. We present a tutorial on how to use Stan and how to add custom distributions to it, with an example using the linear ballistic accumulator model (Brown & Heathcote, Cognitive Psychology, 57, 153–178. doi:10.1016/j.cogpsych.2007.12.002, 2008).
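To make the running example concrete, the sketch below simulates choices and response times from a two-accumulator linear ballistic accumulator in Python. The parameter names (A, b, v, s, t0) follow the usual LBA notation, but the specific values are arbitrary illustrative choices, not taken from the tutorial; this is a minimal forward simulation, not the Stan implementation the tutorial develops.

```python
import numpy as np

def simulate_lba(n_trials, A=0.5, b=1.0, v=(1.2, 0.8), s=0.3, t0=0.2, rng=None):
    """Simulate a two-accumulator linear ballistic accumulator (LBA).

    Each accumulator starts at a uniform point in [0, A] and races linearly
    to threshold b with a trial-wise drift rate drawn from N(v_i, s). The
    first accumulator to reach b gives the response; its finishing time
    plus non-decision time t0 is the response time.
    """
    rng = np.random.default_rng(rng)
    v = np.asarray(v, dtype=float)
    starts = rng.uniform(0.0, A, size=(n_trials, len(v)))
    drifts = rng.normal(v, s, size=(n_trials, len(v)))
    # An accumulator with non-positive drift never finishes: infinite time.
    times = np.where(drifts > 0, (b - starts) / drifts, np.inf)
    # Discard the rare trials on which no accumulator finishes.
    times = times[np.isfinite(times).any(axis=1)]
    responses = times.argmin(axis=1)
    rts = times.min(axis=1) + t0
    return responses, rts

responses, rts = simulate_lba(10_000, rng=1)
```

Simulated data of this form is what one would feed to Stan to check that a custom LBA density recovers the generating parameters.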
We develop a cognitive modeling approach, motivated by classic theories of knowledge representation and judgment from psychology, for combining people's rankings of items. The model makes simple assumptions about how individual differences in knowledge lead to observed ranking data in behavioral tasks. We implement the cognitive model as a Bayesian graphical model, and use computational sampling to infer an aggregate ranking and measures of individual expertise. Applications of the model to 23 data sets, dealing with general knowledge and prediction tasks, show that the model performs well in producing an aggregate ranking that is often close to the ground truth and, as in the “wisdom of the crowd” effect, usually performs better than most individuals. We also present some evidence that the model outperforms the traditional statistical Borda count method, and that the model is able to infer people's relative expertise surprisingly well without knowing the ground truth. We discuss the advantages of the cognitive modeling approach to combining ranking data, and to wisdom of the crowd research generally, as well as highlight a number of potential directions for future model development.
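The Borda count baseline mentioned above is simple to state in code. The sketch below is an illustrative implementation (not the paper's code): each item earns points inversely related to its rank position in each individual's ordering, and items are aggregated by total points.

```python
from collections import defaultdict

def borda_count(rankings):
    """Aggregate rankings with the Borda count.

    `rankings` is a list of orderings, each listing the same items from
    best (first) to worst (last). An item in position p of an n-item
    ranking earns n - 1 - p points; the aggregate ranking orders items
    by total points, highest first.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - 1 - position
    return sorted(scores, key=lambda item: -scores[item])

# Three judges rank four items; "a" is ranked first by a majority.
judges = [["a", "b", "c", "d"],
          ["a", "c", "b", "d"],
          ["b", "a", "d", "c"]]
print(borda_count(judges))  # → ['a', 'b', 'c', 'd']
```

Unlike the cognitive model, this baseline weights every judge equally, which is exactly the property the model-based approach relaxes by inferring individual expertise.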
One of the more principled methods of performing model selection is via Bayes factors. However, calculating Bayes factors requires marginal likelihoods, which are integrals over the entire parameter space, making estimation of Bayes factors for models with more than a few parameters a significant computational challenge. Here, we provide a tutorial review of two Monte Carlo techniques rarely used in psychology that efficiently compute marginal likelihoods: thermodynamic integration (TI; Friel & Pettitt, 2008; Lartillot & Philippe, 2006) and steppingstone sampling (SS; Xie, Lewis, Fan, Kuo, & Chen, 2011). The methods are general and can be easily implemented in existing MCMC code; we provide both the details for implementation and associated R code for the interested reader. While Bayesian toolkits implementing standard statistical analyses (e.g., JASP Team, 2017; Morey & Rouder, 2015) often compute Bayes factors for the researcher, those using Bayesian approaches to evaluate cognitive models are usually left to compute Bayes factors for themselves. Here, we provide examples of the methods by computing marginal likelihoods for a moderately complex model of choice response time, the Linear Ballistic Accumulator model (Brown & Heathcote, 2008), and compare them to the findings of Evans and Brown (2017), who used a brute-force technique. We then present a derivation of TI and SS within a hierarchical framework, provide results of a model recovery case study using hierarchical models, and show an application to empirical data. A companion R package is available at the Open Science Framework: https://osf.io/jpnb4. Formal cognitive models that attempt to explain cognitive processes using mathematics and simulation have been a cornerstone of scientific progress in the field of cognitive psychology.
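The core of thermodynamic integration can be shown in a few lines. The toy example below is an illustrative assumption of ours, not the paper's LBA application: it uses a conjugate normal model so that each power posterior can be sampled exactly and the true marginal likelihood has a closed form against which the TI estimate can be checked. In realistic models, the exact sampling step would be replaced by an MCMC run at each temperature.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy model: y_i ~ Normal(theta, sigma^2), theta ~ Normal(mu0, tau0^2).
# Conjugacy only serves to make the example checkable; TI itself needs
# nothing more than samples from each power posterior
# p_beta(theta) ∝ L(theta)^beta * p(theta).
sigma, mu0, tau0 = 1.0, 0.0, 2.0
y = rng.normal(0.5, sigma, size=20)
n, ybar = len(y), y.mean()
S = ((y - ybar) ** 2).sum()

def log_lik(theta):
    # Log-likelihood of the whole data set for each theta in a 1-D array.
    return stats.norm.logpdf(y[:, None], theta, sigma).sum(axis=0)

# Temperature schedule beta_k = (k/K)^5 concentrates rungs near the prior,
# where the integrand changes fastest.
betas = np.linspace(0.0, 1.0, 33) ** 5

# Each power posterior is normal here, so sample it exactly per rung.
mean_loglik = np.empty_like(betas)
for k, beta in enumerate(betas):
    prec = 1.0 / tau0**2 + beta * n / sigma**2
    mean = (mu0 / tau0**2 + beta * n * ybar / sigma**2) / prec
    draws = rng.normal(mean, prec**-0.5, size=20_000)
    mean_loglik[k] = log_lik(draws).mean()

# TI identity: log Z = ∫_0^1 E_{p_beta}[log L] d(beta); trapezoidal rule.
log_z_ti = np.sum(np.diff(betas) * (mean_loglik[1:] + mean_loglik[:-1]) / 2)

# Closed-form log marginal likelihood of the conjugate model, for checking.
log_z_true = (-(n - 1) / 2 * np.log(2 * np.pi * sigma**2)
              - S / (2 * sigma**2)
              - 0.5 * np.log(n)
              + stats.norm.logpdf(ybar, mu0, np.sqrt(sigma**2 / n + tau0**2)))
print(f"TI estimate: {log_z_ti:.3f}, exact: {log_z_true:.3f}")
```

Steppingstone sampling reuses the same power-posterior draws but combines them through ratios of importance-sampling estimates between adjacent rungs rather than numerical integration.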
When presented with several competing cognitive models, a researcher aims to select between these different explanations in order to determine which model provides the most compelling account of the underlying processes. This is not as simple as selecting the model that provides the best quantitative fit to the empirical data: models that are more complex have greater flexibility and can over-fit the noise in the data (Myung,
We apply a cognitive modeling approach to the problem of measuring expertise on rank ordering problems. In these problems, people must order a set of items in terms of a given criterion (e.g., ordering American holidays through the calendar year). Using a cognitive model of behavior on this problem that allows for individual differences in knowledge, we are able to infer people's expertise directly from the rankings they provide. We show that our model-based measure of expertise outperforms self-report measures, taken both before and after completing the ordering of items, in terms of correlation with the actual accuracy of the answers. These results apply to six general knowledge tasks, like ordering American holidays, and two prediction tasks, involving sporting and television competitions. Based on these results, we discuss the potential and limitations of using cognitive models in assessing expertise.
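A standard way to score how close an individual's ordering is to the ground truth is a rank correlation such as Kendall's tau; whether this matches the paper's exact accuracy measure is an assumption here, and the items below are hypothetical.

```python
from scipy.stats import kendalltau

# Ground-truth order of five hypothetical items (best to worst), encoded
# as rank positions, plus two individuals' orderings of the same items.
truth = [1, 2, 3, 4, 5]
person_a = [1, 2, 3, 5, 4]   # one adjacent swap
person_b = [5, 4, 3, 2, 1]   # fully reversed

tau_a, _ = kendalltau(truth, person_a)
tau_b, _ = kendalltau(truth, person_b)
print(tau_a, tau_b)  # person_a is far closer to the ground truth
```

A self-report expertise rating would be compared against such accuracy scores across people; the paper's claim is that model-inferred expertise correlates with accuracy better than self-reports do.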