We seek to understand how the experiences of groups that differ in gender, ethnicity, and sexual orientation produce college-level educational performances that differ from those of the dominant majority group. We employ two datasets: a National Database of 24,701 participants and a Paired-Measures Database of 3,323 participants. Both datasets provide demographic information; three socioeconomic conditions (status as a first-generation student, English as a first language, and interest in majoring in science); and competency scores on understanding science as a way of knowing obtained from the Science Literacy Concept Inventory. The Paired-Measures Database includes additional self-assessed competence ratings that enable quantifying affective confidence. We meld the ways of knowing of ethics, numeracy, and social justice, especially the social justice concept of Othering, to interpret our data. Two of three competing hypotheses about self-assessment encourage Othering. Our data strongly support the third: that all groups are good at self-assessment and merit equal respect. Women and men are equally competent in science literacy. Women, on average, are more accurate in their self-assessments, whereas men, on average, are overconfident. Participants with minority sexual orientations register higher competence than the binary-sexual majority but are less confident of their competency. Minority ethnicities, on average, produce significantly lower science literacy scores. With one exception (Middle Eastern), groups produce mean self-assessed competence ratings that are remarkably accurate predictors of their mean competence scores. The three socioeconomic conditions exert significant and unequal impacts across ethnic groups, with Hispanic, Middle Eastern, and Pacific Islander data providing some unique results.
Despite nearly two decades of research, researchers have not resolved whether people generally perceive their skills accurately or inaccurately. In this paper, we trace this lack of resolution to numeracy, specifically to the frequently overlooked complications that arise from the noisy data produced by the paired measures that researchers employ to determine self-assessment accuracy. To illustrate the complications and ways to resolve them, we employ a large dataset (N = 1154) obtained from paired measures of documented reliability to study self-assessed proficiency in science literacy. We collected demographic information that allowed both criterion-referenced and normative analyses of self-assessment data. We used these analyses to propose a quantitatively based classification scale and to show how its use informs the nature of self-assessment. Much of the current consensus about people's inability to self-assess accurately comes from interpreting normative data presented in the Kruger-Dunning graphical format or in the closely related (y − x) vs. x graphical convention. Our data show that people's self-assessments of competence generally reflect a genuine competence that they can demonstrate. That finding contradicts the current consensus about the nature of self-assessment. Our results further confirm that experts self-assess their abilities more proficiently than novices and that women, in general, self-assess more accurately than men. The validity of interpretations of data depends strongly on how carefully researchers consider the numeracy that underlies graphical presentations and conclusions. Our results indicate that carefully measured self-assessments provide valid, measurable, and valuable information about proficiency.
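The complications that noisy paired measures introduce can be illustrated with a minimal simulation (a sketch of the general numeracy point, not an analysis from the paper): when two entirely random, uncorrelated scores stand in for "actual" and "self-assessed" competence, a plot of (y − x) against x still yields the familiar Kruger-Dunning pattern of apparent overestimation by the bottom quartile and underestimation by the top quartile, because the difference (y − x) is negatively correlated with x by construction. The sample size of 1154 below merely echoes the abstract's N for scale.

```python
import random

random.seed(42)

# Purely random paired measures: no real relationship between the two scores.
n = 1154
x = [random.uniform(0, 100) for _ in range(n)]  # "actual" competence scores
y = [random.uniform(0, 100) for _ in range(n)]  # "self-assessed" scores

# Group by quartile of the actual score, as Kruger-Dunning style plots do,
# then report the mean self-assessment gap (y - x) within each quartile.
pairs = sorted(zip(x, y))  # sorted by actual score
gaps = []
for q in range(4):
    group = pairs[q * n // 4:(q + 1) * n // 4]
    mean_gap = sum(yy - xx for xx, yy in group) / len(group)
    gaps.append(mean_gap)
    print(f"Quartile {q + 1}: mean (self-assessed - actual) = {mean_gap:+.1f}")
```

Even though the data contain no genuine self-assessment signal at all, the printed gaps decrease monotonically from strongly positive (bottom quartile "overconfident") to strongly negative (top quartile "underconfident"), which is why interpreting such plots without attending to this artifact can mislead.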