While automatic text summarization has received a great deal of attention in recent research, the efficiency of this task has rarely been addressed. Given the size and quantity of documents available on the Internet and from other sources, the need for a highly efficient tool that produces usable summaries is clear. We present a linear-time algorithm for lexical chain computation. The algorithm makes lexical chains a computationally feasible candidate as an intermediate representation for automatic text summarization. A method for evaluating lexical chains as an intermediate step in summarization is also presented and carried out. Such an evaluation was previously not possible because of the computational complexity of earlier lexical chains algorithms.
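To make the representation concrete: a lexical chain groups semantically related words from a text, as in the toy Python sketch below. This is only an illustration of the data structure, not the paper's linear-time algorithm (which relies on a real lexical database and careful bookkeeping); the `RELATED` table and `related()` helper are hypothetical stand-ins for a resource such as WordNet, and the greedy pass shown here is not itself linear-time.

```python
# Hypothetical stand-in for a lexical database such as WordNet:
# maps a word to words it is considered semantically related to.
RELATED = {
    "car": {"automobile", "vehicle", "wheel"},
    "automobile": {"car", "vehicle"},
    "vehicle": {"car", "automobile", "truck"},
    "truck": {"vehicle", "wheel"},
    "banana": {"fruit"},
    "fruit": {"banana", "apple"},
}

def related(a: str, b: str) -> bool:
    """True if the toy lexical database links the two words."""
    return b in RELATED.get(a, set()) or a in RELATED.get(b, set())

def build_chains(words):
    """Greedy pass: attach each word to the first chain containing
    a related word, otherwise start a new chain."""
    chains = []
    for w in words:
        for chain in chains:
            if any(related(w, member) for member in chain):
                chain.append(w)
                break
        else:
            chains.append([w])
    return chains

print(build_chains(["car", "banana", "vehicle", "fruit", "truck"]))
# [['car', 'vehicle', 'truck'], ['banana', 'fruit']]
```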
Identifying inattentive respondents in self-administered surveys is a challenging goal for survey researchers. Instructed response items (IRIs) provide an easy-to-implement measure of inattentiveness in grid questions. The present article adds to the sparse research on the use and implementation of attention checks by addressing three research objectives. In a first study, we provide evidence that IRIs identify respondents who show elevated rates of straightlining, speeding, item nonresponse, inconsistent answers, and implausible statements throughout a survey. Excluding inattentive respondents, however, did not alter the results of substantive analyses. Our second study suggests that respondents’ inattentiveness partially changes as the context in which they complete the survey changes. In a third study, we present experimental evidence that mere exposure to an IRI affects response behavior within a survey neither negatively nor positively. A critical discussion of using IRI attention checks concludes this article.
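As a rough illustration of how such inattentiveness indicators might be operationalized (the column names, the instructed answer, and the speeding threshold below are hypothetical, not taken from the study), this pandas sketch flags respondents who fail an IRI, straightline a grid, or complete the survey implausibly fast:

```python
import pandas as pd

# Hypothetical respondent-level data: one IRI, a five-item grid,
# and total completion time in seconds.
df = pd.DataFrame({
    "iri":  [3, 5, 3, 2],   # instructed answer is 3
    "q1":   [4, 2, 5, 4],
    "q2":   [4, 3, 5, 1],
    "q3":   [4, 4, 5, 2],
    "q4":   [4, 1, 5, 5],
    "q5":   [4, 5, 5, 3],
    "secs": [620, 540, 95, 700],
})

grid = df[["q1", "q2", "q3", "q4", "q5"]]

df["failed_iri"]     = df["iri"].ne(3)             # missed the instruction
df["straightlining"] = grid.nunique(axis=1).eq(1)  # identical grid answers
df["speeding"]       = df["secs"] < 120            # implausibly fast

df["inattentive"] = df[["failed_iri", "straightlining", "speeding"]].any(axis=1)
print(df[["failed_iri", "straightlining", "speeding", "inattentive"]])
```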
This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to emphasize the idiosyncrasy of conscious and unconscious decisions that researchers make during data analysis. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical starting conditions. Researchers’ expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team’s workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers’ results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.
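The "unexplained variance" claim can be pictured as a regression of team-level effect estimates on dummy-coded analytic decisions, where 1 − R² is the unexplained share. The numpy sketch below uses entirely made-up data (not the study's), but shows the form of such a decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

n_teams = 73
# Hypothetical dummy-coded analytic decisions per team
# (e.g., estimator choice, controls included, sample restrictions).
decisions = rng.integers(0, 2, size=(n_teams, 10)).astype(float)
# Hypothetical team-level effect estimates, mostly idiosyncratic noise.
effects = decisions @ rng.normal(0, 0.02, 10) + rng.normal(0, 0.5, n_teams)

# OLS of effect estimates on coded decisions (with intercept).
X = np.column_stack([np.ones(n_teams), decisions])
beta, *_ = np.linalg.lstsq(X, effects, rcond=None)
resid = effects - X @ beta

r2 = 1 - resid.var() / effects.var()
print(f"variance explained by coded decisions: {r2:.1%}")
print(f"unexplained share: {1 - r2:.1%}")
```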
Questionnaire design is routinely guided by classic experiments on question form, wording, and context conducted decades ago. This article explores whether two question order effects (one due to the norm of evenhandedness and the other due to subtraction or perceptual contrast) appear in surveys of probability samples in the United States and 11 other countries (Canada, Denmark, Germany, Iceland, Japan, the Netherlands, Norway, Portugal, Sweden, Taiwan, and the United Kingdom; N = 25,640). Advancing the theory of question order effects, we propose necessary conditions for each effect to occur and find that the effects occurred in the nations where these necessary conditions were met. Surprisingly, the abortion question order effect even appeared in some countries in which the necessary condition was not met, suggesting that the question order effect there (and perhaps elsewhere) was not due to subtraction or perceptual contrast. The question order effects were not moderated by education. The strength of the effect due to the norm of evenhandedness was correlated with various cultural characteristics of the nations. Strong support was observed for the form-resistant correlation hypothesis.
Declining response rates worldwide have stimulated interest in understanding what may be influencing this decline and how it varies across countries and survey populations. In this paper, we describe the development and validation of a short 9-item survey attitude scale that measures three constructs thought by many scholars to be related to decisions to participate in surveys: survey enjoyment, survey value, and survey burden. The survey attitude scale is based on a literature review of earlier work by multiple authors. Our overarching goal is to develop and validate a concise and effective measure of how individuals feel about responding to surveys, one that can be implemented in surveys and panels to understand willingness to participate and to improve survey effectiveness. The research questions relate to the factor structure, measurement equivalence, reliability, and predictive validity of the survey attitude scale. The data came from three probability-based panels: the German GESIS and PPSM panels and the Dutch LISS panel. The survey attitude scale proved to have a replicable three-dimensional factor structure (survey enjoyment, survey value, and survey burden). Partial scalar measurement equivalence was established across the three panels, which employed two languages (German and Dutch) and three measurement modes (web, telephone, and paper mail). For all three dimensions of the survey attitude scale, the reliability of the corresponding subscales (enjoyment, value, and burden) was satisfactory. Furthermore, the scales correlated with survey response in the expected directions, indicating predictive validity.
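Subscale reliability of the kind reported here is commonly summarized with Cronbach's alpha. The sketch below (toy data and hypothetical item responses, not the panels' data) implements the standard formula for a three-item enjoyment subscale:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Standard formula: alpha = k/(k-1) * (1 - sum of item variances
    / variance of the summed scale). Rows are respondents, columns
    are items."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 5-point responses to three enjoyment items.
enjoyment = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
])
print(f"alpha (enjoyment) = {cronbach_alpha(enjoyment):.2f}")  # ~0.90
```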
Combining surveys and digital trace data can enhance the analytic potential of both data types. We present two studies that examine factors influencing the data sharing behaviour of survey respondents for different types of digital trace data: Facebook, Twitter, Spotify and health app data. Across these data types, we compared the relative impact of four factors on data sharing: data sharing method, respondent characteristics, sample composition and incentives. The results show that data sharing rates differ substantially across data types. Two particularly important factors predicting data sharing behaviour are the incentive size and the data sharing method, both of which are directly related to task difficulty and respondent burden. In sum, the paper reveals systematic variation in the willingness to share additional data, which needs to be considered in research designs linking surveys and digital traces.
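One common way to compare the relative impact of such factors is a logistic regression of the sharing decision on incentive size and sharing method. The statsmodels sketch below uses made-up variable names and simulated data, not the paper's design or estimates:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500

# Hypothetical respondent-level data: incentive in euros and
# sharing method (in-browser flow vs. manual file upload).
df = pd.DataFrame({
    "incentive": rng.choice([0, 5, 10], size=n),
    "method": rng.choice(["in_browser", "upload"], size=n),
})
# Simulated sharing decisions: incentives help, a burdensome method hurts.
logit_p = -1.5 + 0.15 * df["incentive"] - 0.8 * (df["method"] == "upload")
df["shared"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("shared ~ incentive + C(method)", data=df).fit(disp=0)
print(model.summary())
```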
A major challenge in web-based cross-cultural data collection is varying response rates, which can result in low data quality and non-response bias. Country-specific factors, including the political and demographic, economic, technological, and socio-cultural environment, may affect response rates to web surveys. This study evaluates web survey response rates using meta-analytical methods based on 110 experimental studies from seven countries. Three dependent variables (effect sizes) are used: the web response rate, the response rate of the comparison survey mode, and the difference between the two. The meta-analysis indicates that four country-specific factors (political and demographic, economic, technological, and socio-cultural) affect the magnitude of web survey response rates. Specifically, web surveys achieve high response rates in countries with high population growth, high internet coverage, and a high survey participation propensity. On the other hand, web surveys are at a disadvantage in countries with a high population age and high cell phone coverage. This study concludes that web surveys can be a reliable alternative to other survey modes due to their consistent response rates and are expected to be used more frequently in national and international settings.
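The pooling step in such a meta-analysis is often a random-effects model over the per-study effect sizes; a standard choice is the DerSimonian-Laird estimator. The numpy sketch below pools hypothetical response-rate differences (web minus comparison mode) with their sampling variances; the numbers are illustrative, not from the 110 studies:

```python
import numpy as np

# Hypothetical per-study effect sizes: web response rate minus the
# comparison mode's response rate, with sampling variances.
effects = np.array([-0.12, -0.05, 0.02, -0.08, -0.10])
variances = np.array([0.0010, 0.0008, 0.0015, 0.0012, 0.0009])

# DerSimonian-Laird estimate of between-study variance tau^2.
w = 1 / variances                       # fixed-effect weights
fixed = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed) ** 2)  # heterogeneity statistic Q
df_q = len(effects) - 1
tau2 = max(0.0, (q - df_q) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooling with the inflated weights.
w_re = 1 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"tau^2 = {tau2:.5f}")
print(f"pooled difference = {pooled:.3f} "
      f"(95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")
```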