Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Here we report the results of five relevant double-blind, randomized controlled experiments, using a total of 4,556 undecided voters representing diverse demographic characteristics of the voting populations of the United States and India. The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India's 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company.

search engine manipulation effect | search rankings | Internet influence | voter manipulation | digital bandwagon effect

Recent research has demonstrated that the rankings of search results provided by search engine companies have a dramatic impact on consumer attitudes, preferences, and behavior (1-12); this is presumably why North American companies now spend more than 20 billion US dollars annually on efforts to place results at the top of rankings (13, 14).
Studies using eye-tracking technology have shown that people generally scan search engine results in the order in which the results appear and then fixate on the results that rank highest, even when lower-ranked results are more relevant to their search (1-5). Higher-ranked links also draw more clicks, and consequently people spend more time on Web pages associated with higher-ranked search results (1-9). A recent analysis of ∼300 million clicks on one search engine found that 91.5% of those clicks were on the first page of search results, with 32.5% on the first result and 17.6% on the second (7). The study also reported that the bottom item on the first page of results drew 140% more clicks than the first item on the second page (7). These phenomena occur apparently because people trust search engine companies to assign higher ranks to the results best suited to their needs (1-4, 11), even though users generally have no idea how results get ranked (15).

Why do search rankings elicit such consistent browsing behavior? Part of the answer lies in the basic design of a search engine …
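The click figures above combine neatly; here is a minimal Python sketch that makes the arithmetic explicit. The three reported shares come from the cited analysis, while the page-two click share is a hypothetical placeholder chosen only to illustrate the "140% more" ratio.

```python
# Reported shares from the ~300-million-click analysis cited above:
first_result = 32.5       # % of all clicks on result 1
second_result = 17.6      # % of all clicks on result 2
first_page_total = 91.5   # % of all clicks landing on page one

# "140% more clicks" means a multiplicative factor of 2.4, not 1.4.
# The page-two share below is hypothetical, used only to show the ratio.
page_two_top = 1.0                     # hypothetical share, in %
page_one_bottom = page_two_top * 2.4   # 140% more = x2.4

# The top two results alone account for just over half of all clicks:
top_two_share = first_result + second_result   # 50.1
```

The takeaway of the sketch is the steepness of the drop: crossing the page boundary costs more attention than falling several ranks within page one.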
There is a growing consensus that online platforms have a systematic influence on the democratic process. However, research beyond social media is limited. In this paper, we report the results of a mixed-methods algorithm audit of partisan audience bias and personalization within Google Search. Following Donald Trump's inauguration, we recruited 187 participants to complete a survey and install a browser extension that enabled us to collect Search Engine Results Pages (SERPs) from their computers. To quantify partisan audience bias, we developed a domain-level score by leveraging the sharing propensities of registered voters on a large Twitter panel. We found little evidence for the "filter bubble" hypothesis. Instead, we found that results positioned toward the bottom of Google SERPs were more left-leaning than results positioned toward the top, and that the direction and magnitude of overall lean varied by search query, component type (e.g., "answer boxes"), and other factors. Utilizing rank-weighted metrics that we adapted from prior work, we also found that Google's rankings shifted the average lean of SERPs to the right of their unweighted average.
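The rank-weighted lean idea in this abstract can be sketched in a few lines of Python. This is an illustrative sketch, not the audit's exact metric: the exponential-decay weights and the example domain scores are assumptions introduced here.

```python
# Sketch of a rank-weighted partisan-lean score for a SERP.
# Domain lean scores run from -1 (left) to +1 (right). The
# exponential-decay weighting is an illustrative assumption,
# not the exact weighting used in the audit described above.

def rank_weighted_lean(leans, decay=0.5):
    """Average lean of a ranked list, discounting lower ranks."""
    weights = [decay ** rank for rank in range(len(leans))]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, leans)) / total

# A hypothetical SERP whose lower-ranked results lean left:
serp = [0.4, 0.2, 0.0, -0.3, -0.6]

unweighted = sum(serp) / len(serp)    # -0.06: slightly left overall
weighted = rank_weighted_lean(serp)   # positive: rank weighting pulls right
```

Because the weights concentrate on the top ranks, a list whose left-leaning items sit near the bottom comes out to the right of its unweighted mean, which is the pattern the abstract reports.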
One major concern about fake news is that it could damage public trust in democratic institutions. We examined this possibility using longitudinal survey data combined with records of online behavior. Our study found that online misinformation was linked to lower trust in mainstream media across party lines. However, for moderates and conservatives, exposure to fake news predicted higher confidence in political institutions. The mostly right-leaning fake news accessed by our moderate-to-conservative respondents could strengthen their trust in a Republican government. This was not true for liberals, who may have been biased against such content and thus less likely to believe its claims.
In this work, we introduce a novel metric for auditing group fairness in ranked lists. Our approach offers two benefits compared to the state of the art. First, we offer a blueprint for modeling user attention. Rather than assuming a logarithmic loss in importance as a function of rank, we can account for varying user behaviors through parametrization. For example, we expect a user to see more items during a viewing of a social media feed than when they inspect the results list of a single web search query. Second, we allow non-binary protected attributes, to enable investigating inherently continuous attributes (e.g., political alignment on the liberal-to-conservative spectrum) as well as to facilitate measurements across aggregated sets of search results, rather than separately for each result list. By combining these two elements into our metric, we are able to better address the human factors inherent in this problem. We measure the whole sociotechnical system, consisting of a ranking algorithm and individuals using it, instead of exclusively focusing on the ranking algorithm. Finally, we use our metric to perform three simulated fairness audits. We show that determining fairness of a ranked output necessitates knowledge (or a model) of the end-users of the particular service. Depending on their attention distribution function, a fixed ranking of results can appear biased both in favor of and against a protected group.

CCS CONCEPTS
• Information systems → Page and site ranking; Content ranking; • Human-centered computing → User interface design.

KEYWORDS
information retrieval; group fairness; ranked lists

1 We use "top" and "high" to refer to the numerically lowest ranks in lists, e.g., rank one, in keeping with the norms of the IR literature [10, 27].
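The two ingredients this abstract describes, a parametrized attention model and a continuous protected attribute, can be combined in a short sketch. The geometric "cascade" attention model and the example attribute values are assumptions made here for illustration, not the authors' exact formulation.

```python
# Sketch of a fairness-audit metric with a parametrized user-attention
# model, in the spirit of the approach described above. The geometric
# attention model and example values are illustrative assumptions.

def attention(rank, p=0.7):
    """Probability that a user examines the item at `rank` (0-indexed).
    The parameter p controls scan depth: a feed-style p near 1 spreads
    attention far deeper than a single impatient web query (small p)."""
    return p ** rank

def exposure_gap(attributes, p=0.7):
    """Difference between the attention-weighted and unweighted means of
    a continuous protected attribute (e.g., political alignment in
    [-1, 1]). A gap of 0 means the ranking distributes attention
    neutrally with respect to the attribute."""
    weights = [attention(r, p) for r in range(len(attributes))]
    total = sum(weights)
    weighted = sum(w * a for w, a in zip(weights, attributes)) / total
    unweighted = sum(attributes) / len(attributes)
    return weighted - unweighted

# The same fixed ranking yields very different audit outcomes
# depending on the audience's attention distribution:
ranking = [0.8, 0.1, -0.2, -0.9]
shallow = exposure_gap(ranking, p=0.3)   # impatient searchers: large gap
deep = exposure_gap(ranking, p=0.99)     # feed-style scanning: gap near 0
```

This mirrors the abstract's central point: whether a fixed ranking looks fair depends on a model of the end-users, because the attention function, not the ranking alone, determines who is over- or under-exposed.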
Search engines are a primary means through which people obtain information in today's connected world. Yet, apart from the search engine companies themselves, little is known about how their algorithms filter, rank, and present the web to users. This question is especially pertinent with respect to political queries, given growing concerns about filter bubbles and the recent finding that bias or favoritism in search rankings can influence voting behavior. In this study, we conduct a targeted algorithm audit of Google Search using a dynamic set of political queries. We designed a Chrome extension to survey participants and collect the Search Engine Results Pages (SERPs) and autocomplete suggestions that they would have been exposed to while searching our set of political queries during the month after Donald Trump's Presidential inauguration. Using this data, we found significant differences in the composition and personalization of politically related SERPs by query type, subjects' characteristics, and date.

INTRODUCTION
Recent concerns surrounding political polarization, fake news, and the impact of media on public opinion have largely focused on social media platforms like Facebook and Twitter. Yet recent surveys suggest that more news is sought through search engines than social media [2, 57], and that search engines are the second most likely news gateway to inspire follow-up actions, such as …
A recent series of experiments demonstrated that introducing ranking bias to election-related search engine results can have a strong and undetectable influence on the preferences of undecided voters. This phenomenon, called the Search Engine Manipulation Effect (SEME), exerts influence largely through order effects that are enhanced in a digital context. We present data from three new experiments involving 3,600 subjects in 39 countries in which we replicate SEME and test design interventions for suppressing the effect. In the replication, voting preferences shifted by 39.0%, a number almost identical to the shift found in a previously published experiment (37.1%). Alerting users to the ranking bias reduced the shift to 22.1%, and more detailed alerts reduced it to 13.8%. Users' browsing behaviors were also significantly altered by the alerts, with more clicks and time going to lower-ranked search results. Although bias alerts were effective in suppressing SEME, we found that SEME could be completely eliminated only by alternating search results, in effect imposing an equal-time rule. We propose a browser extension capable of deploying bias alerts in real time and speculate that SEME might be impacting a wide range of decision-making, not just voting, in which case search engines might need to be strictly regulated.
Written by Michelle A. Amazeen, Fabrício Benevenuto, Nadia M. Brashier, Robert M. Bond, Lia C. Bozarth, Ceren Budak, Ullrich K. H. Ecker, Lisa K. Fazio, Emilio Ferrara, Andrew J. Flanagin, Alessandro Flammini, Deen Freelon, Nir Grinberg, Ralph Hertwig, Kathleen Hall Jamieson, Kenneth Joseph, Jason J. Jones, R. Kelly Garrett, Daniel Kreiss, Shannon McGregor, Jasmine McNealy, Drew Margolin, Alice Marwick, Filippo Menczer, Miriam J. Metzger, Seungahn Nah, Stephan Lewandowsky, Philipp Lorenz-Spreen, Pablo Ortellado, Irene Pasquetto, Gordon Pennycook, Ethan Porter, David G. Rand, Ronald Robertson, Briony Swire-Thompson, Francesca Tripodi, Soroush Vosoughi, Chris Vargo, Onur Varol, Brian E. Weeks, John Wihbey, Thomas J. Wood, & Kai-Cheng Yang
When do people feel comfortable enough to provide honest answers to sensitive questions? Focusing specifically on sexual orientation prevalence, a measure that is sensitive to the pressures of heteronormativity, the present study was conducted to examine the variability in U.S. estimates of non-heterosexual identity prevalence and to determine how comfortable people are with answering questions about their sexual orientation when asked through commonly used survey modes. We found that estimates of non-heterosexual prevalence in the U.S. increased as the privacy and anonymity of the survey increased. Utilizing an online questionnaire, we rank-ordered 16 survey modes by asking people to rate their level of comfort with each mode in the context of being asked questions about their sexual orientation. A demographically diverse sample of 652 individuals in the U.S. rated each mode on a scale from -5 (very uncomfortable) to +5 (very comfortable). Modes included anonymous (name not required) and non-anonymous (name required) versions of questions, as well as self-administered and interviewer-administered versions. Subjects reported significantly higher mean comfort levels with anonymous modes than with non-anonymous modes and significantly higher mean comfort levels with self-administered modes than with interviewer-administered modes. Subjects reported the highest mean comfort level with anonymous online surveys and the lowest with non-anonymous personal interviews that included a video recording. Compared with the estimate produced by an online survey with a nationally representative sample, surveys utilizing more intrusive methodologies may have underestimated non-heterosexual prevalence in the U.S. by between 50% and 414%. Implications for public policy are discussed.