Screening is considered a necessary mechanism for alleviating information asymmetry, but it has also raised concerns about increased discrimination on online peer-to-peer market platforms. Paradoxically, providers of goods and services may voluntarily forgo screening even though doing so increases the risks and costs associated with poor matches. We examine which providers choose to forgo screening, why they do so, and how this choice affects their performance outcomes. Our empirical context is the Airbnb platform, where the “Instant Book” feature enables hosts to forgo the screening of guests. Using a unique panel dataset of all listings in New York City from August 2015 to February 2017, we first explore the antecedents of voluntarily switching to instant booking and then causally identify the impacts of switching. We find that forgoing screening yields economic benefits through increased occupancy, even as review ratings decline; these effects are stronger for Black and female hosts. We discuss the strategic and social welfare implications of these findings within the context of current conversations on discrimination and bias in the sharing economy.
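To make the panel-data strategy concrete, the following is a minimal sketch of the kind of two-way fixed-effects regression such a design implies, run on synthetic data. The variable names (occupancy, instant_book, listing_id, month) are hypothetical stand-ins, and the randomly assigned switch indicator is a simplification; this illustrates the general approach, not the paper's actual specification.

```python
# Sketch: effect of an Instant Book switch on occupancy with listing and
# month fixed effects; all column names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_listings, n_months = 50, 19  # toy panel; the study spans Aug 2015-Feb 2017
df = pd.DataFrame(
    [(i, t) for i in range(n_listings) for t in range(n_months)],
    columns=["listing_id", "month"],
)
df["instant_book"] = (rng.random(len(df)) < 0.3).astype(int)  # simplified switch indicator
df["occupancy"] = 0.5 + 0.05 * df["instant_book"] + rng.normal(0, 0.1, len(df))

# Listing and month fixed effects absorb time-invariant listing traits and
# seasonality; standard errors are clustered by listing.
fit = smf.ols("occupancy ~ instant_book + C(listing_id) + C(month)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["listing_id"]}
)
print(fit.params["instant_book"])
```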
Problem definition: Innovation contest platforms are often organized around specific fields and host contests that span a variety of interdependent problem domains. Whereas contestants may benefit from related experience in contests whose problem domains share an interdependency with the focal problem domain, it is unclear whether the benefits of related experience arise symmetrically from upstream experience (i.e., experience in problem domains that provide input information to the focal problem domain) and downstream experience (i.e., experience in problem domains that use output information from the focal problem domain) or differ between them.

Academic/practical relevance: Given that innovation contest platforms serve to effectively match contest problem requirements with contestants’ skills, it is important to understand how a contestant’s prior experience on a platform contributes to her problem-solving performance. Our research examines the benefits of related experience at a more granular level than prior studies on individual learning or innovation contests.

Methodology: We collected detailed archival data from TopCoder, a leading innovation contest platform that hosts contests across multiple interdependent software development problem domains, from its launch in 2001 to September 2013. Our dataset comprises the detailed participation histories of 821 contestants in 3,274 contests across eight interdependent problem domains, yielding 8,985 observations.

Results: Although a contestant’s related experience on the innovation contest platform is more positively associated with her focal contest performance than unrelated experience, the benefits of related experience arise only from downstream experience; we find no significant performance benefits of upstream experience. Furthermore, the performance benefits of downstream experience are greater when the contest duration is shorter, highlighting their role in enabling more efficient search and problem solving on innovation contest platforms with interdependent problem domains.

Managerial implications: Contrary to the notion of “hyperspecialization,” our findings suggest that contestants can reap benefits from diversifying their experience into downstream problem domains on innovation contest platforms. Furthermore, innovation contest platforms could facilitate such targeted diversification of contestant experience by developing more granular metrics of contestant experience across problem domains. Our findings also have implications for resource allocation and job rotation decisions in software development projects within firms.
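As an illustration of how upstream and downstream experience can be separated empirically, here is a minimal sketch on synthetic data with hypothetical variable names (performance, upstream_exp, downstream_exp, duration). The interaction term captures the idea that downstream experience matters more in shorter contests; this is a schematic of the general strategy, not the paper's actual model.

```python
# Sketch: contest performance regressed on upstream vs. downstream related
# experience, with contestant fixed effects; all variables are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "contestant_id": rng.integers(0, 100, n),
    "upstream_exp": rng.poisson(3, n),    # prior contests in input domains
    "downstream_exp": rng.poisson(3, n),  # prior contests in output domains
    "duration": rng.uniform(3, 14, n),    # contest length in days
})
df["performance"] = (
    0.4 * df["downstream_exp"]
    - 0.02 * df["downstream_exp"] * df["duration"]  # benefit shrinks as duration grows
    + rng.normal(0, 1, n)
)

# Contestant fixed effects control for stable ability; the interaction tests
# whether downstream experience helps more when contests are shorter.
fit = smf.ols(
    "performance ~ upstream_exp + downstream_exp * duration + C(contestant_id)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["contestant_id"]})
print(fit.params[["upstream_exp", "downstream_exp", "downstream_exp:duration"]])
```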
As more businesses turn to crowdsourcing platforms for solutions to business problems, determining how to manage sourcing contests according to their objectives has become critically important. Beyond static design parameters such as the reward, one lever organizations can use to dynamically steer contests toward desirable goals is the feedback offered to contestants during the contest. Drawing on the psychology literature on feedback intervention theory, we first classify feedback into two types: outcome and process. Second, using data from almost 12,000 design contests, we empirically examine the effects of the two types of feedback on the convergence and diversity of submissions following feedback interventions. We find that process feedback, which provides goal-oriented information to contestants, fosters convergent thinking, leading to submissions that are more similar to one another. Outcome feedback, on the other hand, encourages divergent thinking, producing a greater variety of solutions to a problem. Furthermore, these effects are strengthened when the feedback is provided earlier in the contest rather than later. Based on our findings, we offer insights on how practitioners can strategically use the appropriate form of feedback either to generate a greater diversity of solutions or to converge efficiently on an acceptable solution.
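One way to operationalize convergence versus diversity of submissions is the mean pairwise cosine similarity of submission feature vectors before and after a feedback event. The sketch below uses random placeholder vectors and an assumed featurization; the paper's actual similarity measure for designs may differ.

```python
# Sketch: quantifying convergence of submissions after feedback as the mean
# pairwise cosine similarity of (placeholder) submission feature vectors.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(X: np.ndarray) -> float:
    """Average off-diagonal cosine similarity; higher = more convergent."""
    sims = cosine_similarity(X)
    n = len(X)
    return (sims.sum() - n) / (n * (n - 1))  # drop the diagonal of ones

rng = np.random.default_rng(2)
before = rng.normal(size=(20, 64))  # submissions before feedback: dispersed
after = before.mean(axis=0) + 0.3 * rng.normal(size=(20, 64))  # clustered around a common direction

print(mean_pairwise_similarity(before))  # lower: divergent submissions
print(mean_pairwise_similarity(after))   # higher: convergent submissions
```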