In this paper we distinguish between two types of white lies: those that help others at the expense of the person telling the lie, which we term altruistic white lies, and those that help both others and the liar, which we term Pareto white lies. We find that a large fraction of participants are reluctant to tell even a Pareto white lie, demonstrating a pure lie aversion independent of any social preferences for outcomes. In contrast, a nonnegligible fraction of participants are willing to tell an altruistic white lie that slightly hurts them but substantially helps others. Comparing white lies with lies that increase the liar's payoff at another's expense reveals important insights into the interaction of incentives, lying aversion, and preferences for payoff distributions. Finally, in line with previous findings, women are less likely to lie when doing so is costly to the other party. Interestingly, though, we find that women are more likely to tell an altruistic lie.
Organizations increasingly seek solutions to their open-ended design problems by employing a contest approach in which search over a solution space is delegated to outside agents. We study this new class of problems, which are costly to specify, pose credibility issues for the focal firm, and require finely tuned awards to meet the firm's needs. Through an analytical model, we examine the relationship between problem specification, award structure, and the breadth of solution space searched by outside agents, toward characterizing how a firm should effectively manage such open-ended design contests. Our results independently establish, and offer a causal explanation for, an interesting phenomenon observed in design contests: clustering of searchers in specific regions of the solution space. The analysis also yields a cautionary finding: although the breadth of search increases with the number of searchers, the relationship is strongly sublinear (logarithmic). Finally, from the practical perspective of managing the delegated search process, our results offer rules of thumb on how many awards should be offered and of what size, as well as the extent to which firms should undertake problem specification, contingent on the nature (open-endedness and uncertainty) of the design problem being delegated to outside agents.
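The practical force of the sublinear (logarithmic) finding can be seen with a toy calculation. The logarithmic form follows the abstract's qualitative claim; the function name and the constant `c` are illustrative placeholders, not parameters from the paper's model:

```python
import math

def search_breadth(n_searchers, c=1.0):
    """Hypothetical breadth-of-search model: breadth = c * ln(n).

    The logarithmic shape mirrors the abstract's finding that breadth
    grows strongly sublinearly in the number of searchers; c is an
    illustrative scaling constant, not a quantity from the paper.
    """
    return c * math.log(n_searchers)

# Diminishing returns: doubling the number of searchers adds only the
# constant increment c * ln(2) to breadth, no matter how large n already is.
gain_small = search_breadth(20) - search_breadth(10)    # 10 -> 20 searchers
gain_large = search_breadth(200) - search_breadth(100)  # 100 -> 200 searchers
```

Under this assumed form, recruiting 100 additional searchers (100 to 200) widens the search by exactly as much as recruiting 10 additional searchers (10 to 20), which is why the abstract frames the result as cautionary.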
Past research in new product development (NPD) has conceptualized prototyping as a "design-build-test-analyze" cycle to emphasize the importance of analyzing test results in guiding decisions made during the experimentation process. New product designs often involve complex architectures and incorporate numerous components, which makes the ex ante assessment of their performance difficult. Still, design teams often learn from test outcomes during iterative test cycles, enabling them to infer valuable information about the performance of (as yet) untested designs. We conceptualize the extent of useful learning from the analysis of a test outcome as depending on two key structural characteristics of the design space: whether the designs are "close" to each other (i.e., similar at the attribute level) and whether the design attributes exhibit nontrivial interactions (i.e., the performance function is complex). This study explicitly considers the design space structure and the resulting correlations among design performances, and examines their implications for learning. We derive the optimal dynamic testing policy and analyze its qualitative properties. Our results show that continuation is optimal only when the previous test outcomes lie between two thresholds. Outcomes below the lower threshold indicate an overall low-performing design space, and, consequently, continued testing is suboptimal. Test outcomes above the upper threshold, on the other hand, merit termination because they signal to the design team that the likelihood of obtaining a design with a still higher performance (given the experimentation cost) is low.
We find that accounting for the design space structure splits the experimentation process into two phases: an initial exploration phase, in which the design team focuses on obtaining information about the design space, and a subsequent exploitation phase, in which the design team, given its understanding of the design space, focuses on obtaining a "good enough" configuration. Our analysis also provides useful contingency-based guidelines for managerial action as information is revealed through the testing cycle. Finally, we extend the optimal policy to account for design spaces that contain distinct design subclasses.

Keywords: sequential testing, design space, complexity, contingency analysis
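The two-threshold stopping structure described above can be sketched as a simple decision rule. This is a minimal illustration only: the threshold values here are hypothetical inputs, whereas the paper derives them endogenously from the design-space structure and the experimentation cost:

```python
def continue_testing(outcome, lower, upper):
    """Return True if another test cycle should be run.

    Continuation is worthwhile only when the latest test outcome lies
    strictly between the two thresholds: outcomes below `lower` signal an
    overall low-performing design space, while outcomes above `upper`
    signal that a still-better design is unlikely to be found given the
    cost of further testing. (Thresholds are assumed given here; the
    paper derives them from its model.)
    """
    return lower < outcome < upper

# Illustrative (hypothetical) thresholds on a 0-1 performance scale.
LOWER, UPPER = 0.3, 0.8
decisions = [continue_testing(x, LOWER, UPPER) for x in (0.1, 0.5, 0.9)]
# → [False, True, False]: stop on a very poor outcome, continue on a
# middling one, and stop once a sufficiently good design is found.
```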
Motivated by several examples from industry, such as the introduction of a biotechnology-based process innovation in nylon manufacturing, we consider a technology provider that develops and introduces innovations to a market of industrial customers: original equipment manufacturers (OEMs). The technology employed by these OEMs determines the performance quality of the end product they manufacture, which in turn forms the basis of competition among them. Within this context of downstream competition, we examine the technology provider's introduction strategies when improving technologies are introduced sequentially. We develop a two-period game-theoretic framework to account for the strategic considerations of the parties involved (i.e., the technology provider and the OEMs). Our main result indicates that the technology provider may find it beneficial to induce only partial adoption of the new technology, depending on the technological progress the provider intends to offer in the future. We analyze many technology-specific and market-related characteristics, such as volume-based pricing for new component technologies, upgrade prices, and OEMs with differing capabilities, that correspond to various business settings. Our key result (i.e., partial adoption) proves to be a robust phenomenon. We also develop additional insights regarding the interactions between adoption and OEM capabilities.

Keywords: technology introduction, technology adoption, game theory, industrial markets, industrial customers, business-to-business, multistage game