This article discusses the pitfalls and opportunities of AI in marketing through the lenses of knowledge creation and knowledge transfer. First, we discuss the notion of "higher-order learning" that distinguishes AI applications from traditional modeling approaches, and, focusing on recent advances in deep neural networks, we cover their underlying methodologies (multilayer perceptron, convolutional, and recurrent neural networks) and learning paradigms (supervised, unsupervised, and reinforcement learning). Second, we discuss the technological pitfalls and dangers marketing managers need to be aware of when implementing AI in their organizations, including the concepts of poorly defined objective functions, unsafe or unrealistic learning environments, biased AI, explainable AI, and controllable AI. Third, although AI will have a deep impact on predictive tasks that can be automated and require little explainability, we predict that AI will fall short of its promises in many marketing domains unless we solve the challenges of tacit knowledge transfer between AI models and marketing organizations.
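To make the supervised-learning paradigm and the multilayer perceptron concrete, here is a minimal pure-Python sketch, not taken from the article: a one-hidden-layer network trained by stochastic gradient descent on the XOR task. The architecture, hyperparameters, and task are illustrative assumptions chosen for brevity.

```python
# Minimal sketch (illustrative, not from the article): a multilayer perceptron
# with one tanh hidden layer, trained by SGD on XOR -- a mapping that no
# single-layer (linear) model can represent, which is the point of the example.
import math
import random

random.seed(1)

def forward(x):
    """Forward pass: inputs -> tanh hidden layer -> linear output."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return h, sum(w * hi for w, hi in zip(w2, h)) + b2

H = 4  # number of hidden units (an arbitrary small choice)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

# Supervised learning: labeled (input, target) pairs for XOR.
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_loss()
lr = 0.2
for _ in range(3000):
    for x, t in data:
        h, y = forward(x)
        err = 2 * (y - t)                        # d(loss)/d(output)
        b2 -= lr * err
        for j in range(H):
            gh = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            b1[j] -= lr * gh
            for i in range(2):
                w1[j][i] -= lr * gh * x[i]
after = total_loss()
```

Convolutional and recurrent networks extend this same gradient-based training with weight sharing across space and time, respectively.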
The strength of bargainers' preferences for fair settlements has important implications for predicting negotiation outcomes and guiding bargaining strategy. Existing literature reports a few calibration exercises for social utility models, but the predictive accuracy of these models for out-of-sample forecasting remains unknown. Therefore, we investigate whether fairness considerations are stable enough across bargaining situations to be quantified and used to forecast bargaining behavior accurately. We develop a model that embeds a preference for fair treatment in a quantal response framework to account for noise and experience. In addition, we estimate preference for fairness (willingness to pay) using the simplest, one-round version of sequential bargaining games and then employ it to perform out-of-sample forecasts of multiple-round games of various lengths, discount factors, pie sizes, and levels of bargainer experience. Except in circumstances in which the bargaining pie is very small, the fitted model has significant and substantial out-of-sample explanatory power. The stability we find implies that the model and techniques might ultimately be extended to estimates of the influence of fairness on field negotiations, as well as across subpopulations.

Keywords: games, group decisions, bargaining, utility-preference, estimation
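The general idea of embedding a fairness preference in a quantal response framework can be illustrated with a hedged sketch, which is not the paper's estimated model: a responder in a one-round ultimatum game whose utility penalizes receiving less than an equal split, with noisy (logit) choice. The functional form, parameter names, and values below are assumptions for exposition only.

```python
# Illustrative sketch (assumed functional form, not the paper's fitted model):
# logit quantal response for an ultimatum-game responder with a fairness penalty.
import math

def accept_probability(offer, pie, fairness_cost, precision):
    """P(accept) under logit choice: 1 / (1 + exp(-precision * (u_accept - u_reject)))."""
    shortfall = max(pie / 2 - offer, 0.0)          # gap below an equal split
    u_accept = offer - fairness_cost * shortfall   # payoff minus fairness penalty
    u_reject = 0.0                                 # rejection yields nothing
    return 1.0 / (1.0 + math.exp(-precision * (u_accept - u_reject)))
```

With these hypothetical parameters, a fair offer of 5 out of a pie of 10 is accepted almost surely, while a low offer of 1 is almost surely rejected, yet the precision parameter keeps both probabilities strictly between 0 and 1, which is how the quantal response framework absorbs noise.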
Model-based decision support systems (DSS) improve performance in many contexts that are data-rich, uncertain, and require repetitive decisions. But such DSS are often not designed to help users understand and internalize the underlying factors driving DSS recommendations. Users then feel uncertain about DSS recommendations, leading them to possibly avoid using the system. We argue that a DSS must be designed to induce an alignment of a decision maker's mental model with the decision model embedded in the DSS. Such an alignment requires effort from the decision maker and guidance from the DSS. We experimentally evaluate two DSS design characteristics that facilitate such alignment: (i) feedback on the upside potential for performance improvement and (ii) feedback on corrective actions to improve decisions. We show that, in tandem, these two types of DSS feedback induce decision makers to align their mental models with the decision model, a process we call deep learning, whereas individually these two types of feedback have little effect on deep learning. We also show that deep learning, in turn, improves user evaluations of the DSS. We discuss how our findings could lead to DSS design improvements and better returns on DSS investments.
We examine the influence of appeal scales on the likelihood and magnitude of donation in a large field experiment. We argue and show that the leftmost anchor on the appeal scale most strongly influences the likelihood of donating; the lower the anchor, the higher the donation likelihood. Furthermore, our findings indicate that increasing the steepness of the amounts on the appeal scale increases the magnitude of donations. Both effects are stronger for infrequent than for frequent donors. Our results demonstrate that by using what a charity knows about past donor behavior, it can alter appeal scales to change donation behavior.
In their purchase decisions, online customers seek to improve decision quality while limiting search efforts. In practice, many merchants have understood the importance of helping customers in the decision-making process and provide online decision aids to their visitors. In this paper, we show how preference models, which are common in conjoint analysis, can be leveraged to design a questionnaire-based decision aid that elicits customers' preferences based on simple demographics, product usage, and self-reported preference questions. Such a system can offer relevant recommendations quickly and with minimal customer input. We compare three algorithms (cluster classification, Bayesian treed regression, and stepwise componential regression) to develop an optimal sequence of questions and predict online visitors' preferences. In an empirical study, stepwise componential regression, relying on many fewer and easier-to-answer questions, achieved predictive accuracy equivalent to a traditional conjoint approach.
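The conjoint-style preference models such a decision aid builds on are additive part-worth models: a product profile's utility is the sum of part-worths for its attribute levels, and the recommendation is the profile with the highest predicted utility. The sketch below is an assumed, simplified illustration of that idea; the attribute names and part-worth values are hypothetical, not estimates from the paper.

```python
# Illustrative additive part-worth model (hypothetical values, not the paper's
# estimates): score each candidate profile and recommend the highest-utility one.
def utility(profile, partworths):
    """Additive conjoint utility: sum of part-worths over attribute levels."""
    return sum(partworths[attr][level] for attr, level in profile.items())

partworths = {  # hypothetical part-worths elicited for one customer
    "brand":  {"A": 0.8, "B": 0.2},
    "price":  {"low": 1.0, "high": -0.5},
    "screen": {"13in": 0.1, "15in": 0.4},
}
profiles = [
    {"brand": "A", "price": "high", "screen": "15in"},
    {"brand": "B", "price": "low",  "screen": "13in"},
]
best = max(profiles, key=lambda p: utility(p, partworths))
```

A questionnaire-based aid in this spirit would predict the part-worths themselves from a visitor's answers to a few demographic, usage, and preference questions, rather than from a full conjoint task.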