With the growing reliance on Stated Choice (SC) data, researchers are increasingly interested in understanding how respondents process the information presented to them in such surveys. Specifically, it has been argued that some respondents may simplify the choice tasks by consistently ignoring one or more of the attributes describing the alternatives, and direct questions put to respondents after the completion of SC surveys support this hypothesis. However, in the general context of issues with response quality in SC data, there are certainly grounds for questioning the reliability of stated attribute processing strategies. In this paper, we take a different approach by attempting to infer attribute processing strategies through the analysis of respondent-specific coefficient distributions obtained through conditioning on observed choices. Our results suggest that a share of respondents do indeed ignore a subset of explanatory variables. However, there is also some evidence that the inferred attribute processing strategies are not necessarily consistent with the stated attribute processing strategies. Additionally, there is some evidence that respondents who claim to have ignored a certain attribute may simply have assigned it lesser importance. The inferred strategies not only lead to a slightly better model fit but also to more consistent results.
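The conditioning step can be sketched as a likelihood-weighted average over draws from the estimated population distribution of a coefficient. Everything below (a binary logit, a normal cost coefficient, eight choice tasks) is an illustrative assumption rather than the paper's actual specification:

```python
import numpy as np

rng = np.random.default_rng(42)

# R draws from the (hypothetical) estimated population distribution of a
# single coefficient, e.g. a normally distributed cost coefficient
R = 5000
beta_draws = rng.normal(loc=-1.0, scale=0.8, size=R)

# one respondent's T binary choice tasks; x_diff holds the attribute
# difference between the chosen and the rejected alternative (made up)
T = 8
x_diff = rng.normal(size=T)

# binary logit probability of each observed choice, at every draw
probs = 1.0 / (1.0 + np.exp(-beta_draws[:, None] * x_diff[None, :]))
seq_lik = probs.prod(axis=1)  # likelihood of the full choice sequence

# respondent-specific conditional (posterior) mean of the coefficient:
# a likelihood-weighted average of the population draws
post_mean = (beta_draws * seq_lik).sum() / seq_lik.sum()
```

A respondent whose conditional distribution piles up near zero for a given attribute would then be a candidate for having ignored that attribute.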
Random coefficient models such as mixed logit are increasingly being used to allow for random heterogeneity in willingness to pay (WTP) measures. In the most commonly used specifications, the distribution of WTP for an attribute is derived from the distribution of the ratio of individual coefficients. Since the cost coefficient enters the denominator, its distribution plays a major role in the distribution of the WTP. Depending on the choice of distribution for the cost coefficient, and its implied range, the distribution of the WTP may or may not have finite moments. In this paper, we identify a criterion to determine whether, with a given distribution for the cost coefficient, the distribution of WTP has finite moments. Using this criterion, we show that some popular distributions used for the cost coefficient in random coefficient models, including normal, truncated normal, uniform and triangular, imply infinite moments for the distribution of WTP, even if truncated or bounded at zero. We also point out that relying on simulation approaches to obtain moments of WTP from the estimated distribution of the cost and attribute coefficients can mask the issue by giving finite moments when the true ones are infinite.
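The masking effect described in the last sentence is easy to reproduce. In this hypothetical sketch, both coefficients are normal, so the true mean of the WTP ratio does not exist (the cost coefficient has positive density at zero), yet every simulated mean is a finite number:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# hypothetical normally distributed coefficients; because the cost
# coefficient has positive density in a neighbourhood of zero, the ratio
# beta_time / beta_cost has no finite mean or variance
beta_time = rng.normal(-1.5, 0.5, size=N)
beta_cost = rng.normal(-1.0, 0.3, size=N)
wtp = beta_time / beta_cost

# every simulated mean is nevertheless a finite number, which is how
# simulation can mask the non-existence of the true moments
for n in (10_000, 100_000, 1_000_000):
    print(n, wtp[:n].mean())
```

Rerunning with a different seed or sample size gives a different, still finite, number: the simulated mean never settles, but it never signals that the underlying moment is infinite.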
Quasi-random number sequences have been used extensively for many years in the simulation of integrals that do not have a closed-form expression, such as Mixed Logit and Multinomial Probit choice probabilities. Halton sequences are one example of such quasi-random number sequences, and various types of Halton sequences, including standard, scrambled, and shuffled versions, have been proposed and tested in the context of travel demand modeling. In this paper, we propose an alternative to Halton sequences, based on an adapted version of Latin Hypercube Sampling. These alternative sequences, like scrambled and shuffled Halton sequences, avoid the undesirable correlation patterns that arise in standard Halton sequences. However, they are easier to create than scrambled or shuffled Halton sequences. They also provide more uniform coverage in each dimension than any of the Halton sequences. A detailed analysis, using a sixteen-dimensional Mixed Logit model for choice between alternative-fuelled vehicles in California, was conducted to compare the performance of the different types of draws. The analysis shows that, in this application, the Modified Latin Hypercube Sampling (MLHS) outperforms each type of Halton sequence. This greater accuracy combined with the greater simplicity makes the MLHS method an appealing approach for simulation of travel demand models and simulation-based models in general.
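A minimal sketch of MLHS draws, following the construction described above (an even grid in each dimension, a single random shift, then an independent shuffle per dimension so the dimensions are uncorrelated); the function name and interface are my own:

```python
import numpy as np

def mlhs(n_draws, n_dims, seed=None):
    """Modified Latin Hypercube Sampling draws on the unit hypercube.

    In each dimension: an evenly spaced grid i/n_draws, shifted by one
    uniform random offset in [0, 1/n_draws), then randomly shuffled so
    that the dimensions are not correlated with one another.
    """
    rng = np.random.default_rng(seed)
    draws = np.empty((n_draws, n_dims))
    base = np.arange(n_draws) / n_draws
    for d in range(n_dims):
        shifted = base + rng.uniform(0.0, 1.0 / n_draws)
        draws[:, d] = rng.permutation(shifted)
    return draws

# e.g. 1000 draws for a sixteen-dimensional integral
u = mlhs(1000, 16, seed=1)
```

By construction, the sorted draws in each dimension are exactly 1/N apart, which is the uniform-coverage property the abstract refers to.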
In this paper, we discuss some of the issues that arise with the computation of the implied value of travel-time savings in the case of discrete choice models allowing for random taste heterogeneity. We specifically look at the case of models producing a non-zero probability of positive travel-time coefficients, and discuss the consistency of such estimates with theories of rational economic behaviour. We then describe how the presence of unobserved travel-experience attributes or conjoint activities can bias the estimation of the travel-time coefficient, and can lead to false conclusions with regards to the existence of negative valuations of travel-time savings. We note that while it is important not to interpret such estimates as travel-time coefficients per se, it is nevertheless similarly important to allow such effects to manifest themselves; as such, the use of distributions with fixed bounds is inappropriate. On the other hand, the use of unbounded distributions can lead to further problems, as their shape (especially in the case of symmetrical distributions) can falsely imply the presence of positive estimates. We note that a preferable solution is to use bounded distributions where the bounds are estimated from the data during model calibration. This allows for the effects of data impurities or model misspecifications to manifest themselves, while reducing the risk of bias as a result of the shape of the distribution. To conclude, a brief application is conducted to support the theoretical claims made in the paper.
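One way to implement "bounded distributions where the bounds are estimated" is a Johnson S_B-type logistic transform of a normal variate, where the two bounds become parameters of the model. The sketch below uses made-up bound values purely for illustration, not the specification from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def bounded_coefficient(z, lower, upper):
    """Map an unbounded normal draw z onto (lower, upper) via a logistic
    transform (a Johnson S_B-type specification). In estimation, lower
    and upper would be calibrated alongside the location and scale of z.
    """
    return lower + (upper - lower) / (1.0 + np.exp(-z))

# hypothetical travel-time coefficient with estimated bounds (-4, 0.5):
# a small positive mass can manifest itself if the data call for it,
# without the unbounded tails of a symmetric distribution
z = rng.normal(0.0, 1.0, size=100_000)
beta_time = bounded_coefficient(z, lower=-4.0, upper=0.5)
share_positive = (beta_time > 0).mean()
```

If the estimated upper bound collapses towards zero during calibration, the data are saying there is no genuine positive mass; if it stays positive, the effects the paper discusses are allowed to show themselves.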
The community of choice modellers has expanded substantially over recent years, covering many disciplines and encompassing users with very different levels of econometric and computational skills. This paper presents an introduction to Apollo, a powerful new freeware package for R that aims to provide a comprehensive set of modelling tools for both new and experienced users. Apollo also incorporates numerous post-estimation tools, allows for both classical and Bayesian estimation, and permits advanced users to develop their own routines for new model structures.
There is growing interest in the use of models that recognise the role of individuals' attitudes and perceptions in choice behaviour. Rather than relying on simple linear approaches or a potentially bias-inducing deterministic approach based on incorporating stated attitudinal indicators directly in the choice model, researchers have recently recognised the latent nature of attitudes. The uptake of such latent attitude models in applied work has however been slow, while a number of overly simplistic assumptions are also commonly made. In this paper, we present an application of jointly estimated attitudinal and choice models to a real world transport study, looking at the role of latent attitudes in a rail travel context. Our results show the impact that concern with privacy, liberty and security, and distrust of business, technology and authority have on the desire for rail travel in the face of increased security measures, as well as for universal security checks. Alongside demonstrating the applicability of the model in applied work, we also address a number of theoretical issues.
We first show the equivalence of two different normalisations discussed in the literature. Second, unlike many other latent attitude studies, we explicitly recognise the repeated choice nature of the data. Finally, the main methodological contribution comes in replacing the typically used continuous model for attitudinal responses with an ordered logit structure, which more correctly accounts for the ordinal nature of the indicators.
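The ordered logit measurement model can be sketched as differences of logistic CDFs evaluated at estimated thresholds; the attitude value and thresholds below are made up for illustration:

```python
import numpy as np

def ordered_logit_probs(alpha, thresholds):
    """Probabilities of each ordinal indicator level, given a latent
    attitude alpha and strictly increasing thresholds tau_1 < ... <
    tau_{K-1}; level k has probability F(tau_k - alpha) - F(tau_{k-1} -
    alpha), with F the logistic CDF."""
    tau = np.concatenate(([-np.inf], thresholds, [np.inf]))
    cdf = 1.0 / (1.0 + np.exp(-(tau - alpha)))
    return np.diff(cdf)

# hypothetical 5-point Likert indicator with four fitted thresholds
p = ordered_logit_probs(alpha=0.3, thresholds=[-2.0, -0.7, 0.8, 2.1])
```

This respects that the indicator levels are ordered but not equally spaced, which a continuous (linear) measurement model would ignore.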
This paper examines sources of correlation among utility coefficients in models allowing for random heterogeneity, including correlation that is induced by random scale heterogeneity. We distinguish the capabilities and limitations of various models, including mixed logit, generalized multinomial logit (G-MNL), latent class, and scale-adjusted latent class. We demonstrate that (i) mixed logit allows for all forms of correlation, including scale heterogeneity, (ii) G-MNL is a restricted form of mixed logit that, with an appropriate implementation, can allow for scale heterogeneity but (in its typical form) not other sources of correlation, (iii) none of the models disentangles scale heterogeneity from other sources of correlation, and (iv) models that assume that the only source of correlation is scale heterogeneity necessarily capture, in the estimated scale parameter, whatever other sources of correlation exist.
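The way scale heterogeneity induces correlation among utility coefficients can be seen in a two-line simulation: if each individual's coefficient vector is a common random scale times a fixed vector, the resulting coefficients are perfectly correlated across individuals. The numbers below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

b = np.array([-1.0, -2.0, 0.5])          # fixed mean coefficients
scale = np.exp(rng.normal(0.0, 0.5, N))  # lognormal scale heterogeneity
betas = scale[:, None] * b               # individual coefficient vectors

# pure scale heterogeneity makes every pair of coefficients perfectly
# (positively or negatively) correlated across individuals
corr = np.corrcoef(betas, rowvar=False)
```

A model that attributes all such correlation to scale will therefore absorb, into its estimated scale parameter, any other correlation present in the data, which is the point made in (iv).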
The study of respondent heterogeneity is one of the main areas of research in the field of choice modelling. The general emphasis is on variations across respondents in relative taste parameters while maintaining the assumption of homogeneous utility maximising decision rules. While recent work has allowed for differences in the utility specification across respondents in the context of looking at heterogeneous information processing strategies, the underlying assumption that all respondents employ the same choice paradigm remains. This is despite evidence in the literature that different paradigms work differently well on given datasets. In this paper, we argue that such differences may in fact extend to respondents within a single dataset. We accommodate these differences in a latent class model, where individual classes make use of different underlying paradigms. We present four applications using three different datasets, showing mixtures between "standard" random utility maximisation models and lexicography-based models, models with multiple reference points, elimination-by-aspects models and random regret minimisation models. In each of the case studies, the behavioural mixing model obtains significant gains in fit over the base structure where all respondents are hypothesised to use the same rule. The findings offer important further insights into the behavioural patterns of respondents. There is also evidence that what is retrieved as taste heterogeneity in standard models may in fact be heterogeneity in decision rules.
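A sketch of the latent class mixture over decision rules, combining a standard RUM logit class with a random regret minimisation class; the attribute values, coefficients and class allocation probability are hypothetical:

```python
import numpy as np

def rum_prob(x, beta):
    """Standard logit choice probabilities (utility maximisation)."""
    v = x @ beta
    e = np.exp(v - v.max())
    return e / e.sum()

def rrm_prob(x, beta):
    """Random regret minimisation: the regret of alternative i sums
    log(1 + exp(beta * (x_j - x_i))) over competing alternatives j and
    attributes; the least-regret alternative is most likely chosen."""
    J = x.shape[0]
    regret = np.zeros(J)
    for i in range(J):
        for j in range(J):
            if j != i:
                regret[i] += np.log1p(np.exp(beta * (x[j] - x[i]))).sum()
    e = np.exp(-(regret - regret.min()))
    return e / e.sum()

# hypothetical choice task: 3 alternatives described by 2 attributes
x = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]])
beta = np.array([-0.5, -1.0])
pi = 0.6  # class allocation probability for the RUM class

# latent class choice probabilities: a pi-weighted mixture of the rules
p = pi * rum_prob(x, beta) + (1 - pi) * rrm_prob(x, beta)
```

In estimation, the class allocation probability and the class-specific coefficients would be estimated jointly, with each respondent's full choice sequence entering the class-specific likelihoods.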