This paper presents a definitive description of neural network methodology and provides an evaluation of its advantages and disadvantages relative to statistical procedures. The development of this rich class of models was inspired by the neural architecture of the human brain. These models mathematically emulate the neurophysiological structure and decision making of the human brain and, from a statistical perspective, are closely related to generalized linear models. Artificial neural networks are, however, nonlinear and use a different estimation procedure (feedforward computation and backpropagation) than traditional statistical models (least squares or maximum likelihood). Additionally, neural network models do not require the same restrictive assumptions about the relationship between the independent variables and the dependent variable(s). Consequently, these models have already been applied very successfully in many diverse disciplines, including biology, psychology, statistics, mathematics, business, insurance, and computer science. We propose that neural networks will prove to be a valuable tool for marketers concerned with predicting consumer choice, and we demonstrate that they provide superior predictions of consumer decision processes. In the context of modeling consumer judgment and decision making, for example, neural network models can offer significant improvement over traditional statistical methods because of their ability to capture the nonlinear relationships associated with noncompensatory decision rules. Our analysis reveals that neural networks have great potential for improving model predictions in nonlinear decision contexts without sacrificing performance in linear decision contexts. This paper provides a detailed introduction to neural networks that is understandable to both the academic researcher and the practitioner.
This exposition is intended to provide both the intuition and the rigorous mathematical models needed for successful applications. In particular, a step-by-step outline of how to use the models is provided, along with a discussion of their strengths and weaknesses. We also address the robustness of neural network models and discuss how far wrong one might go using neural networks versus traditional statistical methods. Herein we report the results of two studies. The first is a numerical simulation comparing the ability of neural networks, discriminant analysis, and logistic regression to predict choices made under decision rules that vary in complexity. This includes simulations involving two noncompensatory decision rules and one compensatory decision rule that involves attribute thresholds. In particular, we test a variant of the satisficing rule used by Johnson et al. (Johnson, Eric J., Robert J. Meyer, Sanjoy Ghose. 1989. When choice models fail: Compensatory models in negatively correlated environments. (August) 255–270.) that sets a lower-bound threshold on all attribute values, and a “latitude of acceptance” model that s...
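The contrast between compensatory (linear) and noncompensatory (threshold-based) decision rules can be illustrated with a small simulation. The sketch below assumes nothing about the paper's actual design: the sample size, the 0.4 attribute thresholds, and the network architecture are all invented for illustration. It generates choices from a conjunctive satisficing rule and compares logistic regression against a one-hidden-layer network trained by backpropagation:

```python
# Illustrative comparison: logistic regression vs. a small neural network
# on choices generated by a noncompensatory (conjunctive/satisficing) rule.
import numpy as np

rng = np.random.default_rng(0)

# Two attributes in [0, 1]; the satisficing rule accepts an alternative
# only if BOTH attributes clear a lower-bound threshold of 0.4.
X = rng.uniform(0.0, 1.0, size=(1000, 2))
y = ((X[:, 0] > 0.4) & (X[:, 1] > 0.4)).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Logistic regression (compensatory: linear in the attributes) ---
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= X.T @ (p - y) / len(y)      # full-batch gradient step, lr = 1
    b -= np.mean(p - y)
acc_logit = np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5))

# --- One-hidden-layer network trained by backpropagation ---
H = 8
W1, b1 = rng.normal(0, 1, (2, H)), np.zeros(H)
W2, b2 = rng.normal(0, 1, (H, 1)), np.zeros(1)
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)                  # forward pass
    p = sigmoid(h @ W2 + b2).ravel()
    d_out = (p - y)[:, None] / len(y)         # backward pass (cross-entropy)
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = d_out @ W2.T * (1 - h**2)
    dW1, db1 = X.T @ d_h, d_h.sum(0)
    for param, grad in ((W2, dW2), (b2, db2), (W1, dW1), (b1, db1)):
        param -= grad                          # in-place update, lr = 1
acc_net = np.mean(
    (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5) == (y > 0.5))

print(f"logistic regression accuracy: {acc_logit:.3f}")
print(f"neural network accuracy:      {acc_net:.3f}")
```

Because the satisficing rule accepts an alternative only when every attribute clears its threshold, the positive region is a corner of the attribute space that no single linear boundary can carve out, which is why the network's accuracy typically exceeds the logistic model's on data of this kind.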
This paper presents an analytical approach, based on rank statistics, to the issue of comparing programs within the Data Envelopment Analysis (DEA) efficiency evaluation framework. The program evaluation procedure distinguishes between managerial and programmatic inefficiency and uses the Mann-Whitney rank statistic to evaluate the statistical significance of the differences observed between a treatment program and its control-group program after adjusting for differences in managerial efficiency between the programs. A numerical example, based on the data used to evaluate the educational enhancement of Program Follow Through, is used to illustrate the proposed statistical procedures.
Keywords: Data Envelopment Analysis, ordinal rank statistics, efficiency comparisons
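The Mann-Whitney statistic used in the procedure above can be computed directly from pooled ranks. The sketch below uses invented efficiency scores (not the Program Follow Through data) and a plain normal approximation without a tie correction:

```python
# Illustrative Mann-Whitney rank test on two groups of DEA efficiency scores.
import math

def mann_whitney_u(a, b):
    """Return (U, z): the U statistic for sample `a` and its normal
    approximation z-score; midranks handle ties in the rank sums."""
    combined = sorted(a + b)
    def midrank(v):
        lo = combined.index(v) + 1          # smallest rank of value v
        hi = lo + combined.count(v) - 1     # largest rank of value v
        return (lo + hi) / 2.0
    r_a = sum(midrank(v) for v in a)        # rank sum of sample a
    n_a, n_b = len(a), len(b)
    u = r_a - n_a * (n_a + 1) / 2.0         # U statistic
    mu = n_a * n_b / 2.0                    # mean of U under H0
    sigma = math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12.0)  # no tie correction
    return u, (u - mu) / sigma

treatment = [0.92, 0.88, 0.95, 0.79, 0.91]  # efficiency scores, program A
control   = [0.71, 0.83, 0.68, 0.75, 0.80]  # efficiency scores, program B
u, z = mann_whitney_u(treatment, control)
print(f"U = {u}, z = {z:.2f}")
```

A large positive z here would indicate that the treatment program's efficiency scores stochastically dominate the control group's, which is the kind of comparison the procedure formalizes after the managerial-efficiency adjustment.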
This article introduces to the statistical and insurance literature a mathematical technique for an a priori classification of objects when no training sample exists for which the exact correct group membership is known. The article also provides an example of the empirical application of the methodology to fraud detection for bodily injury claims in automobile insurance. With this technique, principal component analysis of RIDIT scores (PRIDIT), an insurance fraud detector can reduce uncertainty and increase the chances of targeting the appropriate claims, so that an organization will be more likely to allocate investigative resources efficiently to uncover insurance fraud. In addition, other (exogenous) empirical models can be validated relative to the PRIDIT-derived weights for optimal ranking of fraud/nonfraud claims and/or profiling. The technique at once gives measures of the individual fraud indicator variables’ worth and a measure of suspicion level for the entire claim file that can be used to cogently direct further fraud investigation resources. Moreover, the technique does so at a lower cost than utilizing human insurance investigators or insurance adjusters, but with similar outcomes. More generally, this technique is applicable to other commonly encountered managerial settings in which a large number of assignment decisions are made subjectively based on “clues,” which may change dramatically over time. This article explores the application of these techniques to automobile bodily injury insurance claims in detail.
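A minimal sketch of the PRIDIT idea, using made-up claim data: each ordinal indicator response is replaced by a RIDIT-type score (the proportion of claims in lower categories minus the proportion in higher ones), and the first principal component of the scored matrix supplies both the indicator weights and a per-claim suspicion score. The data, dimensions, and variable names below are illustrative assumptions, not the article's claim database:

```python
# Illustrative PRIDIT: RIDIT-score ordinal fraud indicators, then take the
# first principal component of the scored matrix via power iteration.
import numpy as np

# 8 claims x 3 binary fraud indicators (1 = clue present).
F = np.array([
    [1, 1, 0],
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
    [1, 1, 1],
    [0, 0, 1],
], dtype=float)

def ridit_scores(col):
    """RIDIT-type score for each response: P(lower categories) - P(higher),
    so each column is centered and scaled to [-1, 1] by its own distribution."""
    scored = np.empty_like(col)
    for v in np.unique(col):
        scored[col == v] = np.mean(col < v) - np.mean(col > v)
    return scored

B = np.column_stack([ridit_scores(F[:, j]) for j in range(F.shape[1])])

# First principal component of B by power iteration on B^T B.
w = np.ones(B.shape[1])
for _ in range(100):
    w = B.T @ (B @ w)
    w /= np.linalg.norm(w)

scores = B @ w                       # suspicion score per claim file
print("indicator weights:", np.round(w, 3))
print("claim scores:", np.round(scores, 3))
```

Claims whose indicator pattern agrees with the dominant direction of co-variation among the clues (here, rows with every clue present) receive the highest scores, while the weight vector shows how much each indicator contributes to that ranking, without any labeled training sample.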
Claims fraud is an increasingly vexing problem confronting the insurance industry. In this empirical study, we apply Kohonen's Self-Organizing Feature Map to classify automobile bodily injury (BI) claims by the degree of fraud suspicion. Feedforward neural networks trained with a backpropagation algorithm are used to investigate the validity of the Feature Map approach. Comparative experiments illustrate the potential usefulness of the proposed methodology. We show that this technique performs better than both an insurance adjuster's and an insurance investigator's fraud assessment with respect to consistency and reliability.
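The core of a Self-Organizing Feature Map is a grid of units whose weight vectors are pulled toward the inputs, with a shrinking neighborhood so that nearby units come to represent similar claims. The sketch below is a toy version: the two "suspicion" clusters, the 1-D map of four units, and the learning schedule are all invented for illustration and are not the study's configuration:

```python
# Illustrative Kohonen Self-Organizing Feature Map on toy claim features.
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 2-D claim feature vectors from two clusters
# (e.g., low-suspicion vs. high-suspicion claims).
low  = rng.normal([0.2, 0.2], 0.05, size=(50, 2))
high = rng.normal([0.8, 0.8], 0.05, size=(50, 2))
X = np.vstack([low, high])

# A 1-D map of 4 units; each unit owns a weight vector in input space.
n_units = 4
W = rng.uniform(0, 1, size=(n_units, 2))
positions = np.arange(n_units)

n_epochs = 30
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)                 # decaying learning rate
    radius = max(1.0 * (1 - epoch / n_epochs), 0.5)   # shrinking neighborhood
    for x in rng.permutation(X):
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching unit
        # Gaussian neighborhood pulls units near the BMU toward the input too.
        h = np.exp(-((positions - bmu) ** 2) / (2 * radius ** 2))
        W += lr * h[:, None] * (x - W)

# After training, different units should specialize in the two clusters,
# ordering claims along a low-to-high suspicion axis.
bmu_low  = np.argmin(np.linalg.norm(W - low.mean(0), axis=1))
bmu_high = np.argmin(np.linalg.norm(W - high.mean(0), axis=1))
print("unit for low cluster:", bmu_low, "unit for high cluster:", bmu_high)
```

Because the map is trained without labels, the assignment of claims to units is an unsupervised grouping by suspicion pattern; a supervised network (as in the study's validation step) can then be used to check how well those groupings predict independently assessed fraud.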
This paper presents a class of utility functions (class A∞) that contains all of the utility functions commonly used for mathematical modeling: the class consisting of those utility functions whose derivatives alternate in sign. A simple representation via mixtures of exponential utilities is provided for this class, which is both mathematically convenient and conducive to functional operations. A connection with Laplace transforms, and the resultant implications for preference relations, aggregation, and utility assessment, are discussed.
Keywords: mixed exponential utilities, completely monotone utilities, utility function estimation, characterization of utilities
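The alternating-signs condition can be made concrete in standard notation (assumed here, not copied from the paper): if u′ ≥ 0, u″ ≤ 0, u‴ ≥ 0, and so on, then the marginal utility u′ is completely monotone, and Bernstein's theorem represents it as a Laplace transform of a nonnegative measure, which integrates to a mixture-of-exponentials form:

```latex
% Alternating derivatives make u' completely monotone:
%   (-1)^n \frac{d^n u'(x)}{dx^n} \ge 0 \quad \text{for } n = 0, 1, 2, \ldots
% By Bernstein's theorem there is a nonnegative measure F on [0, \infty) with
u'(x) = \int_0^{\infty} e^{-rx}\, dF(r)
\quad\Longrightarrow\quad
u(x) = u(0) + \int_0^{\infty} \frac{1 - e^{-rx}}{r}\, dF(r),
% i.e., u is a mixture of exponential utilities (1 - e^{-rx})/r, with the
% r = 0 term read as its limit x (a linear, risk-neutral component).
```

This is the sense in which the Laplace transform connection makes functional operations convenient: assessing or aggregating utilities in this class reduces to working with the mixing measure F.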