In this paper, we review and apply several approaches to model selection for analysis of variance models used in a credibility and insurance context. The reversible jump algorithm is employed for model selection, where posterior model probabilities are computed. We then apply this method to insurance data from workers' compensation insurance schemes. The reversible jump results are compared with the Deviance Information Criterion, and are shown to be consistent.

For a general review of Bayesian model averaging, see Clyde (1999) and Hoeting et al. (1999). Since we are interested only in predicting future unknown values, averaging might be more appropriate than selecting a single model. However, when the set of candidate models M is not exhaustive, we might not be able to average over all possible models, and in that context placing a prior distribution on M does not apply.

The second alternative is the so-called M-completed view, which simply seeks to compare a set of models which are available at that time. In this case M = {M_i} simply constitutes a range of specified models to be compared. From this perspective, assigning the probabilities {P(M_i), M_i ∈ M} does not make sense, and the actual overall model specifies beliefs for R of the form p(R) = p(R | M_t). Typically, the {M_i} will have been proposed largely because they are attractive from the point of view of tractability of analysis or communication of results, compared with the actual belief model M_t.

The third alternative is the M-open view. In an M-open system it is assumed that none of the models being considered is the true model which generated the observations. In this case, our goal is to select some model or subset of models which best describes the data. For the M-completed and M-open views, assigning prior probabilities on the model space M is inappropriate, since statements like p(M_k) = c do not make sense.
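To make the averaging idea concrete, the following is a minimal sketch of posterior model probabilities under the view in which M is exhaustive: p(M_i | R) ∝ p(R | M_i) p(M_i), followed by a model-averaged prediction. The marginal likelihoods, prior, and per-model predictions below are hypothetical placeholders, not values from the paper's insurance data.

```python
import numpy as np

# Hypothetical log marginal likelihoods log p(R | M_i) for three candidate models.
log_marginal = np.array([-104.2, -101.7, -103.0])

# Uniform prior p(M_i) = 1/3 over the candidate set M.
log_prior = np.log(np.ones(3) / 3)

# Posterior model probabilities p(M_i | R) proportional to p(R | M_i) p(M_i),
# normalised on the log scale for numerical stability.
log_post = log_marginal + log_prior
log_post -= log_post.max()
post = np.exp(log_post)
post /= post.sum()

# Model-averaged prediction: weight each model's predictive mean by p(M_i | R),
# rather than committing to the single highest-probability model.
pred_means = np.array([12.1, 11.4, 11.8])  # hypothetical per-model predictions
bma_prediction = post @ pred_means
```

Working on the log scale and subtracting the maximum before exponentiating avoids underflow when marginal likelihoods are small, which is the usual situation in practice.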
However, in the M-open case, there is no separate overall belief specification.

3 Decision Theoretic Approach

Key et al. (1999) argue that any criterion for model comparison should depend on the decision context in which the comparison is taking place, as well as the perspective from which the models are viewed. In particular, an appropriate utility structure is required, making explicit those aspects of the performance of the model that are most important. Using a decision theoretic approach, we can assign utilities to the choice of model M_i, u(M_i, γ), where γ is some unknown of interest. The general decision problem is then to choose the optimal model, M*, by maximising expected utilities,

M* = arg max_{M_i ∈ M} ∫ u(M_i, γ) π(γ | R) dγ,   (1)

with π(γ | R) in Equation (1) representing actual beliefs about γ after observing R. Spiegelhalter et al. (2002) propose their deviance information criterion, DIC, as an alternative to Bayes factors. In Spiegelhalter et al. (2002), the DIC is developed to address how well the posterior might predict future data generated by the same mechanism that gave rise to the observed data. Our motivation is that likelihood ratio tests cannot be u...
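The expected-utility maximisation in Equation (1) can be sketched by Monte Carlo: draw γ from the posterior π(γ | R), evaluate u(M_i, γ) at each draw, and average. The sketch below assumes a simple negative squared-error utility and replaces genuine MCMC output (e.g., from a reversible jump sampler) with normal draws; the models, predictions, and utility are all hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in posterior draws of the unknown of interest gamma under each model.
# In practice these would be MCMC output approximating pi(gamma | R).
posterior_draws = {
    "M1": rng.normal(loc=10.0, scale=2.0, size=5000),
    "M2": rng.normal(loc=10.5, scale=1.0, size=5000),
}

# Hypothetical point predictions attached to each model.
point_prediction = {"M1": 10.2, "M2": 10.4}

def expected_utility(model):
    """Monte Carlo estimate of the integral of u(M_i, gamma) pi(gamma | R) d gamma,
    with u(M_i, gamma) = -(prediction - gamma)^2 (larger is better)."""
    gamma = posterior_draws[model]
    utilities = -(point_prediction[model] - gamma) ** 2
    return utilities.mean()

# Choose M* by maximising the estimated expected utility over the candidate set.
best = max(posterior_draws, key=expected_utility)
```

Under squared-error utility the expectation decomposes into -(bias^2 + posterior variance), so the tighter model M2 wins here even though both predictions are close to their posterior means.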