We tested whether the unequal-variance signal-detection (UVSD) and dual-process signal-detection (DPSD) models of recognition memory mimic each other's behavior when applied to individual data. Replicating previous results, there was no mimicry for an analysis that fit each individual, summed the goodness-of-fit values over individuals, and compared the two sums (i.e., a single model selection). However, when the models were compared separately for each individual (i.e., multiple model selections), mimicry was substantial. To quantify the diagnosticity of the individual data, we used mimicry to calculate the probability of making a model selection error for each individual. For nondiagnostic data (high model selection error), the results were compatible with equal-variance signal-detection theory. Although neither model was justified in this situation, a forced choice between the UVSD and DPSD models favored the DPSD model for being less flexible. For diagnostic data (low model selection error), the UVSD model was selected more often.

Keywords: Model mimicry · Model flexibility · Recognition memory · Unequal-variance signal-detection model · Dual-process signal-detection model

When comparing models on the basis of goodness of fit (GOF), model flexibility (or complexity) is an important issue to address. Model flexibility refers to the ability of a model to capture any data pattern (Myung, 2000). A useful concept for understanding model flexibility is the response surface (Bates & Watts, 1988), a plot of all possible results that a model can explain (a region in the data space). A more flexible model covers a larger proportion of the data space, which means that it is harder to find data that reject it, relative to all possible alternative models. Often, however, researchers want to compare two leading candidate models directly. In that case, model mimicry is the important consideration: the relevant quantity is the region of overlap between the two models (the region in which they mimic each other), as compared to the region of the data space that is unique to each model (Wagenmakers, Ratcliff, Gomez, & Iverson, 2004). In contrast to absolute flexibility, model mimicry can be thought of as a measure of relative flexibility for the comparison of two particular models.

Another important issue to consider when comparing two models is whether to analyze data at the individual or the group level. Cohen, Sanborn, and Shiffrin (2008) conducted model mimicry simulations fitting both individual and group data in order to determine which method was more effective at recovering the true underlying model (see also Cohen, Rotello, & Macmillan, 2008). They found that model selection based on the sum of GOF values was more accurate when the data of each individual were fit separately, provided that there were sufficiently many observations per individual; otherwise, model selection was superior when based on group data. The experiments we analyzed collected at least 140 observations...
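To make the two selection procedures concrete, the following is a minimal sketch (ours, not the authors' code) contrasting a single model selection based on summed GOF with multiple per-individual model selections. The per-individual GOF values are simulated stand-ins; in the actual analyses, they would be -2 log-likelihoods from fitting the UVSD and DPSD models to each individual's data.

    # Minimal sketch: single (summed-GOF) vs. multiple (per-individual)
    # model selection. GOF values are hypothetical placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_individuals = 50

    # Hypothetical GOF values (lower = better fit), one pair per individual.
    gof_uvsd = rng.normal(loc=100.0, scale=5.0, size=n_individuals)
    gof_dpsd = gof_uvsd + rng.normal(loc=0.5, scale=2.0, size=n_individuals)

    # (a) Single model selection: sum GOF over individuals, compare the sums.
    summed_winner = "UVSD" if gof_uvsd.sum() < gof_dpsd.sum() else "DPSD"
    print(f"Summed-GOF selection: {summed_winner}")

    # (b) Multiple model selections: compare the models for each individual.
    uvsd_wins = int(np.sum(gof_uvsd < gof_dpsd))
    print(f"Per-individual selections: UVSD {uvsd_wins}, "
          f"DPSD {n_individuals - uvsd_wins}")

The contrast matters because the summed comparison yields one decision for the whole sample, whereas the per-individual comparison yields a separate decision for each individual, which is where mimicry becomes substantial.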
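The per-individual model selection error can be estimated by parametric-bootstrap cross-fitting in the spirit of Wagenmakers, Ratcliff, Gomez, and Iverson (2004). The sketch below substitutes two toy one-parameter models (exponential vs. half-normal) for UVSD/DPSD so that the maximum-likelihood fits stay in closed form; the logic (simulate from each model, fit both, and read the selection error off the overlap of the GOF-difference distributions) is the general technique, while the specific models and sample sizes are illustrative assumptions.

    # Parametric-bootstrap cross-fitting sketch with toy stand-in models.
    import numpy as np

    rng = np.random.default_rng(1)

    def loglik_expon(x):
        lam = 1.0 / x.mean()                  # MLE for the exponential rate
        return np.sum(np.log(lam) - lam * x)

    def loglik_halfnorm(x):
        s2 = np.mean(x ** 2)                  # MLE for the half-normal scale^2
        return np.sum(0.5 * np.log(2 / (np.pi * s2)) - x ** 2 / (2 * s2))

    def delta_gof(x):
        # Positive values favor the exponential model.
        return loglik_expon(x) - loglik_halfnorm(x)

    n_obs, n_boot = 140, 1000                 # 140 matches the per-individual minimum above
    from_expon = [delta_gof(rng.exponential(1.0, n_obs)) for _ in range(n_boot)]
    from_hnorm = [delta_gof(np.abs(rng.normal(0.0, 1.0, n_obs))) for _ in range(n_boot)]

    # Model selection error at the criterion delta = 0: how often each
    # generating model is mimicked well enough to lose the comparison.
    err_expon = np.mean(np.array(from_expon) < 0)  # exponential data, half-normal wins
    err_hnorm = np.mean(np.array(from_hnorm) > 0)  # half-normal data, exponential wins
    print(f"P(selection error | exponential): {err_expon:.3f}")
    print(f"P(selection error | half-normal): {err_hnorm:.3f}")

In this framing, an individual whose cross-fitting distributions overlap heavily yields a high model selection error (nondiagnostic data), whereas well-separated distributions yield a low error (diagnostic data).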