“…However, because the number of trials in experiment 1 was not sufficient for the non-musicians to reach an optimal level of performance, a second experiment was performed, involving protracted training in eight additional non-musician listeners using one of the stimulus conditions from experiment 1. Although several earlier studies have documented long-term training effects on frequency discrimination in non-musicians (Amitay et al., 2005; Ari-Even Roth et al., 2003, 2004; Campbell and Small, 1963; Delhommeau et al., 2002, 2005; Demany, 1985; Demany and Semal, 2002; Grimault et al., 2002, 2003; Irvine et al., 2000; Wright and Fitzgerald, 2005), we reasoned that documenting learning effects in non-musicians using the same stimuli and test procedure as in one of the conditions¹

¹ The notion that the listeners' initial lack of familiarity with the procedure and/or stimuli could lead to an under-estimation of the actual difference in sensory discrimination abilities between musicians and non-musicians can be understood in terms of a signal-detection-theoretic model (Green and Swets, 1966) in which frequency discrimination performance is limited by two types of additive internal noise: “sensory” noise, which imposes an absolute upper limit on frequency discrimination abilities and is smaller in musicians than in non-musicians, and “cognitive” noise, which reflects the listeners' lack of familiarity with the specifics of the procedure and stimuli, and is the same for musicians and non-musicians.
Under this model, the mean frequency discrimination threshold of the musicians can be expressed as θ_m ∝ √(s_m² + c²), and that of the non-musicians as θ_n ∝ √(s_n² + c²), where s² and c² denote the variances of the sensory and cognitive noises, respectively, the subscripts m and n refer to musicians and non-musicians, respectively, and ∝ indicates a proportionality relationship.…”
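The footnote's argument — that shared cognitive noise compresses the measurable difference between the two groups — can be illustrated with a minimal numerical sketch of the model. All variance values below are hypothetical, chosen only to show the direction of the effect; they are not estimates from the study.

```python
import math

# Hypothetical sensory-noise variances (musicians assumed smaller).
s2_musician = 1.0
s2_nonmusician = 4.0

def threshold_ratio(c2):
    """Non-musician / musician threshold ratio under theta ∝ sqrt(s² + c²),
    where c2 is the cognitive-noise variance shared by both groups."""
    return math.sqrt((s2_nonmusician + c2) / (s2_musician + c2))

# With no cognitive noise, the ratio reflects the full sensory difference;
# as cognitive noise grows, the measured group difference shrinks.
print(threshold_ratio(0.0))   # sqrt(4/1) = 2.0
print(threshold_ratio(5.0))   # sqrt(9/6) ≈ 1.22
```

This is why unfamiliarity with the task early in testing would lead to an under-estimation of the true sensory difference: the common c² term dominates both thresholds until training reduces it.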