Normal auditory perception relies on accurate judgments about the temporal relationships between sounds. Previously, we used a perceptual-learning paradigm to investigate the neural substrates of two such relative-timing judgments made at sound onset: detecting stimulus asynchrony and discriminating stimulus order. Here, we conducted parallel experiments at sound offset. Human adults practiced ∼1 h/d for 6-8 d on either asynchrony detection or order discrimination at sound offset with tones at 0.25 and 4.0 kHz. As at sound onset, learning on order-offset discrimination did not generalize to the other task (asynchrony), an untrained temporal position (onset), or untrained frequency pairs, indicating that this training affected a quite specialized neural circuit. In contrast, learning on asynchrony-offset detection generalized to the other task (order) and temporal position (onset), though not to untrained frequency pairs, implying that the training on this condition influenced a less specialized, or more interdependent, circuit. Finally, the learning patterns induced by single-session exposure to asynchrony and order tasks differed depending on whether these tasks were performed primarily at sound onset or offset, suggesting that this exposure modified circuitry specialized to separately process relative-timing tasks at these two temporal positions. Overall, it appears that the neural processes underlying relative-timing judgments are malleable, and that the nature of the affected circuitry depends on the duration of exposure (multihour or single-session) and the parameters of the judgment(s) made during that exposure.

Accurately determining the temporal relationships between sounds is critical for normal auditory perception. Two auditory tasks that rely on such relative-timing judgments are asynchrony detection and order discrimination.
In an asynchrony-detection task, the listener determines whether a sound's frequency components are synchronous or asynchronous, and in an order-discrimination task, the listener distinguishes the order of the component frequencies. Asynchrony judgments aid in the separation of sound sources (Bregman et al. 1994), while order judgments are used in the processing of speech ("mats" vs. "mast") and music (ascending vs. descending scales). In the present investigation, we use a behavioral perceptual-learning paradigm to gain insight into the neural circuitry underlying performance on auditory asynchrony and order tasks at sound offset, for comparison with the results of a previous examination of learning on the same tasks at sound onset.

We previously reported that training on auditory relative-timing tasks at sound onset resulted in learning that did not generalize to any of a set of untrained conditions, suggesting that the underlying circuitry is highly specialized (Mossbridge et al. 2006). In that investigation, we trained one group of listeners on an asynchrony task (Fig. 1A) and one group on an order task (Fig. 1B), both at sound onset with the same frequency pair (0.25 and 4.0 kHz).