2018
DOI: 10.3758/s13428-017-1012-5

TISK 1.0: An easy-to-use Python implementation of the time-invariant string kernel model of spoken word recognition

Abstract: This article describes a new Python distribution of TISK, the time-invariant string kernel model of spoken word recognition (Hannagan et al. in Frontiers in Psychology, 4, 563, 2013). TISK is an interactive-activation model similar to the TRACE model (McClelland & Elman in Cognitive Psychology, 18, 1-86, 1986), but TISK replaces most of TRACE's reduplicated, time-specific nodes with theoretically motivated time-invariant, open-diphone nodes. We discuss the utility of computational models as theory development …

Cited by 16 publications (13 citation statements)
References 22 publications
“…There is, however, some evidence in favor of a more flexible phoneme‐order encoding from studies showing that words that can be generated by adding or deleting a phoneme in the target word also influence target word recognition (Dufour & Frauenfelder, ; Luce & Pisoni, ; Vitevitch & Luce, ; see also Allopenna et al, ; Connine, Blasko, & Titone, ). Evidence for such flexibility is in line with more recent accounts of spoken word recognition, such as the TISK model of Hannagan, Magnuson, and Grainger (; see You & Magnuson, , for a more recent implementation). Such flexibility is achieved in TISK via open diphone units that represent ordered sequences of contiguous and non‐contiguous phonemes.…”
Section: Introduction (supporting, confidence: 79%)
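The open-diphone encoding described in the quote above can be sketched in a few lines of Python: each word is represented by every ordered pair of its phonemes, contiguous or not, which is what makes the code tolerant of phoneme additions and deletions. This is an illustrative sketch only; the function name and string representation are hypothetical and do not reflect the actual TISK 1.0 API.

```python
from itertools import combinations

def open_diphones(phonemes):
    """Return all ordered phoneme pairs, contiguous and non-contiguous.

    Illustrative sketch of an open-diphone code (hypothetical helper,
    not the TISK 1.0 API). For /kat/ the pairs are ka, kt, at: the
    non-contiguous pair kt is what gives the code its order flexibility.
    """
    return [a + b for a, b in combinations(phonemes, 2)]

# Deleting a phoneme preserves a subset of the original pairs,
# so /kat/ and /at/ still share the diphone "at".
print(open_diphones(["k", "a", "t"]))  # ['ka', 'kt', 'at']
print(open_diphones(["a", "t"]))       # ['at']
```

Because a word and its one-phoneme-different neighbor share most of their open diphones, lexical nodes for both receive overlapping bottom-up support, which is one way to read the behavioral evidence cited above.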
“…the rime) contribute to spoken word recognition. Note that the observation that the rime contributes to pre-lexical processing is fully compatible with a recent model of spoken word recognition, the TISK model (Hannagan et al, 2013; You & Magnuson, 2018), which incorporates a set of diphone units that code the order of phonemes, and a set of position-independent phoneme units. Although there is recent evidence for the existence of position-independent phoneme units coming from studies showing transposed-phoneme effects at least when consonants are transposed (e.g.…”
Section: Discussion (supporting, confidence: 53%)
“…Decades after the discovery of the lack of invariance problem —the absence of invariant cues to speech sounds (e.g., Joos, 1948; Liberman et al, 1952; Peterson & Barney, 1952)—speech science offers limited explanations of how humans achieve phonetic constancy despite the many‐to‐many mapping between acoustics and percepts. Computational models of HSR have provided little insight, since most current models sidestep the vagaries of the signal and use idealized, abstract elements such as phonetic features (McClelland & Elman, 1986), phonemes (Hannagan, Magnuson, & Grainger, 2013; You & Magnuson, 2018), or human phoneme confusion probabilities (Norris & McQueen, 2008) rather than real speech as input. Such assumptions can ultimately complicate rather than simplify problems (Magnuson, 2008), as the details they bypass may contain constraints essential to the mechanisms underlying human performance.…”
Section: Discussion (mentioning, confidence: 99%)