A recent theoretical treatment by Christiansen and Chater attempts to address fundamental challenges in language processing and evolution in terms of a single major operational constraint, the “Now-or-Never” bottleneck. The authors' “Chunk-and-Pass” processing putatively mitigates this severe, multilevel bottleneck through fast linguistic coding and compression, hierarchical language representation and pattern duality, and incrementally learned item-based predictions that support grammaticalization over wide spatiotemporal scales. Although the Chunk-and-Pass model offers a promising account of language processing, structure, and development, it manages the Now-or-Never constraint by apparently relying on optimal joint source-channel coding, a set of computational properties for natural and artificial speech grounded in Shannon's noisy-channel theorems. Restating the Now-or-Never bottleneck in terms of information-theoretic source-channel capacity limits highlights tradeoffs inherent in the authors' model between the rate and security of multilevel lossy code transmission. These properties render evolvable associative networks capable of Chunk-and-Pass speech acquisition, recognition, generation, and adaptation, suggesting that Chunk-and-Pass processing is a special case of joint source-channel coding.
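As a point of reference only, and not part of Christiansen and Chater's own formalism, the capacity limitation invoked here can be sketched with the standard lossy joint source-channel coding condition from information theory. In this illustrative notation, $X$ is the source signal (e.g., an incoming utterance), $\hat{X}$ its chunked reconstruction, $d(\cdot,\cdot)$ a distortion measure, $D$ a tolerable distortion level, $Y$ the channel output, and $C$ the capacity of the noisy channel:

\[
R(D) \;=\; \min_{p(\hat{x}\mid x):\,\mathbb{E}[d(X,\hat{X})]\le D} I(X;\hat{X}),
\qquad
C \;=\; \max_{p(x)} I(X;Y),
\qquad
R(D) \;\le\; C .
\]

On this reading, transmission within distortion $D$ (at roughly one source symbol per channel use) is achievable only when the effective source rate $R(D)$ does not exceed the channel capacity $C$; the Now-or-Never bottleneck plays the role of that capacity limit, and Chunk-and-Pass compression serves to keep the source rate beneath it.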