This paper continues our investigation into the parallel computation of large-vocabulary speech recognition on a tree-structured parallel processor. Having previously established results on parallel level-building with continuously variable Hidden Markov Models, we extend this work to the case where the spoken sentences are constrained by a finite-state grammar. The two key ideas are: 1) a pipelined sorting function on the processor array that efficiently transmits a sorted list of the best scores from all processors to the host, and 2) a level-pruning technique in which paths through the dynamic programming network are pruned only at the ends of words. These ideas are evaluated on the BT-100, a binary-tree parallel processor developed as part of AT&T's ASPEN project with DARPA. A performance model is presented that estimates execution time as a function of algorithm parameters. Real-time speech recognition has been achieved for a data-entry task with a 70-word vocabulary and an average branching factor of 23, and for an airline reservation task with a vocabulary of 132 words. The performance model predicts real-time execution of the 991-word DARPA Resource Management Task on a 127-processor machine.
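To illustrate the first key idea, the sorted-score reduction over a binary tree of processors can be sketched as follows. This is a minimal sequential simulation, not the BT-100's actual pipelined implementation: the tree layout, field names, and the `merged_best_scores` helper are hypothetical, and a real machine would overlap the merge steps across tree levels rather than recurse.

```python
import heapq

def merged_best_scores(tree, node, k):
    """Return the k best (lowest) scores in the subtree rooted at `node`.

    Each processor sorts its local scores; parents merge the already-sorted
    lists arriving from their two children with their own, so the host
    (the root) receives a globally sorted best-score list.
    """
    local = sorted(tree[node]["scores"])
    child_lists = [merged_best_scores(tree, c, k)
                   for c in tree[node]["children"]]
    # heapq.merge combines sorted inputs in sorted order; truncating to k
    # models transmitting only the k best scores up each tree link.
    return list(heapq.merge(local, *child_lists))[:k]

# Hypothetical 3-processor binary tree: node 0 is the root.
tree = {
    0: {"scores": [5.0, 1.2], "children": [1, 2]},
    1: {"scores": [0.7, 3.3], "children": []},
    2: {"scores": [2.1],      "children": []},
}
print(merged_best_scores(tree, 0, 3))  # → [0.7, 1.2, 2.1]
```

Because each link carries at most k scores regardless of subtree size, the per-frame communication cost grows with tree depth rather than with the number of processors, which is the property the pipelined sort exploits.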