2012
DOI: 10.1177/0040571x12456666
Dan Hardy Memorial: Great St Mary’s, Cambridge, 2 February 2008

Abstract: We come together to honour, in the sight of God, the life and work of the Revd Professor Daniel Wayne Hardy, Dan to those who knew and loved him. And we do so in the context of worship, where we have heard from the word of God, and will participate in the Eucharist; nothing could be more appropriate to the person we remember. Not only was his whole life theo-centric, a pointing away from himself to God, but his commitment to the life of the mind, indeed his epistemology, was also shaped by doxology: he found a…

Cited by 2 publications (3 citation statements) · References 0 publications
“…The MAT was trained using the zrst [20], a Python wrapper for the HTK toolkit [21] and SRILM [22] that we developed for training unsupervised HMMs with varying model granularity. The LDA model we used in the Mutual Reinforcement was trained by MALLET [23].…”
Section: Methods
confidence: 99%
“…The LDA model we used in the Mutual Reinforcement was trained by MALLET [23]. The MFCCs were extracted using the HTK toolkit [21]. The i-vectors were extracted using Kaldi [24].…”
Section: Methods
confidence: 99%
“…The two corpora used in the Zero Resource Speech Challenge were used here for easier comparison of results: the Buckeye corpus [37] (14,137 utterances) in English and the NCHLT Xitsonga Speech corpus (4,058 utterances) in Tsonga. The MAT was trained using the zrst [38], a Python wrapper for the HTK toolkit [39] and SRILM [40] for training unsupervised HMMs with varying model granularity. The i-vectors were extracted using Kaldi [41].…”
Section: Setup of MAT-DNN
confidence: 99%