Verbal responses are a convenient and naturalistic way for participants to provide data in psychological experiments (Salzinger, 1959). However, unlike other behavioral response modalities (e.g. typed responses, button presses, etc.), audio recordings of verbal responses typically require additional processing, such as transcription into text. This transcription process is often tedious and time-intensive, requiring human listeners to manually examine each moment of recorded speech. Here we evaluate the performance of a state-of-the-art speech recognition algorithm (Halpern et al., 2016) in transcribing audio data into text during a list-learning experiment. We compare the computer-generated transcripts with transcripts produced by human annotators. The two sets of transcripts agreed to a high degree and exhibited similar statistical properties with respect to the recall performance and recall dynamics they captured. This proof-of-concept study suggests that speech-to-text engines can provide a cheap, reliable, and rapid means of automatically transcribing speech data in psychological experiments. Further, our findings open the door to verbal-response experiments that scale to thousands of participants (e.g. administered online), as well as to a new generation of experiments that decode speech on the fly and adapt experimental parameters based on participants' prior responses.
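One way to quantify how well a computer-generated transcript matches a human-annotated one is word-level sequence agreement. The sketch below is purely illustrative (it is not the analysis pipeline used in the study, and the example transcripts are invented); it uses Python's standard-library `difflib.SequenceMatcher` to score the overlap between the two word sequences.

```python
from difflib import SequenceMatcher


def transcript_agreement(human: str, machine: str) -> float:
    """Word-level agreement between two transcripts of the same recording.

    Both transcripts are lower-cased and split on whitespace, then compared
    with SequenceMatcher; the ratio is the fraction of matching words
    (1.0 = identical word sequences).
    """
    human_words = human.lower().split()
    machine_words = machine.lower().split()
    return SequenceMatcher(None, human_words, machine_words).ratio()


# Hypothetical recall transcripts for one studied list (invented for illustration):
# the speech recognizer misses one repetition of "apple".
human = "apple table river apple dog"
machine = "apple table river dog"
print(round(transcript_agreement(human, machine), 2))  # → 0.89
```

In practice one would also need to handle recognizer artifacts such as filler words, homophones, and word-boundary errors before scoring agreement.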