Voice artificial intelligence (AI) technology is becoming increasingly common in everyday life, for example, in automated phone services, voice assistants (e.g., Siri), and social chatbots. However, most research has focused on how younger adults perceive modern AI speech, leaving the development of this technology age-uninformed. Recent work indicates that older adults are less able than younger adults to identify modern AI speech, but the underlying causes are unclear. The current study with younger (N=133; 22-39 years) and older adults (N=146; 54-79 years) investigated factors that could explain this age-related reduction in AI speech identification. In Experiment 1, we tested whether high-frequency information in speech (to which older adults have reduced access due to hearing loss) contributes to the age-group difference. Older adults were less able to identify AI speech both for full-bandwidth speech and for speech from which information above 4 kHz had been removed, making a contribution of hearing loss unlikely. In Experiment 2, we investigated whether the known age-related decline in the ability to process prosodic information in speech predicts the reduction in AI speech identification. Indeed, after accounting for hearing function and self-rated experience with voice AI systems, individuals who were better at identifying emotions from prosodic speech information were also better at identifying AI speech. The current results suggest that the ability to identify AI speech is related to the accurate processing of prosodic information.
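The bandwidth manipulation in Experiment 1 amounts to low-pass filtering speech at 4 kHz. As an illustration only (the paper does not specify the filter used), the sketch below applies a windowed-sinc FIR low-pass filter in plain Python; the sampling rate, tap count, and test tones are assumptions for demonstration, not the study's actual parameters.

```python
import math

def lowpass_fir(signal, fs, cutoff_hz, num_taps=101):
    """Remove content above cutoff_hz using a Hamming-windowed sinc FIR filter.

    Illustrative sketch: not the filter reported in the study.
    """
    fc = cutoff_hz / fs            # normalized cutoff (cycles per sample)
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        x = n - mid
        # ideal low-pass impulse response (sinc), handled at the center tap
        h = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        # Hamming window reduces stopband ripple
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))
        taps.append(h * w)
    gain = sum(taps)
    taps = [t / gain for t in taps]  # normalize DC gain to 1
    # direct convolution with zero padding (same output length as input)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, t in enumerate(taps):
            j = i - k
            if 0 <= j < len(signal):
                acc += t * signal[j]
        out.append(acc)
    return out
```

For example, with a 16 kHz sampling rate and a 4 kHz cutoff, a 1 kHz tone passes through nearly unchanged while a 6 kHz tone is strongly attenuated, mimicking the removal of high-frequency speech information.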