The letter by Bodenstein (1) takes issue with several semantic distinctions in our recent study (2) and with the value of measuring expert agreement and credentials. Bodenstein's (1) comments suggest a disagreement with the very practice of analyzing expert perspectives and credentials. He raises many speculative points without offering data, which we have addressed elsewhere (3). We stand by the analysis presented in our study.

Bodenstein (1) suggests that our study takes an ad hominem approach of "truth by majority rule" that bypasses the merits of evaluating the scientific data. This misunderstands our study's framing and stands in direct contrast to two prominent conclusions of the paper. We concluded that our results "suggest a strong role for considering expert credibility in the relative weight of and attention to these groups of researchers in future discussions in media, policy, and public forums" (2). Importantly, our paper did not claim to prove any scientific truth by counting scientists. On the contrary, we stated that the distribution of experts and their credentials has been a hitherto underconsidered element in the broader climate change discourse, one that can lead to media bias (4). Bodenstein (1) mistakenly claims that we implied that minority viewpoints should be ignored and that our study tarred individuals with group metrics. Both comments disregard the statement above: we did not suggest ignoring minority viewpoints but rather suggested that the relative weight and credentials behind a viewpoint be presented alongside it as contextual information. Furthermore, we stated explicitly: "Ultimately, of course, scientific confidence is earned by the winnowing process of peer review and replication of studies over time. In the meanwhile, given the immediacy attendant to the state of debate over perception of climate science, we must seek estimates while confidence builds" (2). This risk-management framework of synthesizing expert perception and agreement clearly did not preclude direct evaluation of the scientific data; rather, it complemented and in fact relied on such evaluation (3).

Bodenstein (1) suggests that we elided or obscured the distinction between our metrics and the construct they represent and speculates, without evidence, on the deficiencies of publication records as metrics. As Bodenstein acknowledges (1), however, we clearly presented our framework, precisely defined our metrics, and justified their use. Rather than eliding this distinction, we highlighted it. Bodenstein (1) then reiterates the self-evident argument that prevailing paradigms can be proven wrong and the unsubstantiated speculation that groupthink rather than data could drive citation and publication patterns, both of which we have addressed elsewhere (3, 5, 6).

Though such comments are absent here (1), scientific discourse is aided by substantive engagement with studies such as ours. We have engaged with such comments in a variety of venues (5, 6) and believe that this engagement has further strengthened our thinking and ...