Introduction

As online surveys continue to capture the attention of institutional researchers, several questions about this new medium of data collection invariably surface, especially when online instruments are compared to traditional paper instruments. First is the issue of response rates. Do online surveys yield higher response rates than paper surveys? By which method can institutional researchers collect the most data? Second is the issue of nonresponse bias, or differences between survey respondents and nonrespondents (demographic, attitudinal, or otherwise). Is the nonresponse bias characteristic of online surveys similar to or different from that of paper surveys? Do online surveys steer data collection toward new (and possibly less skewed) respondent pools, or do they reproduce the respondent bias found in paper surveys? A third issue is response bias: are there differences between online survey responses and paper survey responses, despite identical survey items? Close analysis of response bias is particularly critical when surveys are distributed in both paper and electronic form within a single administration, and it further clarifies the methodological implications of data collection via the Internet.

With these issues in mind, the present study examines response rates, nonresponse bias, and response bias across two groups of community college students: those who received a district-wide follow-up survey of their college experiences via email, and those who received the survey by standard mail. The results of this study not only paint a clearer picture of the differences and similarities between online and paper surveys, but also inform efforts to equate online survey data with paper survey data in a single, mixed-mode administration.
Further, by focusing this study on community college students, we stand to learn more about a group of students who are notoriously difficult to locate and who historically have had lower-than-average survey participation rates.