As colleges and universities continue their commitment to increasing access to higher education by offering education online and at scale, attention to teaching open-ended subjects online and at scale, namely the arts, humanities, and the social sciences, remains limited. While existing work on scaling open-ended courses primarily focuses on the evaluation of and feedback on open-ended assignments, there is a lack of understanding of how to effectively teach open-ended, university-level courses at scale. To better understand the needs of teaching large-scale, open-ended courses effectively online in a university setting, we conducted a mixed-methods study with university instructors and students, using surveys and interviews, and identified five critical pedagogical elements that distinguish the teaching and learning experiences in an open-ended course from those in a non-open-ended course. An overarching theme across the five elements was the need to support students' self-expression. We further uncovered open challenges and opportunities in incorporating the five critical pedagogical elements into large-scale, open-ended online courses in a university setting, and suggest six future research directions: (1) facilitate in-depth conversations, (2) create a studio-friendly environment, (3) adapt to open-ended assessment, (4) scale individual open-ended feedback, (5) establish trust for self-expression, and (6) personalize instruction and harness the benefits of student diversity.
Surveys are a common instrument for gauging self-reported opinions from the crowd for scholars in the CSCW community, the social sciences, and many other research areas. Researchers often use surveys to prioritize a subset of given options when there are resource constraints. Over the past century, researchers have developed a wide range of surveying techniques to elicit individual preferences, including one of the most popular instruments, the Likert ordinal scale. However, the challenge of eliciting accurate and rich self-reported responses with surveys in a resource-constrained context persists today. In this study, we examine Quadratic Voting (QV), a voting mechanism that leverages the affordances of modern computers and straddles rating and ranking approaches, as an alternative online survey technique. We argue that QV can elicit more accurate self-reported responses than the Likert scale when the goal is to understand relative preferences under resource constraints. We conducted two randomized controlled experiments on Amazon Mechanical Turk, one in the context of public opinion polling and the other in a human-computer interaction user study. Based on our Bayesian analysis, a QV survey with a sufficient number of voice credits aligned significantly more closely with participants' incentive-compatible behaviors than a Likert scale survey, with a medium to high effect size. In addition, we extended QV's application scenario from typical public policy and education research to a problem setting familiar to the CSCW community: a prototypical HCI user study. Our experimental results, QV survey design, and QV interface serve as a stepping stone for CSCW researchers to further explore this surveying methodology in their studies and encourage decision-makers in other communities to consider QV as a promising alternative.
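The core of Quadratic Voting is its cost rule: casting v votes on an option consumes v² voice credits from a fixed budget, so expressing twice as strong a preference costs four times as much. A minimal sketch of that rule, assuming an illustrative 100-credit budget and hypothetical option names (the specific parameters of the study are not given here):

```python
# Minimal sketch of the Quadratic Voting (QV) cost rule.
# Under QV, casting v votes on an option costs v**2 voice credits,
# so strong preferences are increasingly expensive to express.
# The option names and the 100-credit budget are illustrative assumptions.

def qv_cost(votes: int) -> int:
    """Credits consumed by casting `votes` votes on one option."""
    return votes ** 2

def total_cost(allocation: dict) -> int:
    """Total credits consumed by a respondent's full vote allocation."""
    return sum(qv_cost(v) for v in allocation.values())

def is_valid(allocation: dict, budget: int) -> bool:
    """Check that an allocation stays within the voice-credit budget."""
    return total_cost(allocation) <= budget

# Example: a respondent with 100 credits spreads votes across options.
allocation = {"option_a": 6, "option_b": 5, "option_c": 3}
print(total_cost(allocation))            # 36 + 25 + 9 = 70
print(is_valid(allocation, budget=100))  # True
```

The quadratic cost is what forces respondents to reveal relative preference strength: with a linear cost they could pile all votes on one option at no marginal penalty, whereas under QV spreading votes is usually cheaper than concentrating them.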
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and describe whether it provides supporting or contrasting evidence. scite is used by students and researchers around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.