This paper outlines specific methodologies for conducting research via computer networks. We discuss advantages of Internet experimentation over previous modes of telecommunication-facilitated research and characterize features of studies that can benefit from Internet access and those that are unlikely to. We point out pitfalls and suggest a range of potential solutions in terms of specific practical techniques for managing the design, dissemination, and collection of Internet materials. We also discuss techniques for minimizing attrition and for adapting to recalcitrance presented by "hacker" vandalism. The Internet and its sisters are qualitatively different from the other forms of electronic connectivity (telegraph, wireless radio, television, telephone) realized in this century and the last. The global computer-mediated connectivity supplied by the Internet is about as accessible as radio and television, but it is interactive in a way that radio, television, and even telephones are not. The "Infobahn" links academic, government, military, commercial, and private interests primarily via networked computer systems. Academic links mobilize potential subject pools into categories roughly sorted by age and academic level (grammar and secondary schools have connections in addition to undergraduate and graduate institutes). Moreover, commercial Internet access providers are proliferating. A January 1996 survey (Lottor, 1996) indicates 9.472 million hosts, with 1.79 million in U.S. academic domains and 2.43 million in commercial domains. Private access servers like CompuServe are in the commercial domain and offer access internationally.
To address concerns raised regarding the use of online course-based summative assessment methods, a quasi-experimental design was implemented in which students who completed a summative assessment either online or offline were compared on performance scores when using their self-reported preferred or non-preferred modes. Performance scores were found not to differ depending on whether the assessment was completed in the preferred or non-preferred mode. These findings provide preliminary support for the validity of online assessment methods. Future studies could help determine the extent to which this finding generalizes beyond the assessment procedures and type of sample used here. Suggestions for follow-up studies are offered, including exploring the validity of more complex computer-related online assessment tasks and investigating the impact of using preferred and non-preferred modes upon the quality of the student experience.
Recent interest in and use of online assessments across a range of disciplines has raised a number of issues relating to student perceptions and performance when using this medium. The validity of such online tests is a crucial consideration, especially if they are to be used for summative as well as formative assessment. The current study examined higher education students' performance on online assessments which they were required to take as part of an undergraduate psychology course. The marks obtained by students required to take the same multiple-choice question (MCQ) assessment online and offline (in pen-and-paper format) were compared, and relationships between performance and computer anxiety and computer engagement measures were explored. The results indicate minimal influence of assessment modality, computer anxiety, and computer engagement on MCQ test scores, with only very small and non-significant effect sizes being observed overall. No evidence of any significant relationship between gender or age and computer attitudes was observed. To conclude, the results provide promising initial support for the use of online summative assessments in contexts similar to the one used here, allaying some concerns about disadvantaging certain groups of students. However, further research is needed to explore possible performance differences across different contexts, assessment types, and student cohorts.
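The modality comparison described above turns on the size of the standardized difference between online and offline score distributions. As a minimal sketch of that kind of analysis, the snippet below computes Cohen's d (pooled standard deviation) for two groups. The score values are invented for illustration only and are not data from the study; the function itself is a standard formulation, not the authors' specific analysis code.

```python
import math

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled SD."""
    na, nb = len(a), len(b)
    mean_a = sum(a) / na
    mean_b = sum(b) / nb
    # Sample variances (n - 1 in the denominator)
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical MCQ percentage scores for two groups (illustrative only)
online = [72, 68, 75, 70, 74, 69, 71, 73]
offline = [70, 71, 74, 69, 72, 70, 73, 68]

print(round(cohens_d(online, offline), 3))  # → 0.278
```

With these made-up numbers d is about 0.28, i.e. below the conventional 0.5 "medium" threshold, which is the sense in which the abstract describes the observed modality effects as "very small."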