One way to determine training needs and to evaluate learning is to measure how trainees organize knowledge using a card sorting task. While card sorting is a valid tool for assessing knowledge organization, it can be work-intensive and error-prone when administered manually. For this reason, we developed a software tool that computerizes the card sort task. We present a study conducted to determine whether the computerized version of card sorting is comparable to the manual sort. One hundred eight participants completed two card sorts: two manual sorts, one manual and one computerized sort, or two computerized sorts. No differences were found between the administration methods with respect to card sort accuracy, test-retest scores, or number of piles created. Differences between the two methods were found in administration time and in the length of the pile labels; these differences disappeared after one computerized administration. We conclude that a computerized card sorting task is just as effective at eliciting knowledge as a manual card sort.
Despite an overabundance of research articles, reviews, and meta-analyses, there still appears to be disagreement regarding which feedback techniques produce optimal learning conditions. The purpose of this research was to investigate two specific types of feedback, process and outcome, as well as the sequence in which these types of feedback should be presented as trainees learn to perform a simulated radar task. It was hypothesized that individuals receiving process feedback followed by outcome feedback would perform better on the simulated radar task than those receiving feedback in any other sequence. The results of this study indicate that individuals who received feedback, regardless of type and sequence, performed better at the end of training than those who did not. No support was found for recommending a process-outcome feedback sequence.
As distance learning becomes more popular, many guidelines on designing web-based courses are being generated. The primary purpose of this study was to examine guidelines concerning the use of audio and text in web-based material. Materials were developed that tested different combinations of text (full versus bullets) and audio. Participants who took these different lessons on interoperability, a modeling and simulation concept, were tested with either a recognition or a recall test. Three main findings are reported. First, audio only, or audio in combination with bullets, resulted in higher test scores than text only, bullets only, or audio with full text. Second, participants who were given a recognition test scored significantly higher than those who took a recall test. Finally, the results suggest that the information learned does not decay within a few hours of training.