This study investigates the differences in children's comprehension and enjoyment of storybooks according to the medium of presentation. Two different storybooks were used and 132 children participated. Of these, 51 children read an extract from The Magicians of Caprona, about half reading an electronic version with an online dictionary and the rest reading a printed version with a separate printed dictionary. The remaining 81 children read an extract from The Little Prince: 26 read an electronic version, 26 read the same version with narration added and 29 read a printed version. No dictionary was supplied with this storybook. The type of medium did not significantly affect the children's enjoyment of either storybook, and while it took them longer to read the electronic versions, this difference was only significant for The Little Prince. For both storybooks, comprehension scores were higher for retrieval-type questions than for inference questions. Use of the online dictionary in the electronic condition of The Magicians of Caprona was significantly greater than use of the printed dictionary in the printed condition. The provision of narration in the electronic version of The Little Prince led to significantly higher comprehension scores than when narration was absent.

Introduction

Techniques to aid and improve children's reading skills and to motivate them towards further reading are always of interest to educationalists and to those involved in educational research. It is therefore unsurprising that the increased availability of children's storybooks in electronic format should be an area of research interest.
The literature on the evaluation of Internet search engines is reviewed. Although there have been many studies, there has been little consistency in the way they have been carried out. This problem is exacerbated by the fact that recall is virtually impossible to calculate in the fast-changing Internet environment, and therefore the traditional Cranfield type of evaluation is not usually possible. A variety of alternative evaluation methods has been suggested to overcome this difficulty. The authors recommend that a standardised set of tools be developed for the evaluation of web search engines so that, in future, comparisons between search engines can be made more effectively, and variations in the performance of any given search engine can be tracked over time. The paper itself does not provide such a standard set of tools, but it investigates the issues and makes preliminary recommendations about the types of tools needed.
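For context, the difficulty with recall noted above follows directly from its standard definition, which depends on knowing every relevant document in the collection — a set that cannot be enumerated for the open, constantly changing web. A minimal statement of the two classic Cranfield-style measures, in conventional set notation rather than anything specific to this paper, is:

\[
\text{Precision} = \frac{|R \cap S|}{|S|}, \qquad \text{Recall} = \frac{|R \cap S|}{|R|}
\]

where \(S\) is the set of documents a search engine retrieves and \(R\) is the set of all relevant documents. Precision can be judged from the retrieved results alone, but recall requires the size of \(R\), which is unknowable for the web as a whole.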
This paper presents a user-centred design and evaluation methodology for ensuring the usability of IR interfaces. The methodology is based on sequentially performing a competitive analysis, a user task analysis, a heuristic evaluation, a formative evaluation and a summative comparative evaluation. These techniques are described, together with their application to the iterative design and evaluation of a prototype IR interface. After each round of testing, the prototype was modified as needed. The user-centred methodology had a major impact in improving the interface. Results from the summative comparative evaluation suggest that users' performance improved significantly with our prototype interface compared with a similar competitive system, and that users were also more satisfied with the prototype design. This methodology provides a starting point for techniques that let IR researchers and practitioners design better IR interfaces that are easy to learn, use and remember. The paper concludes with some principles of interface design for IR systems.
This paper reports on an empirical study of users' performance and satisfaction with the Web of Science interface. Two search groups (novice and experienced) participated in the study. They carried out seven search tasks, and their performance was recorded through transaction logging and computer screen recording. Data were captured on the time taken, the search terms used, success scores and error rates. After completing the search tasks, participants filled in a questionnaire on their satisfaction with the interface. The performance data showed that, overall, experienced users performed better than the novice group; differences between the groups were significant for success scores and error rates. Performance differences also existed on factors such as gender and previous online search training. Experienced female searchers performed best in terms of success scores and error rates, whereas the novice male group performed worst. Untrained users were more successful and made fewer errors than the trained group. Participants held neither highly positive nor highly negative perceptions of the Web of Science interface. Novice searchers were significantly more satisfied with the interface than the experienced group. Participants also noted both positive and negative features of the interface. This information could be used to redesign the present Web of Science interface.