Team formation tools assume that instructors should configure the criteria for creating teams, precluding students from participating in a process that affects their learning experience. We propose LIFT, a novel learner-centered workflow in which students propose, vote for, and weigh team formation criteria, and the collective results serve as inputs to the team formation algorithm. We conducted an experiment (N=289) comparing LIFT to the usual instructor-led process, and interviewed participants to evaluate their perceptions of LIFT and its outcomes. First, we found that learners proposed novel criteria not included in existing algorithmic tools, such as organizational style. Learners generally avoided criteria frequently selected by instructors, including gender and GPA, and instead preferred criteria that promote efficient collaboration. Second, LIFT led to team outcomes comparable to those of the instructor-led approach, despite the differences in configuration, and learners valued having control of the team formation process. We provide instructors and tool designers with a workflow and evidence supporting giving learners control of the algorithmic process used to group them into teams.
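To make the workflow concrete, here is a minimal Python sketch of a LIFT-style pipeline. The abstract does not specify the underlying algorithm, so the ballot format, the averaging of weights, and the greedy diversity-seeking grouping pass below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical LIFT-style pipeline: student ballots are averaged into
# criterion weights, which then parameterize a simple greedy grouping pass.
from collections import defaultdict
from itertools import combinations

def aggregate_ballots(ballots):
    """Average criterion weights across ballots; normalize to sum to 1."""
    totals = defaultdict(list)
    for ballot in ballots:
        for criterion, weight in ballot.items():
            totals[criterion].append(weight)
    means = {c: sum(ws) / len(ws) for c, ws in totals.items()}
    norm = sum(means.values()) or 1.0
    return {c: w / norm for c, w in means.items()}

def greedy_teams(students, weights, team_size):
    """Greedily add the student who most diversifies each team.

    students: {name: {criterion: value}} with values in [0, 1].
    A candidate's fit is the weighted distance from the team's mean
    on each criterion (one plausible scoring rule among many).
    """
    def fit(team, candidate):
        return sum(
            w * abs(students[candidate][c] -
                    sum(students[m][c] for m in team) / len(team))
            for c, w in weights.items())

    pool, teams = list(students), []
    while pool:
        team = [pool.pop(0)]
        while len(team) < team_size and pool:
            best = max(pool, key=lambda s: fit(team, s))
            pool.remove(best)
            team.append(best)
        teams.append(team)
    return teams

# Toy usage with two learner-proposed criteria:
ballots = [{"organizational style": 0.7, "availability": 0.3},
           {"organizational style": 0.4, "availability": 0.6}]
weights = aggregate_ballots(ballots)
students = {"ana": {"organizational style": 0.9, "availability": 0.2},
            "ben": {"organizational style": 0.1, "availability": 0.8},
            "cho": {"organizational style": 0.5, "availability": 0.5},
            "dee": {"organizational style": 0.3, "availability": 0.9}}
print(greedy_teams(students, weights, team_size=2))
```

The key design point the sketch captures is that learners, not the instructor, supply both the criteria and their relative weights; any formation algorithm that accepts weighted criteria could consume the aggregated output.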
How can instructors group students into teams that interact and learn effectively together? One strand of research advocates grouping students into teams with "good" compositions, such as skill diversity. Another strand argues for deploying team-building activities to foster interpersonal relations such as psychological safety. Our work synthesizes these two strands. We describe an experiment (N=249) comparing how team composition and team-building activities affect student team outcomes. In two university courses, we composed student teams either randomly or using a criteria-based team formation tool; teams then performed team-building activities that promoted either team or task outcomes. We collected project scores and used surveys to measure psychological safety, perceived performance, and team satisfaction. Surprisingly, the criteria-based teams did not statistically differ from the random teams on any of the measures, despite having compositions that better satisfied the instructor-defined criteria. Our findings suggest that, for instructors deploying a team formation tool, creating an expectation among team members that their team can perform well is as important as tuning the criteria in the tool. We also found that student teams reported high levels of psychological safety, but these levels appeared to develop organically and were not affected by the activities or compositional strategies we tested. We distill these and other findings into implications for the design and deployment of team formation tools in learning environments.
In recent years, the use of mobile phones and tablets for personal communication has increased dramatically, with over 1 billion smartphones among a total of 5 billion mobile phones worldwide. The infrastructure and technology underlying these devices have improved to the point where it is now possible to integrate sensor technology directly and use the devices to acquire new data. Given the available resources and the number of technical challenges that have already been overcome, using mobile communication technology for field-based environmental monitoring seems a natural progression. In this work, we review existing technology for acquiring, processing, and reporting environmental data in the field. Our objective is to determine whether off-the-shelf technology can be used for environmental monitoring. We describe several levels at which this challenge is being approached and discuss examples of the technology that has been produced.
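The acquire/process/report pipeline the review surveys can be sketched in a few lines of Python. The sensor readings are simulated and the reporting endpoint is hypothetical; no specific device API or service from the paper is assumed.

```python
# Illustrative field-monitoring loop: acquire samples, reduce them to a
# summary on-device, and report the summary over HTTP.
import json
import random
import statistics
import urllib.request

def acquire(n=10):
    """Stand-in for an on-device sensor read (e.g., temperature in deg C)."""
    return [20.0 + random.gauss(0, 0.5) for _ in range(n)]

def process(samples):
    """Reduce raw samples to a compact summary before transmission."""
    return {"mean": statistics.mean(samples),
            "stdev": statistics.stdev(samples),
            "n": len(samples)}

def report(summary, url="https://example.org/api/readings"):  # hypothetical URL
    """Upload the summary as JSON to a collection server."""
    req = urllib.request.Request(
        url, data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

summary = process(acquire())
print(summary)  # report(summary) would transmit it from the field
```

Processing on the device before reporting reflects a common constraint in this setting: bandwidth and battery are scarcer in the field than compute.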
Maintenance work orders (MWOs) are an integral part of the maintenance workflow. These documents allow technicians to capture vital aspects of a maintenance job: observed symptoms, potential causes, solutions implemented, etc. These MWOs have often been disregarded during analysis because of the unstructured nature of the text they contain. However, many research efforts have recently emerged that clean these MWOs for analysis. One such effort uses a tagging method with an open-source toolkit, named Nestor, which relies on experts classifying and annotating the words used in the MWOs. For example, an expert might classify the words "replace," "replaced," and "repalce" as "Solutions" and give the alias "replace" to all of them. This method greatly reduces the volume of words used in the MWOs and links words, including misspellings, that have the same or similar meanings. However, one issue with the current iteration of this tool, and with practical usage of data-annotation tools on the shop floor more generally, is the use of only one expert annotator at a time. How do we know that the classifications of a single annotator are correct, or whether it is feasible, for example, to divide the tagging task among multiple experts? This paper examines the agreement behavior of multiple isolated experts classifying and annotating MWO data, and provides implications for implementing this tagging technique in authentic contexts. The results described here will help improve MWO classification, leading to more accurate analysis of MWOs for decision-making support.
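The two ideas in play can be illustrated with a short Python sketch: an alias map that folds variants and misspellings into one canonical tag, as Nestor's expert annotators do, and pairwise Cohen's kappa as one standard way to quantify agreement between two isolated annotators. The alias table, labels, and choice of kappa below are illustrative assumptions; the paper's actual agreement measure may differ.

```python
# Canonicalize MWO tokens via an alias map, then measure how much two
# annotators agree on the classifications of the same tokens.
from collections import Counter

ALIASES = {"replace": "replace", "replaced": "replace", "repalce": "replace"}

def canonicalize(word, aliases=ALIASES):
    """Map a raw MWO token to its canonical alias (identity if unmapped)."""
    return aliases.get(word.lower(), word.lower())

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two experts classify the same five canonicalized tokens (made-up data):
expert_1 = ["Solution", "Solution", "Problem", "Item", "Solution"]
expert_2 = ["Solution", "Problem", "Problem", "Item", "Solution"]
print(cohens_kappa(expert_1, expert_2))  # ~0.69; 1.0 = perfect, 0 = chance
```

With more than two annotators, a multi-rater statistic such as Fleiss' kappa would be the natural extension of the same chance-corrected idea.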