Online chat functions as a discussion channel for diverse social issues. However, deliberative discussion and consensus-reaching can be difficult in online chats in part because of the lack of structure. To explore the feasibility of a conversational agent that enables deliberative discussion, we designed and developed DebateBot, a chatbot that structures discussion and encourages reticent participants to contribute. We conducted a 2 (discussion structure: unstructured vs. structured) × 2 (discussant facilitation: unfacilitated vs. facilitated) between-subjects experiment (N = 64, 12 groups). Our findings are as follows: (1) Structured discussion positively affects discussion quality by generating diverse opinions within a group and resulting in a high level of perceived deliberative quality. (2) Facilitation drives a high level of opinion alignment between group consensus and independent individual opinions, resulting in authentic consensus-reaching. Facilitation also drives more even contributions and a higher level of task cohesion and communication fairness. Our results suggest that a chatbot agent could partially substitute for a human moderator in deliberative discussions.
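The 2 × 2 between-subjects design described above can be summarized, for illustration, by computing cell means and main effects over the two factors. A minimal sketch in Python, using entirely invented per-group deliberative-quality scores (none of these numbers come from the study):

```python
from statistics import mean

# Hypothetical quality scores for the four cells of the
# 2 (structure) x 2 (facilitation) between-subjects design.
# Invented for illustration only; not the study's data.
scores = {
    ("unstructured", "unfacilitated"): [3.1, 3.4, 2.9],
    ("unstructured", "facilitated"):   [3.6, 3.8, 3.5],
    ("structured",   "unfacilitated"): [4.0, 3.9, 4.2],
    ("structured",   "facilitated"):   [4.5, 4.6, 4.4],
}

def factor_mean(level_index: int, level: str) -> float:
    """Marginal mean over all cells at one level of a factor."""
    vals = [s for cell, v in scores.items() if cell[level_index] == level for s in v]
    return mean(vals)

# Main-effect estimates: differences of marginal means.
structure_effect = factor_mean(0, "structured") - factor_mean(0, "unstructured")
facilitation_effect = factor_mean(1, "facilitated") - factor_mean(1, "unfacilitated")

print(f"structure main effect:    {structure_effect:+.2f}")
print(f"facilitation main effect: {facilitation_effect:+.2f}")
```

A full analysis would also test the structure × facilitation interaction (e.g., with a two-way ANOVA); this sketch only shows how the marginal comparisons map onto the design.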
Background Prolonged computer use increases the prevalence of ocular problems, including eye strain, tired eyes, irritation, redness, blurred vision, and double vision, which are collectively referred to as computer vision syndrome (CVS). Approximately 70% of computer users have vision-related problems. For this reason, properly designed interventions for users with CVS are required. To design an effective screen intervention for preventing or improving CVS, we must understand the effective interfaces of computer-based interventions. Objective In this study, we aimed to explore the interface elements of computer-based interventions for CVS and to set design guidelines based on the pros and cons of each interface element. Methods We conducted an iterative user study to achieve our research objective. First, we conducted a workshop to evaluate the overall interface elements that were included in previous systems for CVS (n=7). Through the workshop, participants evaluated existing interface elements, and based on the evaluation results, we eliminated the elements that negatively affected intervention outcomes. Second, we designed our prototype system LiquidEye, which includes multiple interface options (n=11). The interface options comprised the interface elements that were positively evaluated in the workshop study. Lastly, we deployed LiquidEye in the real world to see how the included elements affected the intervention outcomes. Participants used LiquidEye for 14 days, and during this period, we collected participants’ daily logs (n=680). Additionally, we conducted prestudy and poststudy surveys, and poststudy interviews, to explore how each interface element affects participation in the system. Results User data logs collected from the 14 days of deployment were analyzed with multiple regression analysis to explore the interface elements affecting user participation in the intervention (LiquidEye).
Statistically significant elements were the instruction page of the eye resting strategy (P=.01), goal setting of the resting period (P=.009), compliment feedback after completing resting (P<.001), a mid-size popup window (P=.02), and CVS symptom-like effects (P=.004). Conclusions Based on the study results, we suggest design implications to consider when designing computer-based interventions for CVS. The sophisticated design of the customization interface can make it possible for users to use the system more interactively, which can result in higher engagement in managing eye conditions. Important technical challenges still need to be addressed, but because this study was able to clarify the various factors related to computer-based interventions, the findings are expected to contribute greatly to research on computer-based intervention design in the future.
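The abstract reports a multiple regression over the daily logs; as a simplified, hedged sketch of that kind of analysis, the snippet below fits a single-predictor ordinary-least-squares slope relating one hypothetical interface element (whether compliment feedback was shown that day) to daily resting completions. The data are invented, and the study itself regressed on all interface elements jointly:

```python
from statistics import mean

# Hypothetical daily logs: 1 if compliment feedback was shown that day,
# and the number of completed eye-resting sessions. Invented data.
compliment_shown = [0, 0, 1, 1, 0, 1, 1, 1, 0, 1]
rests_completed  = [2, 1, 4, 5, 2, 4, 6, 5, 1, 5]

def ols_slope(x: list[float], y: list[float]) -> float:
    """Ordinary-least-squares slope: cov(x, y) / var(x)."""
    mx, my = mean(x), mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

slope = ols_slope(compliment_shown, rests_completed)
# A positive slope on these made-up numbers mirrors the reported positive
# association between compliment feedback and participation.
print(f"estimated slope: {slope:.2f}")
```

A multiple regression generalizes this by solving the same least-squares criterion over several predictors at once, which is what lets the study attribute participation effects to individual interface elements while controlling for the others.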
In this research, we present HabitStar, an interactive ambient light that helps users improve their habits. HabitStar is a star-shaped light that connects to the HabitStar mobile application; the light is synchronized with the app through a network server. Users can interact with both the light and the app to manage their habits. The system helps users recognize their goals by having them self-record their attempts, and it encourages a minimal approach to habit change by providing norm data as an indicator for setting practical goals. The light’s ambient visualization helps users stay motivated.
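The abstract does not specify HabitStar’s data model, so the following is only an illustrative sketch of how a self-recorded trial log could drive a norm-based indicator and an ambient light state. All names, thresholds, and the color scheme are assumptions, not details from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class HabitTracker:
    """Illustrative model of self-recorded habit trials (assumed, not from the paper)."""
    goal_per_week: int            # the user's own weekly goal
    norm_per_week: float          # hypothetical "norm data" for comparable users
    trials: list[str] = field(default_factory=list)  # days with a recorded trial

    def record_trial(self, day: str) -> None:
        """Self-record one habit attempt."""
        self.trials.append(day)

    def light_state(self) -> str:
        """Map weekly progress to an ambient light state (assumed scheme)."""
        done = len(self.trials)
        if done >= self.goal_per_week:
            return "bright"       # personal goal reached
        if done >= self.norm_per_week:
            return "warm"         # at or above the norm indicator
        return "dim"              # below the norm: gentle nudge to continue

tracker = HabitTracker(goal_per_week=5, norm_per_week=3)
for day in ["mon", "tue", "thu"]:
    tracker.record_trial(day)
print(tracker.light_state())
```

In a deployed system, the app and the physical light would read the same tracker state through the network server, so both surfaces stay consistent.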
BACKGROUND A medical referral is a letter written to explain a patient’s treatment progress so that the subsequent healthcare professional can continue clinical treatment. In practice, clinicians writing a referral often experience cognitive burdens from reviewing electronic health records (EHRs) under a time constraint, leading to an insufficient level of completion. Especially for chronic diseases like gout, errors are likely to occur because these patients have a long medical history. While the literature has highlighted the potential of summarization and visualization of EHRs to assist with such administrative tasks, little is known about the design of such support systems for clinicians. OBJECTIVE This study aims to (1) understand clinicians’ medical referral writing practices and their use of EHRs, (2) design and develop a system prototype (Dr.Aft) to support clinicians with writing medical referrals, and (3) evaluate its usability with medical specialists. METHODS To understand clinicians’ workflow in medical referral writing, we conducted a preliminary study through observations of ambulatory care sessions and contextual inquiries on clinical summarization of patient EHR data. Afterward, we conducted three design sessions with two clinical researchers to discuss clinicians’ needs when interacting with EHRs to write a referral and to iteratively test possible system features. Based on the findings from the preliminary study and design sessions, we created a system prototype (Dr.Aft), which we evaluated with ten medical specialists through think-aloud activities and post-use interviews; the results were analyzed qualitatively by the researchers.
RESULTS Findings from the design sessions highlighted the main system features of Dr.Aft: (1) referral draft generation, (2) an overview of patient medical history via text summaries and visualization, and (3) grouping of patient visits into important clinical events. The evaluation with clinicians showed that Dr.Aft can be a practical tool for writing medical referrals by facilitating their inspection of patient medical history. However, several system issues were also discovered, such as clinicians’ personalized preferences, discrepancies between the presented medication data and the actual workflow, and distrust of information extracted from unstructured data. CONCLUSIONS This study presents a case study of designing an EHR-driven clinician support system to improve healthcare providers’ efficiency in handling administrative work such as writing medical referrals. Our findings offer design implications for similar systems, recommend caution in utilizing unstructured medical data, and call for system flexibility to accommodate individual preferences. Future work is required to broaden the clinical scope to more complex diseases and to a more diverse pool of stakeholders, such as patients or caregivers.
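The abstract mentions grouping patient visits into important clinical events but does not describe the algorithm. A minimal sketch of one plausible approach, assuming that visits close together in time belong to the same clinical event (the 90-day threshold and function names are assumptions for illustration):

```python
from datetime import date, timedelta

def group_visits(visits: list[date], max_gap_days: int = 90) -> list[list[date]]:
    """Group visit dates into events: a new event starts whenever the gap
    since the previous visit exceeds max_gap_days (threshold is assumed)."""
    if not visits:
        return []
    ordered = sorted(visits)
    events = [[ordered[0]]]
    for prev, cur in zip(ordered, ordered[1:]):
        if (cur - prev) > timedelta(days=max_gap_days):
            events.append([cur])   # long gap: start a new clinical event
        else:
            events[-1].append(cur)  # short gap: same ongoing event
    return events

# Two clusters of visits separated by a long gap yield two events.
visits = [date(2020, 1, 5), date(2020, 1, 20), date(2020, 8, 1), date(2020, 8, 15)]
events = group_visits(visits)
print(len(events))
```

A real system would likely combine temporal gaps with diagnosis codes or medication changes to delimit events; this time-gap heuristic only illustrates the grouping idea.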