Procrastination is a widespread and detrimental human behavior. Virtually everyone delays the initiation or completion of important tasks at times, and some people procrastinate to the point that they become overwhelmed by their inaction. In particular, academic procrastination is estimated to afflict 70 to 90% of undergraduate college students. We adopt the design science problem-solving paradigm to pilot a socio-technical artifact that reduces academic procrastination in large college classrooms. Drawing on the principles of nudging, we propose three meta-requirements and nine design principles underlying the design of a chatbot that nudges students toward positive, self-reinforcing behaviors that counter procrastination tendencies. We use a formative natural evaluation event to provide preliminary validation of the design. The pilot yields encouraging results, both in terms of use of the artifact by the intended audience and in terms of performance improvement, and can therefore inform future design iterations.
Purpose
The importance of artificial intelligence in human resource management has grown substantially. Previous literature discusses the advantages of AI implementation in the workplace and its various consequences, often hostile, for employees. However, there is little empirical research on the topic. The authors address this gap by studying whether individuals oppose biased algorithmic recommendations regarding disciplinary actions in an organisation.

Design/methodology/approach
The authors conducted an exploratory experiment in which they evaluated 76 subjects over a set of 5 scenarios in which a biased algorithm gave strict recommendations regarding disciplinary actions at a workplace.

Findings
The authors' results suggest that biased suggestions from intelligent agents can influence individuals who make disciplinary decisions.

Social implications
The authors' results contribute to the ongoing debate on applying AI solutions to HR problems. The authors demonstrate that biased algorithms may substantially change how employees are treated and show that human conformity towards intelligent decision support systems is broader than expected.

Originality/value
The authors' paper is among the first to show that people may accept recommendations that provoke moral dilemmas, bring adverse outcomes, or harm employees. The authors introduce the problem of "algorithmic conformism" and discuss its consequences for HRM.
The use of robotics is becoming widespread in healthcare. However, little is known about how robotics can affect the relationship with patients in epidemic emergency response, or how it impacts clinicians in their organization and work. As a hospital responding to the consequences of the COVID-19 pandemic, "ASST dei Sette Laghi" (A7L) in Varese, Italy, had to react quickly to protect its staff from infection while coping with high budgetary pressure as prices of Personal Protection Equipment (PPE) rose rapidly. In response, it introduced six semi-autonomous robots to mediate interactions between staff and patients. Thanks to the cooperation of multiple departments, A7L implemented the solution in less than 10 weeks, reducing both risks to staff and outlay for PPE. However, the characteristics of the robots affected how healthcare staff perceived them. This case study reviews critical issues faced by A7L in introducing these devices and offers recommendations for the path forward.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.