“…and Table II. (3) Compared with the PWDWP method, the proposed BPDG shows a marked improvement in context consistency. This is due to the effect of the CMIM criterion, which selects the response from the generated candidate list under the condition of the bilateral personas and the context.…”
Section: B. Results and Analysis (mentioning)
confidence: 99%
“…For this purpose, the dialogue agents have to learn to express personalized information appropriately like humans. Currently, personalized dialogue agents have been widely applied in various human-robot interaction scenarios, such as intelligent personal assistants [2], public service robots [3], wearable devices [4], etc. The agents with personalization are considered reliable and trustworthy, and can gain the user's confidence and trust [5].…”
Generating personalized responses is one of the major challenges in natural human-robot interaction. Current research in this field mainly focuses on generating responses consistent with the robot's pre-assigned persona, while ignoring the user's persona. Such responses may be inappropriate or even offensive, which can lead to a bad user experience. Therefore, we propose a bilateral personalized dialogue generation (BPDG) method with dynamic persona-aware fusion via multi-task transfer learning to generate responses consistent with both personas. The proposed method accomplishes three learning tasks: 1) an encoder is trained on dialogue utterances augmented with the corresponding personalized attributes and relative positions (language model task), 2) a dynamic persona-aware fusion module predicts persona presence to adaptively fuse the contextual and bilateral persona encodings (persona prediction task), and 3) a decoder generates natural, fluent and personalized responses (dialogue generation task). To make the generated responses more personalized and consistent with both personas, the Conditional Mutual Information Maximum (CMIM) criterion is adopted to select the final response from the generated candidates. Experimental results show that the proposed method outperforms several state-of-the-art methods in both automatic and manual evaluations.
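The CMIM selection step described above can be sketched as a pointwise estimate: each candidate response is scored by the gap between its log-probability conditioned on both personas and the context, and its log-probability conditioned on the context alone; the candidate with the largest gap is the one most informed by the personas. The scorer functions below are hypothetical stand-ins for model log-probabilities, not the authors' implementation.

```python
import math

def select_by_cmim(candidates, score_full, score_context_only):
    """Pick the candidate maximizing a pointwise conditional mutual
    information estimate:
        log p(r | personas, context) - log p(r | context)
    Both scorers are hypothetical stand-ins for model log-probs."""
    best, best_gain = None, -math.inf
    for r in candidates:
        gain = score_full(r) - score_context_only(r)
        if gain > best_gain:
            best, best_gain = r, gain
    return best

# Toy log-probabilities illustrating the selection behaviour:
# "b" is less likely overall but gains the most from the personas.
full = {"a": -1.0, "b": -2.0, "c": -1.5}   # log p(r | personas, context)
ctx  = {"a": -1.2, "b": -4.0, "c": -1.6}   # log p(r | context)
chosen = select_by_cmim(["a", "b", "c"], full.get, ctx.get)  # → "b"
```

Note that plain likelihood ranking would pick "a"; the CMIM-style gap instead favors the response whose probability rises most when the bilateral personas are conditioned on.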
“…As the data sources for outdoor environmental data were also reliable, their use will be continued. The Android app questionnaire will be replaced with a chatbot to improve ease of use [47,48]. In future work, we are collaborating with the School of Medicine at the University of South Carolina, which has access to a larger cohort of patients under the care of a larger number of clinicians and specialists.…”
Background
Asthma is a chronic pulmonary disease with multiple triggers. It can be managed by strict adherence to an asthma care plan and by avoiding these triggers. Clinicians cannot continuously monitor their patients’ environment and their adherence to an asthma care plan, which poses a significant challenge for asthma management.
Objective
In this study, pediatric patients were continuously monitored using low-cost sensors to collect asthma-relevant information. The objective of this study was to assess whether the kHealth kit, which contains low-cost sensors, can identify personalized triggers and provide actionable insights to clinicians for the development of a tailored asthma care plan.
Methods
The kHealth asthma kit was developed to continuously track the symptoms of asthma in pediatric patients and monitor the patients’ environment and adherence to their care plan for either 1 or 3 months. The kit consists of an Android app–based questionnaire to collect information on asthma symptoms and medication intake, Fitbit to track sleep and activity, the Peak Flow meter to monitor lung functions, and Foobot to monitor indoor air quality. The data on the patient’s outdoor environment were collected using third-party Web services based on the patient’s zip code. To date, 107 patients consented to participate in the study and were recruited from the Dayton Children’s Hospital, of which 83 patients completed the study as instructed.
Results
Patient-generated health data from the 83 patients who completed the study were included in the cohort-level analysis. Of the 19% (16/83) of patients deployed in spring, the symptoms of 63% (10/16) and 19% (3/16) of patients suggested pollen and particulate matter (PM2.5), respectively, to be their major asthma triggers. Of the 17% (14/83) of patients deployed in fall, symptoms of 29% (4/17) and 21% (3/17) of patients suggested pollen and PM2.5, respectively, to be their major triggers. Among the 28% (23/83) of patients deployed in winter, PM2.5 was identified as the major trigger for 83% (19/23) of patients. Similar correlations were not observed between asthma symptoms and factors such as ozone level, temperature, and humidity. Furthermore, 1 patient from each season was chosen to explain, in detail, his or her personalized triggers by observing temporal associations between triggers and asthma symptoms gathered using the kHealth asthma kit.
Conclusions
The continuous monitoring of pediatric asthma patients using the kHealth asthma kit generates insights on the relationship between their asthma symptoms and triggers across different seasons. This can ultimately inform personalized asthma management and intervention plans.
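The cohort-level analysis above rests on finding temporal associations between environmental readings and symptom reports. A minimal sketch of one such association test, assuming hypothetical daily PM2.5 readings and symptom counts (the kHealth study's actual analysis pipeline is not described here), is a Pearson correlation over aligned daily series:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical week of daily PM2.5 readings (µg/m³) and symptom counts.
pm25 = [8, 12, 35, 40, 15, 9, 33]
symptoms = [0, 1, 3, 4, 1, 0, 3]
r = pearson(pm25, symptoms)  # strong positive association
```

A high correlation over a season would be consistent with PM2.5 being a major trigger for that patient, as reported for the winter deployment; in practice, lagged effects and confounders (pollen, temperature) would also need to be examined.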
“…Generally, the fast-moving development environment and the tendency toward improving software reusability [83,91] mean that developers are often looking for easy-to-install frameworks or tools with minimal cognitive overhead, in an affordable and accessible way [29,95]. One way that industry has responded to the interest of non-experts in using ML and AI is through the provision of cognitive services: algorithmic processes inspired by human cognition for solving various narrow-AI problems, such as sentiment analysis of text, translation of text from one language to another, or visual recognition of objects and concepts depicted in images and video.…”
Machine-learned computer vision algorithms for tagging images are increasingly used by developers and researchers, having become popularized as easy-to-use "cognitive services." Yet these tools struggle with gender recognition, particularly when processing images of women, people of color and non-binary individuals. Socio-technical researchers have cited data bias as a key problem; training datasets often over-represent images of people and contexts that convey social stereotypes. The social psychology literature explains that people learn social stereotypes, in part, by observing others in particular roles and contexts, and can inadvertently learn to associate gender with scenes, occupations and activities. Thus, we study the extent to which image tagging algorithms mimic this phenomenon. We design a controlled experiment to examine the interdependence between algorithmic recognition of context and the depicted person's gender. In the spirit of auditing to understand machine behaviors, we create a highly controlled dataset of people images, imposed on gender-stereotyped backgrounds. Our methodology is reproducible and our code is publicly available. Evaluating five proprietary algorithms, we find that in three, gender inference is hindered when a background is introduced. Of the two that "see" both backgrounds and gender, it is the one whose output is most consistent with human stereotyping processes that is superior in recognizing gender. We discuss the accuracy-fairness trade-off, as well as the importance of auditing black boxes in better understanding this double-edged sword.
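The audit design above pairs each person image with a background-composited variant and compares gender-tag recall across the two conditions. A minimal sketch of that comparison, using a mock tagger in place of any real cognitive-service API (the actual services audited are proprietary and their calls are not reproduced here), could look like:

```python
def audit_gender_recognition(images, tagger):
    """Compare gender-tag recall on plain vs. background-composited
    versions of the same person images. `tagger` is a hypothetical
    stand-in for a cognitive-service call returning a list of tags."""
    def recall(variant):
        hits = sum(1 for img in images
                   if img["gender"] in tagger(img[variant]))
        return hits / len(images)
    return recall("plain"), recall("composited")

# Mock tagger that "loses" gender cues once a background is present,
# mimicking the hindered-inference behaviour reported for three services.
def mock_tagger(image_id):
    return ["person"] if "bg" in image_id else ["person", "woman"]

images = [{"gender": "woman",
           "plain": f"p{i}",
           "composited": f"p{i}_bg"} for i in range(10)]
plain_recall, comp_recall = audit_gender_recognition(images, mock_tagger)
# plain_recall = 1.0, comp_recall = 0.0: a large recall drop signals
# that background context interferes with gender inference.
```

In the real audit, the drop in recall between conditions, broken down by the stereotype congruence of the background, is what reveals whether a service's behaviour mirrors human stereotyping.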
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.