Background: Tools proposed to triage patient acuity in COVID-19 infection have only been validated in hospital populations. We estimated the accuracy of five risk-stratification tools recommended to predict severe illness and compared their accuracy with existing clinical decision making in a prehospital setting.
Methods: We conducted an observational cohort study using linked ambulance service data for patients attended by Emergency Medical Service (EMS) crews in the Yorkshire and Humber region of England between 26 March 2020 and 25 June 2020, assessing the performance of the Pandemic Respiratory Infection Emergency System Triage (PRIEST) tool, the National Early Warning Score, version 2 (NEWS2), the WHO algorithm, CRB-65 and the Pandemic Medical Early Warning Score (PMEWS) in patients with suspected COVID-19 infection. The primary outcome was death or need for organ support.
Results: Of the 7549 patients in our cohort, 17.6% (95% CI 16.8% to 18.5%) experienced the primary outcome. NEWS2, PMEWS, the PRIEST tool and the WHO algorithm identified patients at risk of adverse outcomes with high sensitivity (>0.95) and specificity ranging from 0.3 (NEWS2) to 0.41 (PRIEST tool). The high sensitivity of NEWS2 and PMEWS was achieved by using lower thresholds than previously recommended. On index assessment, 65% of patients were transported to hospital, and the EMS decision to transfer patients achieved a sensitivity of 0.84 (95% CI 0.83 to 0.85) and a specificity of 0.39 (95% CI 0.39 to 0.40).
Conclusion: Use of NEWS2, PMEWS, the PRIEST tool or the WHO algorithm could improve the sensitivity of EMS triage of patients with suspected COVID-19 infection. Use of the PRIEST tool would improve the sensitivity of triage without increasing the number of patients conveyed to hospital.
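The sensitivity and specificity reported above are computed from a standard 2x2 confusion matrix. The sketch below is purely illustrative; the counts are hypothetical, chosen only to reproduce the reported EMS figures of 0.84 and 0.39, and are not the study's data.

```python
# Minimal sketch: sensitivity and specificity of a binary triage decision
# (e.g. "convey to hospital" vs "do not convey") from a 2x2 confusion matrix.
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 840 of 1000 patients with the adverse outcome conveyed
# (true positives), 2340 of 6000 without the outcome not conveyed (true negatives).
sens, spec = sensitivity_specificity(tp=840, fn=160, tn=2340, fp=3660)
print(round(sens, 2), round(spec, 2))  # prints: 0.84 0.39
```

A sensitive triage tool misses few patients who go on to deteriorate, at the cost of conveying more patients who would have been fine, which is the trade-off the abstract describes.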
Traditional Artificial Intelligence (AI) technologies used in developing smart-city solutions, Machine Learning (ML) and, more recently, Deep Learning (DL), rely more on representative training datasets and feature engineering and less on available domain expertise. We argue that such an approach to solution development makes the outcomes less explainable, i.e., it is often not possible to explain the results of the model. There is growing concern among city policymakers about this lack of explainability in AI solutions, which is considered a major hindrance to wider acceptance of, and trust in, such AI-based solutions. In this work, we survey the concept of ‘explainable deep learning’ as a subset of the ‘explainable AI’ problem and propose a new solution using Semantic Web technologies, demonstrated with a smart-city flood monitoring application in the context of a European Commission-funded project. Monitoring gullies and drainage in geographical areas susceptible to flooding is an important aspect of any flood monitoring solution. Typical solutions to this problem use cameras to capture real-time images of the affected areas, showing objects such as leaves and plastic bottles, and build a DL-based classifier to detect these objects and classify blockages based on their presence and coverage in the images. In this work, we propose an Explainable AI solution that combines DL and Semantic Web technologies to build a hybrid classifier. In this hybrid classifier, the DL component detects object presence and coverage level, while semantic rules designed in close consultation with experts carry out the classification. By drawing on expert knowledge of the flooding context, our hybrid classifier provides the flexibility to categorise an image using objects and their coverage relationships.
Experimental results on a real-world use case show that this hybrid approach yields, on average, an 11% improvement in image classification performance (F-measure) compared with a DL-only classifier. It also has the distinct advantage of integrating experts’ knowledge in defining the decision-making rules that represent complex circumstances, and of using that knowledge to explain the results.
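The hybrid classification step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the object labels, coverage thresholds and class names are assumptions, and plain Python conditionals stand in for the Semantic Web rules designed with domain experts.

```python
# Minimal sketch of the hybrid classification step: a DL detector is assumed
# to output objects with coverage fractions; expert rules (here plain Python
# standing in for the paper's semantic rules) assign a blockage class.
# Object labels, thresholds and class names are illustrative assumptions.
def classify_blockage(detections):
    """detections: dict mapping detected object label -> fraction of gully covered."""
    debris = sum(cov for label, cov in detections.items()
                 if label in {"leaves", "plastic_bottle", "litter"})
    if debris >= 0.5:
        return "blocked"
    if debris >= 0.2:
        return "partially_blocked"
    return "clear"

print(classify_blockage({"leaves": 0.4, "plastic_bottle": 0.15}))  # prints: blocked
```

The point of the design is that the rules, not the network, make the final decision, so a prediction can be explained by citing the detected objects, their coverage, and the specific rule that fired.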
Abstract: The impact of crowdsourcing and citizen science activities on academia, business, governance and society has been enormous. This is more prevalent today, with citizens and communities collaborating with organizations, businesses and authorities to contribute in a variety of ways, from being mere data providers to being key stakeholders in various decision-making processes. The "Crowdsourcing for observations from Satellites" project is a recently concluded study supported by demonstration projects funded by the European Space Agency (ESA). The objective of the project was to investigate the different facets of how crowdsourcing and citizen science affect the validation, use and enhancement of Observations from Satellites (OS) products and services. This paper presents our findings from a stakeholder analysis activity involving participants who are experts in crowdsourcing and citizen science for Earth Observation. The activity identified three critical areas that need attention from the community and provides suggestions to help address some of the challenges identified.
Abstract. Semantic formalisms represent content in a uniform way according to ontologies. This enables manipulation and reasoning via automated means (e.g. Semantic Web services), but the resulting structure, which originates from knowledge-representation motivations, limits the user's ability to explore the semantic data. We show how, for user consumption, visualizing semantic data along easily graspable dimensions (e.g. space and time) provides effective sense-making of the data. In this paper, we look holistically at the interaction between users and semantic data, and propose multiple visualization strategies and dynamic filters to support the exploration of semantically rich data. We discuss a user evaluation and how interaction challenges could be overcome to create an effective user-centred framework for the visualization and manipulation of semantic data. The approach has been implemented and evaluated on a real company archive.
Abstract. The manufacturing industry offers a huge range of opportunities and challenges for exploiting Semantic Web technologies. Collating heterogeneous data into semantic knowledge repositories can provide immense benefits to companies; however, the power of such knowledge can only be realised if end users are given visual means to explore and analyse their datasets flexibly and efficiently. This paper presents a high-level approach to unifying, structuring and visualising document collections using Semantic Web and information extraction technologies.
Introduction: Home assessments are integral to the occupational therapy role, providing opportunities to personalise and integrate care. However, they are resource intensive and declining in number. A 3-month service development within one United Kingdom National Health Service acute hospital setting explored the concept of using digital technology to undertake remote home assessments.
Methods: Four work streams explored the concept's feasibility and acceptability: real-world testing; user consultations; narrative case study collection; and exploration of the resource use of traditional visits. Project participants were occupational therapists and patient and public representatives recruited via snowball sampling or critical case sampling. Qualitative data were thematically analysed to identify key themes; analysis of quantitative data provided descriptive statistics.
Findings: The remote home visit concept was feasible within four specific contexts. Qualitative themes suggest that acceptability depends on visitor safety, visitor training, visitor induction and standardisation of practice. Consultees perceived the approach to have potential for resource savings, personalisation and integration of care. Barriers to acceptance included data security, data governance, technology failure and a perceived threat to occupational therapists' role and skills.
Conclusion: Applying digital technology to occupational therapy home assessment appears feasible and acceptable within a specific context. Further research is recommended to develop the technology, and to test and investigate the perceived benefits within wider contexts and stakeholder groups.
One of the open problems in Semantic Web research is which tools should be provided to users to explore linked data. This is even more urgent now that massive amounts of linked data are being released by governments worldwide. The development of single, dedicated visualization applications is increasing, but the problem of exploring unknown linked data to gain a good understanding of what it contains remains open. An effective generic solution must take into account the user's point of view, tasks and interaction, as well as the system's capabilities and the technical constraints the technology imposes. This paper is a first step towards understanding the implications for both user and system, through an evaluation of our dashboard-based approach. Although we observe high user acceptance of the dashboard approach, the paper also highlights technical challenges, arising from the complexities of current infrastructure, that need to be addressed when visualising linked data. In light of the findings, guidelines for the development of linked data visualization (and manipulation) tools are provided.