Abstract-Deep neural networks have achieved near-human accuracy in a variety of classification and prediction tasks involving image, text, speech, and video data. However, these networks are still treated largely as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process, incorporating these networks into mission-critical applications such as medical diagnosis, planning, and control, requires a level of trust in the machine's output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the workings of the machine. In other words, the neural network should provide human-understandable justifications for its output, leading to insights about its inner workings. We call such models interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters or in terms of the input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability and categorize prior work along those dimensions. In the process, we perform a gap analysis of what remains to be done to improve model interpretability.
Saliency maps are a popular approach to creating post-hoc explanations of image classifier outputs. These methods produce estimates of the relevance of each pixel to the classification output score, which can be displayed as a saliency map that highlights important pixels. Despite a proliferation of such methods, little effort has been made to quantify how good these saliency maps are at capturing the true relevance of the pixels to the classifier output (i.e. their “fidelity”). We therefore investigate existing metrics for evaluating the fidelity of saliency methods (i.e. saliency metrics). We find that there is little consistency in the literature in how such metrics are calculated, and show that such inconsistencies can have a significant effect on the measured fidelity. Further, we apply measures of reliability developed in the psychometric testing literature to assess the consistency of saliency metrics when applied to individual saliency maps. Our results show that saliency metrics can be statistically unreliable and inconsistent, indicating that comparative rankings between saliency methods generated using such metrics can be untrustworthy.
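To make the notion of a saliency (fidelity) metric concrete, the sketch below implements a deletion-style metric, a common family of such metrics in which the most salient pixels are progressively removed and the drop in the classifier's score is recorded. This is an illustrative assumption, not the exact formulation evaluated in the paper; in particular, the baseline value, step count, and normalization are precisely the kinds of implementation choices the abstract notes can vary between papers and affect the measured fidelity.

```python
import numpy as np

def deletion_score(model, image, saliency, steps=20, baseline=0.0):
    """Illustrative 'deletion' fidelity metric: remove pixels in decreasing
    order of saliency and record how quickly the class score drops.
    `model` is assumed to map an HxW image array to a scalar class score.
    """
    order = np.argsort(saliency.ravel())[::-1]   # most salient pixels first
    perturbed = image.copy()
    scores = [model(perturbed)]
    chunk = max(1, order.size // steps)
    for i in range(0, order.size, chunk):
        perturbed.ravel()[order[i:i + chunk]] = baseline  # "delete" pixels
        scores.append(model(perturbed))
    # Lower area under the score curve = faster drop = higher fidelity.
    return np.trapz(scores) / len(scores)
```

A faithful saliency map should cause the score to collapse quickly, yielding a lower value than a random ordering would; the reliability concerns raised above apply to exactly such per-map comparisons.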
There are a large number of workflow systems designed for various scientific domains, including support for the Internet of Things (IoT). One such system is Node-RED, which is designed to bring workflow-based programming to IoT. However, the majority of scientific workflow systems, and specifically systems like Node-RED, are designed to operate in a fixed networked environment and rely on a central point of coordination to manage the workflow. The main focus of the work described in this paper is to investigate means by which Node-RED workflows can be migrated into a decentralized execution environment, so that such workflows can run on edge networks, where nodes are extremely transient in nature. We demonstrate the feasibility of this approach by showing how a Node-RED based traffic congestion workflow can be migrated into a decentralized environment. The traffic congestion algorithm is implemented as a set of Web services within Node-RED, and we have architected and implemented a system that proxies the centralized Node-RED services using cognitively aware wrapper services designed to operate in a decentralized environment. Our cognitive services use a Vector Symbolic Architecture (VSA) to semantically represent service descriptions and workflows in a way that can be unraveled on the fly without any central point of control. The VSA-based system is capable of parsing Node-RED workflows and migrating them to a decentralized environment for execution, providing a way to use Node-RED as a front-end graphical composition tool for decentralized workflows.
Numerous workflow systems span multiple scientific domains and environments, and for the Internet of Things (IoT), Node-RED offers an attractive Web-based user interface for executing IoT service-based workflows. However, like most workflow systems, it coordinates the workflow centrally and cannot run in more transient environments where nodes are mobile. To address this gap, we show how Node-RED workflows can be migrated into a decentralized execution environment for operation on mobile ad-hoc networks, and we demonstrate this by converting a Node-RED based traffic congestion detection workflow to operate in a decentralized environment. The approach uses a Vector Symbolic Architecture (VSA) to dynamically convert Node-RED applications into a compact semantic vector representation that encodes the service interfaces and the workflow in which they are embedded. By extending existing service interfaces with a simple cognitive layer that can interpret and exchange the vectors, we show how the required services can be dynamically discovered and interconnected into the required workflow in a completely decentralized manner. The resulting system provides a convenient environment in which the Node-RED front-end graphical composition tool can be used to orchestrate decentralized workflows. In this paper, we further extend this work by introducing a new dynamic VSA vector compression scheme that compresses vectors for on-the-wire communication, thereby reducing communication bandwidth while maintaining the semantic information content. This algorithm exploits the holographic properties of the symbolic vectors to perform compression, taking into consideration the number of combined vectors along with similarity bounds that determine conflict with other encoded vectors used in the same context. The resulting savings make this approach extremely efficient for discovery in service-based decentralized workflows.
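The abstracts above do not give the encoding details, but the core VSA idea of representing structure as high-dimensional vectors that can be "unraveled" without central control can be sketched with standard bipolar-vector binding and bundling. Everything below (vector names, the two-step workflow, the role vectors) is an illustrative assumption, not the papers' actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high-dimensional bipolar vectors, typical for VSA schemes

def random_vec():
    return rng.choice([-1, 1], size=D)

def bind(a, b):      # elementwise multiply: role-filler binding (self-inverse)
    return a * b

def bundle(*vs):     # majority-vote superposition of several vectors
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):       # normalized dot product (cosine for bipolar vectors)
    return float(a @ b) / D

# Hypothetical service descriptors: encode a tiny two-step workflow by
# binding each filler (service) to a role (its slot in the workflow).
svc_role, next_role = random_vec(), random_vec()
camera, counter = random_vec(), random_vec()
step1 = bundle(bind(svc_role, camera), bind(next_role, counter))

# A node holding `step1` can unbind a role to discover what comes next,
# with no central coordinator involved:
recovered = bind(step1, next_role)   # approximately `counter`, plus noise
assert sim(recovered, counter) > sim(recovered, camera)
```

Because binding and bundling preserve approximate similarity under superposition (the "holographic" property mentioned above), such vectors can in principle be truncated or compressed while the dominant components remain recoverable, which is the intuition behind the compression scheme described in the abstract.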
Approximately 20% of the working population report symptoms of fatigue at work. The aim of this study was to investigate whether an alternative mobile version of the 'gold standard' Psychomotor Vigilance Task (PVT) could be used to provide an objective indicator of fatigue in staff working in applied safety-critical settings such as train driving, hospitals, emergency services, and law enforcement, using different mobile devices. Twenty-six participants (mean age 20 years) completed a 25-minute reaction-time study using an alternative mobile version of the Psychomotor Vigilance Task (m-PVT) implemented on either an Apple iPhone 6s Plus or a Samsung Galaxy Tab 4. Participants attended two sessions, one in the morning and one in the afternoon, held on two consecutive days in counterbalanced order. The iPhone 6s Plus generated both mean response speeds (1/RTs) and mean reaction times (RTs) comparable to those reported in the literature, while the Galaxy Tab 4 generated significantly lower 1/RTs and slower RTs than the iPhone 6s Plus. Furthermore, the iPhone 6s Plus was sensitive enough to detect lower mean response speeds (1/RTs) and significantly slower mean reaction times (RTs) after 10 minutes on the m-PVT. In contrast, the Galaxy Tab 4 produced a mean number of lapses that reached significance after 5 minutes on the m-PVT. These findings indicate that the m-PVT could be used to provide an objective indicator of fatigue in staff working in applied safety-critical settings such as train driving, hospitals, emergency services, and law enforcement.
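For readers unfamiliar with PVT outcome measures, the three quantities discussed above (mean RT, mean response speed 1/RT, and lapse count) are simple summaries of the raw reaction times. The sketch below uses the conventional 500 ms lapse threshold; that threshold, and the function itself, are assumptions for illustration, not necessarily this study's exact analysis.

```python
import numpy as np

def pvt_summary(rts_ms, lapse_threshold_ms=500):
    """Summarize PVT reaction times (in milliseconds) using the measures
    common in the literature: mean RT, mean response speed (1/RT in
    responses per second), and lapse count (RTs above a threshold)."""
    rts = np.asarray(rts_ms, dtype=float)
    return {
        "mean_rt_ms": rts.mean(),
        "mean_speed": (1000.0 / rts).mean(),   # responses per second
        "lapses": int((rts > lapse_threshold_ms).sum()),
    }
```

Fatigue typically shows up as a rising mean RT, a falling mean 1/RT, and an increasing lapse count over time on task, which is why the 5- and 10-minute windows above are informative.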
Abstract-Our particular research in the Distributed Analytics and Information Science International Technology Alliance (DAIS ITA) is focused on "Anticipatory Situational Understanding for Coalitions". This paper takes the concrete example of detecting and predicting traffic congestion in the UK road transport network from existing generic sensing sources, such as real-time CCTV imagery and video, which are publicly available for this purpose. This scenario has been chosen carefully: we believe that in a typical city, the data relevant to transport network congestion is not generally available from a single unified source, and that different organizations in the city (e.g. the weather office, the police force, the general public, etc.) have their own sensors which can provide information potentially relevant to the traffic congestion problem. In this paper we look at the problem of (a) identifying congestion using cameras that, for example, the police department may have access to, (b) fusing that with data from other agencies in order to (c) augment any base data provided by the official transportation department feeds. This coalition approach requires using standard cameras for supplementary tasks such as car counting, and we examine how well those tasks can be done with RNN/CNN models and other distributed machine learning processes. We provide details of an initial four-layer architecture and potential tooling to enable the rapid formation of human/machine hybrid teams in this setting, with a focus on opportunistic and distributed processing of the data at the edge of the network. In future work we plan to integrate additional data sources to further augment the core imagery data.
In this paper we provide a critical analysis, with metrics, that will inform guidelines for designing distributed systems for Collective Situational Understanding (CSU). CSU requires both collective insight (i.e., accurate and deep understanding of a situation derived from uncertain and often sparse data) and collective foresight (i.e., the ability to predict what will happen in the future). When it comes to complex scenarios, the need for distributed CSU naturally emerges, as a single monolithic approach is not only infeasible but also undesirable. We therefore propose a principled, critical analysis of AI techniques that can support specific CSU tasks, in order to derive guidelines for designing distributed systems for CSU.