Abstract—Deep neural networks have achieved near-human accuracy in various classification and prediction tasks, including on image, text, speech, and video data. However, these networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process, incorporating these networks into mission-critical processes such as medical diagnosis, planning, and control, requires a level of trust to be associated with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the workings of the machine. In other words, the neural network should provide human-understandable justifications for its output, leading to insights about its inner workings. We call such models interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters or in terms of the input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability and categorize prior work along those dimensions. In the process, we perform a gap analysis of what remains to be done to improve model interpretability.
This paper describes the ongoing development of a robotic control architecture inspired by computational cognitive architectures from the discipline of cognitive psychology. The robotic control architecture combines symbolic and subsymbolic representations of knowledge into a unified control structure. The architecture is organized as a goal-driven, serially executing production system at the highest, symbolic level, and as a collection of simple, parallel-executing algorithms at the lowest, subsymbolic level. The goal is to create a system that progresses through the same cognitive developmental milestones as human infants. Common robotics problems of localization, object recognition, and object permanence are addressed within the specified framework.
The debate over symbolic versus sub-symbolic representations of human cognition has continued for thirty years, with little indication of a resolution. The argument is this: Does the human cognitive system use symbols as a representation of knowledge, processing those symbols and their respective constituents? Or does the human cognitive system use a distributed representation of knowledge, somehow processing this distributed representation in a complex and meaningful way? This paper argues for an integrated symbolic and sub-symbolic approach to the representation of cognition. Two lines of reasoning are used as evidence for this integrated approach: the Adaptive Character of Thought-Rational (ACT-R) cognitive architecture, and biology, where it is argued that symbolic and sub-symbolic representations of cognition form an intellectual continuum, with sub-symbolic representations at the low end and symbolic representations at the high end.
Increasingly, system developers are relying on modeling and simulation to support early design decisions. In turn, to support effective, timely use of models and simulations, verification, validation, and, in some cases, accreditation (VV&A) are required. The soldier-system analysis tools collectively known as Hardware vs. Manpower (HARDMAN) III underwent a formal VV&A process, the first of its type in the Army. The first phase comprised the core task-network modeling capability and the effects implemented as additions to or modifications of the task data: mental workload estimation and environmental degradation, personnel characteristics, and training. A review board of representative users, policy-makers, technical experts, and soldier proponents evaluated the findings against eight criteria: configuration management, software verification, documentation, data input requirements, model granularity, validity of modeling techniques and embedded algorithms, output, and analysis timelines. All criteria were satisfied, and formal accreditation was granted with only limited caveats.