Deep neural networks are widely used for classification. These deep models often suffer from a lack of interpretability---they are particularly difficult to understand because of their non-linear nature. As a result, neural networks are often treated as "black box" models, and in the past, have been trained purely to optimize the accuracy of predictions. In this work, we create a novel network architecture for deep learning that naturally explains its own reasoning for each prediction. This architecture contains an autoencoder and a special prototype layer, where each unit of that layer stores a weight vector that resembles an encoded training input. The encoder of the autoencoder allows us to do comparisons within the latent space, while the decoder allows us to visualize the learned prototypes. The training objective has four terms: an accuracy term, a term that encourages every prototype to be similar to at least one encoded input, a term that encourages every encoded input to be close to at least one prototype, and a term that encourages faithful reconstruction by the autoencoder. The distances computed in the prototype layer are used as part of the classification process. Since the prototypes are learned during training, the learned network naturally comes with explanations for each prediction, and the explanations are loyal to what the network actually computes.
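The four-term objective described above can be sketched numerically. The following is an illustrative reconstruction, not the authors' implementation: the linear encoder/decoder, the prototype matrix `P`, and the lambda weights are all invented stand-ins, and the classification (cross-entropy) term is only noted in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 inputs of dimension 6, latent dimension 3, 4 prototypes.
X = rng.normal(size=(8, 6))
W_enc = rng.normal(size=(6, 3))   # linear "encoder" (stand-in for a deep encoder)
W_dec = rng.normal(size=(3, 6))   # linear "decoder"
P = rng.normal(size=(4, 3))       # prototype-layer weight vectors in latent space

Z = X @ W_enc                     # encoded inputs
X_hat = Z @ W_dec                 # autoencoder reconstructions

# Squared distances between every encoded input and every prototype.
D = ((Z[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)   # shape (8, 4)

# Term 2: every prototype should be close to at least one encoded input.
r1 = D.min(axis=0).mean()
# Term 3: every encoded input should be close to at least one prototype.
r2 = D.min(axis=1).mean()
# Term 4: faithful reconstruction by the autoencoder.
recon = ((X - X_hat) ** 2).mean()

# Term 1 (not shown) is a cross-entropy accuracy term on logits computed
# from the prototype distances D; the lambda weights here are hypothetical.
lam1, lam2, lam_r = 0.05, 0.05, 0.05
penalty = lam1 * r1 + lam2 * r2 + lam_r * recon
print(round(float(penalty), 4))
```

Because the classifier's logits are functions of the distances in `D`, the nearest prototypes (visualized through the decoder) directly explain each prediction.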
Vision models are interpretable when they classify objects on the basis of features that a person can directly understand. Recently, methods relying on visual feature prototypes have been developed for this purpose. However, in contrast to how humans categorize objects, these approaches have not yet made use of any taxonomical organization of class labels. With such an approach, for instance, we may see why a chimpanzee is classified as a chimpanzee, but not why it was considered to be a primate or even an animal. In this work we introduce a model that uses hierarchically organized prototypes to classify objects at every level in a predefined taxonomy. Hence, we may find distinct explanations for the prediction an image receives at each level of the taxonomy. The hierarchical prototypes enable the model to perform another important task: interpretably classifying images from previously unseen classes at the level of the taxonomy to which they correctly relate, e.g. classifying a hand gun as a weapon, when the only weapons in the training data are rifles. With a subset of ImageNet, we test our model against its counterpart black-box model on two tasks: 1) classification of data from familiar classes, and 2) classification of data from previously unseen classes at the appropriate level in the taxonomy. We find that our model performs approximately as well as its counterpart black-box model while allowing for each classification to be interpreted.
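The level-wise prediction described above, including coarse-only classification of a previously unseen class, can be sketched as follows. The tiny taxonomy, hand-set prototype vectors, and distance threshold are all invented for illustration; in the actual model the prototypes are learned and the encodings come from a trained network.

```python
import numpy as np

# A tiny two-level taxonomy (invented example): coarse class -> fine classes.
taxonomy = {"animal": ["chimpanzee", "dog"], "weapon": ["rifle"]}

# Hand-set prototype vectors in a 4-d latent space, one per taxonomy node.
coarse_protos = {
    "animal": np.array([0.0, 0.0, 0.0, 0.0]),
    "weapon": np.array([10.0, 0.0, 0.0, 0.0]),
}
fine_protos = {
    "chimpanzee": np.array([0.0, 1.0, 0.0, 0.0]),
    "dog": np.array([0.0, -1.0, 0.0, 0.0]),
    "rifle": np.array([10.0, 1.0, 0.0, 0.0]),
}

def classify(z, threshold=2.0):
    """Predict at every taxonomy level; fall back to the coarse label alone
    when no fine prototype is close enough (a previously unseen subclass)."""
    coarse = min(coarse_protos, key=lambda c: np.linalg.norm(z - coarse_protos[c]))
    # Only compare against fine prototypes under the chosen coarse class.
    fine = min(taxonomy[coarse], key=lambda f: np.linalg.norm(z - fine_protos[f]))
    if np.linalg.norm(z - fine_protos[fine]) > threshold:
        return (coarse, None)   # e.g. recognized as a weapon, but no known weapon
    return (coarse, fine)

# An encoding near the rifle prototype is classified at both levels.
print(classify(np.array([10.01, 1.01, 0.01, 0.01])))  # -> ('weapon', 'rifle')
# A weapon-like encoding far from every fine prototype (e.g. a hand gun)
# is still correctly placed at the coarse level.
print(classify(np.array([10.0, -3.0, 0.0, 0.0])))     # -> ('weapon', None)
```

Restricting the fine-level comparison to prototypes under the chosen coarse class is what yields a distinct, inspectable explanation at each level of the taxonomy.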
Objective: To examine frontline providers' experiences implementing home-based palliative care (HBPC) covered by a private health insurer in partnership with community-based hospice, home health, and Accountable Care Organizations.

Study setting: Primary data collection at three community-based hospice and home health organizations in Northern and Southern California at the outset of the new private payer-contracted HBPC.

Study design: Qualitative focus groups with frontline HBPC providers.

Data collection: Focus groups were guided by a nine-item, semi-structured research protocol, audio-recorded, transcribed verbatim, and analyzed by two independent coders using a grounded theory approach.

Principal findings: Participants (n = 24) were mostly White (79.2%), female (91.7%), aged 39 years or less (62.5%), and from diverse disciplines. Three major themes were identified: (1) patient referrals, (2) organizational factors, and (3) HBPC reimbursement. Findings highlight barriers and facilitators to implementing HBPC covered by an insurer, including the organization's community reputation, the dynamic/"teaminess" of the HBPC team, having a site champion/"quarterback," and issues arising from a siloed medical system. Participants also discussed challenges with patient referrals, specifically a lack of palliative care knowledge (among both providers and patients/families) and poor communication with patients referred to HBPC.

Conclusions: This study found that despite a favorable perception of payer-contracted HBPC by frontline providers, barriers and facilitators persist, with patient accrual/referral paramount.

Keywords: community-based organizations, frontline clinicians, health insurance, home healthcare, home-based palliative care, hospice, person-centered care, qualitative methods

What is known on this topic:
• Home-based palliative care (HBPC) is an important way to deliver person-centered care for patients and caregivers affected by serious illnesses.
• However, without a reimbursement stream, implementation and sustainment of HBPC programs outside of closed health systems have been stymied.

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Torrie Fields was employed by Blue Shield of California at the time this study was conducted and for a portion of manuscript development.