As an emerging field in machine learning, Explainable AI (XAI) has shown remarkable performance in interpreting the decisions made by Convolutional Neural Networks (CNNs). To produce visual explanations for CNNs, methods based on class activation mapping and randomized input sampling have gained great popularity. However, attribution methods based on these techniques produce low-resolution, blurry explanation maps that limit their explanatory power. To circumvent this issue, we turn to visualizations drawn from multiple layers. In this work, we collect visualization maps from multiple layers of the model using an attribution-based input sampling technique and aggregate them to reach a fine-grained and complete explanation. We also propose a layer selection strategy that applies to the whole family of CNN-based models, under which our extraction framework visualizes the last layer of each convolutional block of the model. Moreover, we empirically analyze how well the derived lower-level information enhances the represented attributions. Comprehensive experiments on shallow and deep models trained on natural and industrial datasets, using both ground-truth and model-truth based evaluation metrics, validate our proposed algorithm: it meets or outperforms state-of-the-art methods in explanation ability and visual quality, and remains stable regardless of the size of the objects or instances to be explained.
Recent advancements in signal processing and communication systems have led to the evolution of an intriguing concept referred to as the Internet of Things (IoT). Embracing the IoT evolution, there has been a surge of recent interest in localization and tracking within indoor environments based on Bluetooth Low Energy (BLE) technology. The basic motive behind BLE-enabled IoT applications is to provide advanced residential and enterprise solutions in an energy-efficient and reliable fashion. Although different state estimation (SE) methodologies, ranging from Kalman filters and particle filters to multiple-model solutions, have recently been utilized for BLE-based indoor localization, there is a need for ever more accurate and real-time algorithms. The main challenge is that multipath fading and drastic fluctuations in the indoor environment result in complex non-linear, non-Gaussian estimation problems. This paper focuses on an alternative to the existing filtering techniques and introduces and discusses the incorporation of the Belief Condensation Filter (BCF) for localization via BLE-enabled beacons. The BCF belongs to the universal-approximation family of densities, achieving performance-bound accuracy and efficiency in sequential SE and Bayesian tracking. It is a resilient filter in harsh environments where nonlinearities and non-Gaussian noise profiles persist, as in indoor localization applications.
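To make the sequential-estimation setting concrete, here is a minimal particle filter tracking the distance to a single BLE beacon from noisy RSSI readings. This is explicitly not the BCF the abstract introduces — the BCF condenses the posterior into a compact mixture rather than a particle cloud — but the predict/update/resample Bayesian cycle it approximates is the same. The log-distance path-loss model, the `tx_power` and `path_loss_n` defaults, and the random-walk motion model are all assumptions for illustration.

```python
import numpy as np

def rssi_particle_filter(rssi_seq, n_particles=500, tx_power=-59.0,
                         path_loss_n=2.0, noise_std=2.0, seed=0):
    """Estimate beacon distance per time step from an RSSI sequence.

    Assumed measurement model: rssi = tx_power - 10 * n * log10(d).
    """
    rng = np.random.default_rng(seed)
    d = rng.uniform(0.5, 10.0, n_particles)  # initial particle distances (m)
    estimates = []
    for rssi in rssi_seq:
        # Predict: random-walk motion model on distance.
        d = np.clip(d + rng.normal(0.0, 0.2, n_particles), 0.1, None)
        # Update: Gaussian likelihood of the observed RSSI per particle.
        pred = tx_power - 10.0 * path_loss_n * np.log10(d)
        w = np.exp(-0.5 * ((rssi - pred) / noise_std) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * d)))  # posterior-mean distance
        # Resample particles in proportion to their weights.
        d = d[rng.choice(n_particles, n_particles, p=w)]
    return estimates
```

A full localization system would fuse several beacons and track a 2-D position; the point here is only the non-linear, non-Gaussian measurement model that motivates moving beyond Kalman-style filters.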