The complexity and non-linear dynamics of the socio-motor phenomena underlying social interactions are often missed by observation methods that attempt to capture, describe, and rate the exchange in real time. Unbeknownst to the rater, the socio-motor behaviors of a dyad's members exert mutual influence on each other through subliminal mirroring and shared cohesiveness that escape the naked eye. Implicit in these ratings, nonetheless, is the assumption that the other participant of the social dyad has a nervous system identical to that of the interlocutor, and that sensory-motor information is processed similarly by both agents' brains. What happens when this is not the case? Here we use the Autism Diagnostic Observation Schedule (ADOS) to formally study social dyadic interactions, at the macro and micro levels of behavior, by combining observation with digital data from wearables. We find that integrating subjective and objective data reveals fundamentally new ways to improve standard clinical tools, even enabling the differentiation of females from males using the digital version of the test. More generally, this work offers a way to turn a traditional, gold-standard clinical instrument into an objective outcome measure of human social behaviors and treatment effectiveness.
In order to be useful, XAI explanations have to be faithful to the AI system they seek to elucidate and also interpretable to the people who engage with them. Multiple algorithmic methods exist for assessing faithfulness, but not for interpretability, which is typically assessed only through expensive user studies. Here we propose two complementary metrics to algorithmically evaluate the interpretability of saliency map explanations. One metric assesses perceptual interpretability by quantifying the visual coherence of the saliency map. The second metric assesses semantic interpretability by capturing the degree of overlap between the saliency map and textbook features, the features human experts use to make a classification. We use a melanoma dataset and a deep neural network classifier as a case study to explore how our two interpretability metrics relate to each other and to a faithfulness metric. Across six commonly used saliency methods, we find that none achieves high scores across all three metrics for all test images, but that different methods perform well in different regions of the data distribution. This variation between methods can be leveraged to consistently achieve high interpretability and faithfulness by using our metrics to inform saliency mask selection on a case-by-case basis. Our interpretability metrics provide a new way to evaluate saliency-based explanations and allow for the adaptive combination of saliency-based explanation methods.
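To make the two metrics and the per-image selection scheme concrete, the sketch below uses simple stand-in formulations: the fraction of salient mass in the largest connected region as a proxy for visual coherence, and intersection-over-union with an expert-annotated feature mask as a proxy for textbook-feature overlap. The function names, thresholds, and exact formulas are illustrative assumptions, not the paper's actual definitions, and the faithfulness scores are assumed to be precomputed elsewhere.

```python
# Illustrative sketch only; the exact metric definitions in the paper may differ.
import numpy as np
from scipy import ndimage


def perceptual_interpretability(saliency, threshold=0.5):
    """Proxy for visual coherence: fraction of the thresholded salient area
    that falls in the largest connected region (1.0 = one compact blob)."""
    binary = saliency >= threshold * saliency.max()
    labels, n_regions = ndimage.label(binary)
    if n_regions == 0:
        return 0.0
    sizes = ndimage.sum(binary, labels, index=range(1, n_regions + 1))
    return float(sizes.max() / sizes.sum())


def semantic_interpretability(saliency, expert_mask, threshold=0.5):
    """Proxy for textbook-feature overlap: IoU between the thresholded
    saliency map and a binary mask of expert-annotated features."""
    binary = saliency >= threshold * saliency.max()
    intersection = np.logical_and(binary, expert_mask).sum()
    union = np.logical_or(binary, expert_mask).sum()
    return float(intersection / union) if union > 0 else 0.0


def select_saliency_method(saliency_maps, expert_mask, faithfulness_scores):
    """Per-image adaptive selection: pick the saliency method whose map
    maximizes the (here, unweighted) sum of the three scores.

    saliency_maps: dict mapping method name -> 2D saliency array.
    faithfulness_scores: dict mapping method name -> precomputed faithfulness.
    """
    def combined(name):
        s = saliency_maps[name]
        return (perceptual_interpretability(s)
                + semantic_interpretability(s, expert_mask)
                + faithfulness_scores[name])

    return max(saliency_maps, key=combined)
```

Under this reading, "adaptive combination" amounts to scoring each candidate explanation per test image and keeping the best one, so a method that is weak on average can still be chosen for the images where it excels; how the three scores are weighted is a design choice not specified here.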
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.