A backdoored deep learning (DL) model behaves normally on clean inputs but misbehaves on trigger inputs as the backdoor attacker desires, posing severe consequences to DL model deployments, particularly in security-sensitive applications such as face recognition and autonomous driving. Great efforts have been made to mitigate such newly revealed adversarial attacks. Nonetheless, state-of-the-art defenses are either limited to specific backdoor attacks (i.e., source-agnostic attacks) or non-user-friendly in that machine learning (ML) expertise and/or expensive computing resources are required. This work observes that all existing backdoor attacks share an inadvertent and inevitable intrinsic weakness, termed non-transferability: a trigger input that hijacks a backdoored model is ineffective against another model that has not been implanted with the same backdoor. With this key observation, we propose non-transferability enabled backdoor detection (NTD) to identify trigger inputs for a model-under-test (MUT) at run-time. Specifically, NTD allows a potentially backdoored MUT to predict a class for an input. Meanwhile, NTD leverages a feature extractor (FE) to extract feature vectors for the input and for a group of samples randomly picked from its predicted class, and then compares their similarity in the FE's latent space. If the similarity is low, the input is deemed an adversarial trigger input; otherwise, it is benign. The FE is a free pre-trained model privately reserved by the user from open platforms (e.g., ModelZoo), so NTD requires neither ML expertise nor costly computation from the user. As the FE and the MUT come from different sources (the former can indeed be provided by a reputable party), the attacker is very unlikely to insert the same backdoor into both of them.
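
To make the run-time detection flow concrete, the following is a minimal sketch of the check described above, assuming a cosine-similarity comparison between feature vectors and a tunable decision threshold. The function names (`ntd_detect`, `mut_predict`, `feature_extract`), the number of reference samples, and the threshold value are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def ntd_detect(x, mut_predict, feature_extract, class_samples,
               n_samples=20, threshold=0.5):
    """Sketch of NTD run-time trigger detection.

    x               : incoming input to be checked
    mut_predict     : the (potentially backdoored) model-under-test; returns a class label
    feature_extract : the privately reserved feature extractor (FE); returns a 1-D feature vector
    class_samples   : dict mapping class label -> list of clean reference samples
    n_samples       : number of reference samples drawn from the predicted class (assumed value)
    threshold       : similarity threshold, tuned on clean data in practice (assumed value)
    """
    # 1. Let the MUT predict a class for the input.
    pred = mut_predict(x)

    # 2. Randomly pick a group of clean samples from the predicted class.
    refs = class_samples[pred]
    idx = np.random.choice(len(refs), size=min(n_samples, len(refs)), replace=False)

    # 3. Extract feature vectors with the FE and compute cosine similarity
    #    between the input and each reference sample in the FE's latent space.
    fx = feature_extract(x)
    sims = []
    for i in idx:
        fr = feature_extract(refs[i])
        sims.append(np.dot(fx, fr) /
                    (np.linalg.norm(fx) * np.linalg.norm(fr) + 1e-12))

    # 4. Low average similarity flags the input as a trigger input.
    is_trigger = float(np.mean(sims)) < threshold
    return is_trigger, pred
```

Because the FE is obtained independently of the MUT, a trigger input that fools the MUT is expected to land far from clean samples of the predicted class in the FE's feature space, which is exactly what the low-similarity test exploits.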