The emergence of large-scale pre-trained language models (PLMs), such as ChatGPT, creates opportunities for malicious actors to disseminate disinformation, necessitating the development of automated techniques for detecting machine-generated content. However, current approaches, which predominantly rely on fine-tuning a PLM, struggle to identify text that falls outside the detector's training corpus. This situation is typical in practice, as no training corpus can encompass every conceivable disinformation domain. To overcome these limitations, we introduce STADEE, a STAtistics-based DEEp detection method that integrates essential statistical features of text with a sequence-based deep classifier. We utilize various statistical features, such as the probability, rank, and cumulative probability of each token, as well as the information entropy of the distribution at each position. Cumulative probability is especially significant, as it is explicitly designed for nucleus sampling, currently the most prevalent text generation algorithm. To assess the efficacy of our proposed technique, we employ and develop three distinct datasets covering various domains and models: HC3-Chinese, ChatGPT-CNews, and CPM-CNews. Based on these datasets, we establish three separate experimental configurations, namely in-domain, out-of-domain, and in-the-wild, to evaluate the generalizability of our detectors. Experimental outcomes reveal that STADEE achieves an F1 score of 87.05% in the in-domain setting, a 9.28% improvement over conventional statistical methods. Furthermore, in both the out-of-domain and in-the-wild settings, STADEE not only surpasses traditional statistical methods but also demonstrates a 5.5% improvement over fine-tuned PLMs. These findings underscore the generalizability of STADEE in detecting machine-generated text.
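The per-token statistical features named above can be illustrated with a minimal sketch. This is a hypothetical helper written for exposition, not code from the paper: it takes a next-token probability distribution (as a language model would produce at each position) and the token that actually appears, and returns the four feature values.

```python
import math

def token_features(probs, token_id):
    """Compute per-position statistical features of the kind STADEE uses.

    probs: next-token probability distribution predicted by a language
           model at this position (a toy list of floats here).
    token_id: index of the token that actually appears in the text.
    Returns (probability, rank, cumulative probability, entropy).
    """
    p = probs[token_id]
    # Rank of the observed token when candidates are sorted by
    # probability (rank 1 = most likely token).
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    rank = order.index(token_id) + 1
    # Cumulative probability: total mass of tokens ranked at or above
    # the observed one -- the quantity nucleus (top-p) sampling
    # thresholds on, which is why it helps detect top-p generations.
    cum_prob = sum(probs[i] for i in order[:rank])
    # Shannon entropy of the full distribution at this position.
    entropy = -sum(q * math.log(q) for q in probs if q > 0)
    return p, rank, cum_prob, entropy
```

Computed over every position of a text, these feature sequences would then be fed to a sequence-based deep classifier rather than thresholded directly.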