The spread of fake news poses a serious threat to public trust and informed decision-making. The emergence of AI-generated fake news exacerbates this problem, as such content can be produced more rapidly and convincingly than human-written material. To address it, this paper introduces a robust approach for detecting AI-generated fake news across major disinformation domains: Politics and Elections, Health and Medicine, Science and Technology, and Entertainment and Celebrities. Our models differentiate between AI-generated and human-generated news articles while also assessing their truthfulness. Using large language models such as ChatGPT, we curated datasets spanning both AI-generated and human-generated fake and genuine news. After extensive data preprocessing and the application of diverse feature extraction techniques, we trained our models with multiple machine learning algorithms. Our results demonstrate that AI-generated fake news can be identified accurately, adding a significant layer of defense against misinformation across sectors. In addition, we explored the transferability of models trained on data from one domain (e.g., politics) when tested on another (e.g., health).