Effective patient treatment and care depend heavily on accurate disease diagnosis. The growing availability of multimodal medical data, such as genetic profiles, clinical reports, and imaging scans, has created new possibilities for improving diagnostic precision. However, the inherent complexity and variability of these data types make their analysis and integration challenging. To address the difficulties of accurate medical disease diagnosis from multimodal data, this study proposes a novel approach that combines Transfer Learning (TL) and Deep Neural Networks (DNN). An image dataset covering various stages of Alzheimer's disease (AD) was collected from the Kaggle repository. During preprocessing, a Gaussian filter is applied to smooth the input images and reduce noise, improving their quality for subsequent analysis. Features are then extracted using the Gray-Level Co-occurrence Matrix (GLCM). TL allows the model to reuse knowledge gained by models pretrained in other domains, reducing both training time and data requirements; the pretrained model used in this approach is AlexNet. Disease classification is performed by a DNN. This integrated approach improves diagnostic precision, particularly in scenarios with limited data availability. The study evaluates the effectiveness of the proposed method for diagnosing AD using metrics such as accuracy, precision, miss rate, recall, F1-score, and the Area Under the Receiver Operating Characteristic Curve (AUC-ROC). The results show a significant improvement in accuracy, reaching 99.32%. The approach is a promising tool for helping medical professionals make more accurate and timely diagnoses, ultimately improving patient outcomes and healthcare practice.
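The preprocessing and feature-extraction steps described above (Gaussian smoothing followed by GLCM texture features) can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the synthetic image, kernel size, quantization to 8 gray levels, and the choice of a single horizontal pixel offset are all assumptions made for the sketch.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 1-D Gaussian, normalized to sum to 1 (used separably for 2-D smoothing).
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(img, size=5, sigma=1.0):
    # Convolve rows, then columns, with the 1-D kernel (separable 2-D filter).
    k = gaussian_kernel(size, sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                              img.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def glcm_features(img, levels=8):
    # Quantize to `levels` gray levels, then count horizontally adjacent pairs
    # (offset (0, 1)) to build the co-occurrence matrix.
    q = (img / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()                        # joint probabilities
    ii, jj = np.indices(p.shape)
    contrast = float(np.sum(p * (ii - jj) ** 2))  # Haralick contrast
    energy = float(np.sum(p ** 2))                # angular second moment
    return contrast, energy

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))         # stand-in for a brain-scan slice
contrast, energy = glcm_features(gaussian_smooth(img))
print(contrast, energy)
```

In practice, a library routine such as `skimage.feature.graycomatrix` would replace the hand-rolled GLCM, and several offsets and angles would typically be pooled into the feature vector.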
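The transfer-learning step, reusing AlexNet's pretrained convolutional features and attaching a small DNN head for classification, can be sketched with PyTorch. This is an illustrative configuration, not the paper's exact architecture: the frozen backbone, the 512-unit hidden layer, and the assumption of four AD stage classes (the common staging in the Kaggle AD dataset) are all choices made for the sketch, and `weights=None` is used here only to avoid a download; in practice the ImageNet-pretrained weights would be loaded.

```python
import torch
import torch.nn as nn
from torchvision.models import alexnet

# Pretrained weights (e.g. AlexNet_Weights.IMAGENET1K_V1) would normally be
# loaded here; TL then reuses these learned convolutional filters.
backbone = alexnet(weights=None)

# Freeze the convolutional feature extractor so only the new head is trained.
for p in backbone.features.parameters():
    p.requires_grad = False

# Replace AlexNet's classifier with a small DNN head; 4 output classes is an
# assumption for illustration (four AD stages).
backbone.classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 512),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(512, 4),
)

x = torch.randn(2, 3, 224, 224)   # two dummy RGB scans at AlexNet's input size
logits = backbone(x)
print(logits.shape)               # torch.Size([2, 4])
```

Freezing the backbone is what makes the approach viable with limited data: only the small head's parameters are fit to the AD images, while the general-purpose visual features come from the source domain.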
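The evaluation metrics listed above can be made concrete with a toy binary example (AD vs. non-AD); the labels below are fabricated for illustration only. Note in particular that miss rate is the complement of recall.

```python
# Toy predictions: 1 = AD, 0 = non-AD (illustrative values, not study data).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)            # sensitivity
miss_rate = fn / (fn + tp)         # = 1 - recall
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, miss_rate, f1)
```

AUC-ROC is omitted from the sketch because it requires predicted scores rather than hard labels; it summarizes the recall/false-positive-rate trade-off across all decision thresholds.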