Background: ChatGPT, an advanced language model developed by OpenAI, has been proposed as a potential AI-assisted decision support tool in medicine.

Objective: To evaluate the accuracy of ChatGPT's recommendations on medical questions related to common cardiac symptoms or conditions.

Methods: We tested ChatGPT's ability to address medical questions in two ways. First, we assessed its accuracy in correctly answering cardiovascular trivia questions (n=50) drawn from quizzes for medical professionals. Second, we entered 20 clinical case vignettes into the ChatGPT platform and evaluated the accuracy of its recommendations against expert opinion and the actual clinical course.

Results: ChatGPT correctly answered 74% of the trivia questions, with some variation in accuracy across domains: coronary artery disease (80%), pulmonary embolism and venous thrombosis (80%), atrial fibrillation (70%), heart failure (80%), and cardiovascular risk management (60%). In the case vignettes, ChatGPT's response matched the advice actually given in 90% of cases. In more complex cases, where physicians (general practitioners) asked other physicians (cardiologists) for assistance or decision support, ChatGPT was correct in 50% of cases and often provided incomplete or inappropriate recommendations compared with expert consultation.

Conclusions: Our study suggests that ChatGPT has potential as an AI-assisted decision support tool in medicine, particularly for straightforward, low-complexity medical questions, but further research is needed to fully evaluate its potential.