Bipolar disorder, a severe chronic mental illness characterized by pathological mood swings between depression and mania, requires ongoing tracking of symptom severity to both guide and measure treatments critical for maintaining long-term health. Mental health professionals assess symptom severity through semi-structured clinical interviews. During these interviews, they observe their patients' spoken behaviors, including both what the patients say and how they say it. In this work, we move beyond acoustic and lexical information, investigating how higher-level interactive patterns also change during mood episodes. We then perform a secondary analysis, asking whether these interactive patterns, measured through dialogue features, can be used in conjunction with acoustic features to automatically recognize mood episodes. Our results show that it is beneficial to consider dialogue features when analyzing and building automated systems for predicting and monitoring mood.

In contrast to previous work, the novelty of our work is three-fold: (1) we introduce a set of dialogue features to aid in the prediction of mood symptom severity; (2) we analyze the dialogue features using a linear mixed-effects model to study how mood episodes affect interaction patterns; (3) we show that explicitly adding high-level dialogue features to acoustic-based systems can improve the performance of automatic mood symptom severity prediction.
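To make contribution (2) concrete, the following is a minimal sketch, not the authors' code, of how a linear mixed-effects model could relate a dialogue feature to mood state while accounting for repeated interviews of the same participant via a per-subject random intercept. The column names (turn_latency, mood_severity, subject_id) and the input file are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one row per interview, with a dialogue feature,
# a mood symptom severity rating, and a subject identifier.
df = pd.read_csv("interview_features.csv")

# Fixed effect: mood symptom severity; random intercept: subject.
model = smf.mixedlm("turn_latency ~ mood_severity",
                    data=df, groups=df["subject_id"])
result = model.fit()

# The coefficient on mood_severity tests whether the dialogue feature
# shifts with mood while controlling for between-subject variability.
print(result.summary())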
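For contribution (3), one common way to add dialogue features to an acoustic-based system is feature-level (early) fusion: concatenating the two feature sets before classification. The sketch below illustrates this under assumed conditions with synthetic data and a simple logistic-regression classifier; it is not the paper's system, and the feature dimensions are placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
X_acoustic = rng.normal(size=(n, 40))  # e.g., per-interview acoustic statistics
X_dialogue = rng.normal(size=(n, 8))   # e.g., turn counts, latencies, overlaps
y = rng.integers(0, 2, size=n)         # euthymic vs. mood episode (synthetic)

# Early fusion: concatenate dialogue features onto the acoustic features.
X_fused = np.hstack([X_acoustic, X_dialogue])
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(clf, X_fused, y, cv=5).mean())

Comparing cross-validated performance of the fused model against the acoustic-only baseline is one way to quantify the benefit the abstract claims for dialogue features.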