Analyzing conflicts and political violence around the world is a persistent challenge for the political science and policy communities, due in large part to the vast volumes of specialized text needed to monitor conflict and violence on a global scale. To help advance research in political science, we introduce ConfliBERT, a domain-specific pre-trained language model for conflict and political violence. We first gather a large domain-specific text corpus for language modeling from various sources. We then build ConfliBERT using two approaches: pre-training from scratch and continual pre-training. To evaluate ConfliBERT, we collect 12 datasets and implement 18 tasks to assess the models' practical application in conflict research. Finally, we evaluate several versions of ConfliBERT in multiple experiments. Results consistently show that ConfliBERT outperforms BERT when analyzing political violence and conflict. Our code is publicly available.

While many language models are built on general-domain corpora such as Wikipedia, BookCorpus (Zhu et al., 2015), and WebText (Radford et al., 2019), recent work shows that pre-training on domain-specific corpora can boost downstream performance in those domains (Lee et al., 2019; Gururangan et al., 2020). Domain-specific work in biomedicine focuses not only on developing pre-trained models (
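
Both pre-training strategies mentioned above (from scratch and continual) rely on BERT's masked-language-modeling objective. As an illustrative sketch only, not the authors' actual pipeline, the standard BERT corruption rule (select roughly 15% of positions; of those, replace 80% with a mask token, 10% with a random token, and leave 10% unchanged) can be written as follows; the function name and signature here are hypothetical:

```python
import random

IGNORE = -100  # label value for positions that do not contribute to the MLM loss

def mask_for_mlm(token_ids, vocab_size, mask_id, mask_prob=0.15, rng=None):
    """Apply BERT-style masked-LM corruption to a list of token ids.

    Returns (corrupted_ids, labels): labels hold the original id at each
    selected position and IGNORE everywhere else, so the loss is computed
    only over the corrupted positions.
    """
    rng = rng or random.Random(0)
    inputs, labels = list(token_ids), [IGNORE] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels[i] = tok          # predict the original token here
            r = rng.random()
            if r < 0.8:
                inputs[i] = mask_id  # 80%: replace with the mask token
            elif r < 0.9:
                inputs[i] = rng.randrange(vocab_size)  # 10%: random token
            # else 10%: keep the original token unchanged
    return inputs, labels
```

In continual pre-training, this same objective is simply resumed from an existing general-domain checkpoint on the new domain corpus, rather than from randomly initialized weights.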