Recent years have shown promising examples of using symbolic background knowledge in learning systems: from training classifiers with weak supervision signals (Manhaeve et al., 2018) and generalizing learned classifiers to new tasks (Roychowdhury et al., 2021), to compensating for a lack of good supervised data (Diligenti et al., 2017; Donadello et al., 2017) and enforcing the structure of outputs through a logical specification (Xu et al., 2018). The main idea underlying these integrations of learning and reasoning, often called neurosymbolic integration, is that background knowledge can complement the neural network when high-quality labeled data is scarce (Giunchiglia et al., 2022). Although pure deep learning approaches excel when learning over vast quantities of data with enormous amounts of compute (Chowdhery et al., 2022; Ramesh et al., 2022), we cannot afford this luxury for most tasks.
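To make the last point concrete, the sketch below shows one way a logical specification over the outputs can be turned into a differentiable penalty added to the supervised loss. It is a minimal illustration in PyTorch, not the exact method of Xu et al. (2018); the chosen constraint ("exactly one label is true"), the helper name exactly_one_loss, and the loss weight are assumptions made for the example.

```python
import torch

def exactly_one_loss(probs: torch.Tensor) -> torch.Tensor:
    """Differentiable penalty for the constraint 'exactly one label is true'.

    probs: (batch, n) tensor of independent per-label probabilities in (0, 1).
    Returns the mean negative log-probability that exactly one label is true.
    """
    # P(exactly one true) = sum_i p_i * prod_{j != i} (1 - p_j)
    not_p = (1.0 - probs).clamp(min=1e-12)
    log_prod_all = torch.log(not_p).sum(dim=-1, keepdim=True)  # log prod_j (1 - p_j)
    # log[p_i * prod_{j != i}(1 - p_j)] = log p_i + log prod_all - log(1 - p_i)
    log_terms = torch.log(probs.clamp(min=1e-12)) + log_prod_all - torch.log(not_p)
    log_p_constraint = torch.logsumexp(log_terms, dim=-1)
    return -log_p_constraint.mean()

# Toy usage: predictions that nearly violate the constraint incur a larger penalty.
good = torch.tensor([[0.90, 0.05, 0.05]])   # roughly one label active
bad = torch.tensor([[0.90, 0.80, 0.70]])    # several labels active at once
print(exactly_one_loss(good).item())        # small penalty (~0.20)
print(exactly_one_loss(bad).item())         # large penalty (~2.39)

# In training, the penalty would be added to the usual supervised loss, e.g.
#   loss = supervised_loss + 0.1 * exactly_one_loss(torch.sigmoid(logits))
# where the weight 0.1 is a hypothetical hyperparameter.
```

Because the penalty is computed from the network's output probabilities alone, it can also be applied to unlabeled examples, which is one way background knowledge compensates for missing supervision.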