Baselines For knowledge identification, we compare UniGDD with several strong baselines, including BERTQA (Devlin et al., 2019), BERT-PR (Daheim et al., 2021), RoBERTa-PR (Daheim et al., 2021), Multi-Sentence (Wu et al., 2021), and DIALKI (Wu et al., 2021). These models formulate knowledge identification as a machine reading comprehension task and extract the grounding span from the document.