Tokenization of modern and old Western European languages seems fairly simple, as it relies mostly on the presence of markers such as spaces and punctuation. However, when dealing with old sources such as manuscripts written in scripta continua, ancient epigraphy, or medieval manuscripts, (1) such markers are mostly absent, and (2) spelling variation and rich morphology make dictionary-based approaches difficult. Applying convolutional encoding to characters, followed by linear classification of each character as word-boundary or in-word, is shown to be effective at tokenizing such inputs. Additionally, the software is released with a simple interface for tokenizing a corpus or generating a training set.
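The character-level approach described above can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the vocabulary, dimensions, and function names are hypothetical, and the weights are random rather than trained on an annotated corpus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: embed each character, apply a 1-D convolution
# over a sliding window, then a linear layer that classifies each position
# as word-boundary (1) or in-word (0). In practice these weights would be
# learned from annotated training data.
VOCAB = "abcdefghijklmnopqrstuvwxyz"
EMB_DIM, CONV_DIM, KERNEL, N_CLASSES = 8, 16, 3, 2

embeddings = rng.normal(size=(len(VOCAB), EMB_DIM))
conv_w = rng.normal(size=(CONV_DIM, KERNEL * EMB_DIM))
lin_w = rng.normal(size=(N_CLASSES, CONV_DIM))

def predict_boundaries(text: str) -> list:
    """Return one label per character: 1 = word boundary, 0 = in-word."""
    idx = [VOCAB.index(c) for c in text]
    x = embeddings[idx]                               # (T, EMB_DIM)
    pad = KERNEL // 2
    x = np.pad(x, ((pad, pad), (0, 0)))               # keep output length T
    # Unfold sliding windows and apply the convolution as a matmul.
    windows = np.stack([x[t:t + KERNEL].ravel() for t in range(len(idx))])
    h = np.maximum(windows @ conv_w.T, 0.0)           # ReLU activation
    logits = h @ lin_w.T                              # (T, N_CLASSES)
    return logits.argmax(axis=1).tolist()

# Example: an unsegmented (scripta continua style) input string.
labels = predict_boundaries("ludovicomagno")
```

With trained weights, the predicted boundary labels would be used to reinsert spaces into the continuous character stream.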
This paper describes the process of building an annotated corpus and training
models for classical French literature, with a focus on theatre, and
particularly comedies in verse. It was originally developed as a preliminary
step to the stylometric analyses presented in Cafiero and Camps [2019]. The use
of a recent lemmatiser based on neural networks and a CRF tagger achieves
accuracies beyond the current state of the art on the in-domain test,
and proves robust in out-of-domain tests, i.e. on texts up to 20th-century novels.