From the perspective of Text World Theory, narratives contain elements (indications of time, place, characters, etc.) that can be automatically identified and compared in order to establish versions of events and to find similar plots. We annotated TextWorlds, a corpus of fairy tales and short stories, and found that raters do not always agree on whether a particular word refers to a character, a time, or a place of action. The aim of this research is to determine the degree of inter-rater agreement on the location of these narrative categories in the text. Its practical task is to assess the reliability of the annotation that will later be used to train algorithms for automatically identifying text worlds. The scientific novelty lies in the fact that we study the degree of agreement itself, whereas in other work agreement is taken for granted, and disagreement between raters is treated as an error on the part of one of the raters or a flaw in the annotation procedure. In this paper, we report the results of two inter-rater agreement metrics: percent agreement and Krippendorff’s alpha. These results show that agreement on the different elements varies from one work to another and sometimes reaches a moderate level, sufficient to consider the annotation reliable.
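
Since the abstract names the two metrics, the following is a minimal sketch of how they could be computed for nominal, token-level labels from two raters. The rater names, token IDs, and category labels are illustrative only, and NLTK's AnnotationTask is one possible implementation, not necessarily the procedure used in the paper.

```python
# Hedged sketch: percent agreement and Krippendorff's alpha for two raters
# assigning nominal narrative-category labels to tokens (hypothetical data).
from nltk.metrics.agreement import AnnotationTask

# Each record is (coder, item, label): which category a rater assigned to a token.
annotations = [
    ("rater_1", "token_01", "CHARACTER"),
    ("rater_2", "token_01", "CHARACTER"),
    ("rater_1", "token_02", "TIME"),
    ("rater_2", "token_02", "PLACE"),   # a disagreement between the raters
    ("rater_1", "token_03", "PLACE"),
    ("rater_2", "token_03", "PLACE"),
]

# The default binary distance treats labels as nominal categories.
task = AnnotationTask(data=annotations)
print("Percent agreement:", task.avg_Ao())      # average observed agreement over tokens
print("Krippendorff's alpha:", task.alpha())    # chance-corrected agreement
```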