1987
DOI: 10.1109/tpc.1987.6449109
Readability formulas: Useful or useless?

Cited by 55 publications (22 citation statements) · References 7 publications
“…Readability of the patient letters was partially determined using the Flesch-Kincaid grade score. The Flesch-Kincaid grade score has been used as a measure of readability since its development in 1975, is the standard used in Department of Defense manuals (McClure 1987), and is recommended by the Centers for Disease Control and Prevention as a way to determine readability (2009). The Flesch-Kincaid grade calculates the reading level (US school grade level) at which the reader can comprehend at least 50% of the document.…”
Section: Discussion
confidence: 99%
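The Flesch-Kincaid grade described in the statement above can be sketched in a few lines. The syllable counter below is a crude vowel-group heuristic (an assumption for illustration, not the validated counting used when the formula was calibrated):

```python
import re

# Flesch-Kincaid grade level (Kincaid et al., 1975):
#   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def count_syllables(word):
    # Crude heuristic: one syllable per contiguous vowel group.
    # Real implementations use dictionaries or better rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

A short, monosyllabic sentence scores near or below grade 0, while long sentences full of polysyllabic words push the grade sharply upward.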
“…Another common type of document quality approximation is content-based. These are numerous and diverse, including, for instance, ratios of information-to-noise, of stopwords per document, or of document words per stopword list [4,46,47]; average term length per document [17]; term part-of-speech [25,26]; ratio of technical terminology per (scientific) document [20]; ratio of non-compositional phrases per document [29]; syllable, term and/or sentence statistics [37] as per standard readability indices [8,15,19,27,28]; discourse structure [24]; document entropy computed from terms [4] or discourse entities [34]. The lexical or syntactic features used in the above content-based document quality approximations are assumed to indicate syntactic or semantic difficulty.…”
Section: Related Work
confidence: 99%
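As one concrete instance of the content-based proxies listed in the statement above, a per-document stopword ratio takes only a few lines. The stopword list here is a tiny illustrative stand-in for the standard lists those systems actually use:

```python
# Tiny illustrative stopword list; real systems use standard lists
# (e.g. the SMART list), which run to hundreds of entries.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that"}

def stopword_ratio(text):
    # Fraction of tokens that are stopwords: a crude content-based
    # signal of how much of a document is function words vs. content.
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in STOPWORDS for t in tokens) / len(tokens)
```

Such ratios are cheap to compute at indexing time, which is part of why they recur across the cited quality-approximation work.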
“…The above approaches are developed stand-alone, not as an integral part of IR systems. However, methods for potentially integrating such text quality metrics into IR systems abound, for instance when text quality is interpreted as ratios of (combinations of) stopwords over content words per document [4,64,65]; term length [27] or part-of-speech (for ranking [38,41], but also for index pruning [38]); technical [33] or ambiguous scientific terminology [36]; non-compositional multiword expressions [39,47]; document readability [9,19,29,43,44]; document discourse [37] or coherence [52,40].…”
Section: Related Work
confidence: 99%