This study presents the results of two experiments investigating the nature of the exhaustivity of pre-verbal focus in Hungarian, both in an indirect way. Experiment 1 contrasts the responses given in long versus short time windows in a truth-value judgment task. Experiment 2 makes the task itself indirect and compares pre-verbal focus with three other types of focus in the same language. Through these multiple comparisons we provide evidence that exhaustivity in pre-verbal focus is not entailed, unlike exhaustivity in clefts, with which it has been treated as being on a par. Instead, it is due to pragmatic implicature, specifically conventional implicature.
The GEIG metric for quantifying parsing accuracy became influential through the Parseval programme, but many researchers have found it unsatisfactory. The Leaf-Ancestor (LA) metric, first developed in the 1980s, arguably comes closer to formalizing our intuitive concept of relative parse accuracy. We support this claim via an experiment that contrasts the performance of alternative metrics on the same body of automatically parsed examples. The LA metric has the further virtue of providing straightforward indications of the location of parsing errors.
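The core idea of leaf-ancestor scoring can be illustrated with a minimal sketch: for each word, collect its "lineage" of node labels from root to leaf, then score a candidate parse by the edit-distance similarity between gold and candidate lineages, averaged over words. This is only an illustrative simplification (the tree encoding, the normalization, and the omission of Sampson's boundary markers are assumptions of this sketch, not the published formulation):

```python
def lineages(tree, path=()):
    """Yield (word, lineage) pairs, where lineage is the sequence of
    ancestor node labels from the root down to that leaf word.
    Trees are encoded as nested tuples: (label, child, child, ...),
    with leaf words as plain strings -- an assumption of this sketch."""
    label, children = tree[0], tree[1:]
    new_path = path + (label,)
    for child in children:
        if isinstance(child, tuple):
            yield from lineages(child, new_path)
        else:
            yield child, new_path

def edit_distance(a, b):
    """Standard Levenshtein distance between two label sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def leaf_ancestor_score(gold, test):
    """Mean per-word lineage similarity (1 = identical analyses).
    Assumes gold and test parses cover the same word sequence."""
    scores = []
    for (_, lg), (_, lt) in zip(lineages(gold), lineages(test)):
        d = edit_distance(lg, lt)
        scores.append(1 - d / max(len(lg), len(lt)))
    return sum(scores) / len(scores)

# A flat candidate parse that loses the VP node is penalized only at
# the word whose lineage changed, which also localizes the error:
gold = ("S", ("NP", "dogs"), ("VP", "bark"))
test = ("S", ("NP", "dogs"), "bark")
print(leaf_ancestor_score(gold, test))  # 0.75: "dogs" scores 1.0, "bark" 0.5
```

Because each word carries its own similarity score, a low-scoring word points directly at the subtree where the parser went wrong, which is the error-localization virtue the abstract mentions.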
For one aspect of grammatical annotation, part-of-speech tagging, we investigate experimentally whether the ceiling on accuracy stems from limits to the precision of tag definition or from limits to analysts' ability to apply precise definitions, and we examine how analysts' performance is affected by alternative types of semi-automatic support. We find that, even for analysts very well versed in a part-of-speech tagging scheme, human ability to conform to the scheme is a more serious constraint than the precision of the scheme's definition. We also find that although semi-automatic techniques can greatly increase speed relative to manual tagging, they have little effect on accuracy, either positively (by suggesting valid candidate tags) or negatively (by lending an appearance of authority to incorrect tag assignments). On the other hand, it emerges that individual analysts differ greatly in how usable they find particular types of semi-automatic support.