The use of statistical decision functions with computers for character recognition is investigated. The three cases considered are (1) where both the losses due to incorrect decisions and the a priori probability of the characters are known, (2) where the a priori probability is known, but the losses are not, and (3) the reverse of the second case. For the first case, Bayes decision functions are reviewed, and a theorem about the advantage of using rejection is proved. For the second case, minimum error and minimum rejection decision functions are defined and obtained. For the third case, admissible decision functions and a complete class of decision functions are discussed. Illustrative examples are given.
Introduction

The use of statistical decision functions is one of the many possible approaches to the problem of character recognition by computer. Decision functions which minimize respectively the average risk and the probability of error among those whose probability of rejection is equal to a prescribed limit were considered by Chow [3]. Further progress has been made in the last few years (see, e.g. [2, 4, 5, 7]). In this paper, we report some results concerning the above and similar types of optimal decision functions. The three cases considered are (1) where both the losses due to incorrect decisions and the a priori probability of the characters are known, (2) where the a priori probability is known, but the losses are not, and (3) the reverse of the second case.

For the first case, Bayes decision functions which minimize the average risk are reviewed. A special case is also discussed, where the losses due to incorrect recognition and rejection (as unrecognizable) are independent of the characters (Equation 7). For that special case, an advantage due to the inclusion of rejection as a possible decision is proved. It is shown that the proportional reduction in Bayes risk, by such a provision, increases as the relative cost of rejection to error decreases (Theorem 1). This means that if the cost of rejection is low, then the use of rejection to dispose of unrecognizable characters could provide worthwhile savings. On the other hand, if the cost of rejection is not much smaller than (or even greater than) that of making an error, Bayes decision functions will never exercise the option of rejection (Section 2, Remark 2). A similar result is also obtained for the proportional reduction of a posteriori risk (Theorem 1').

For the second case, two types of optimal decision functions are introduced, as generalizations of Chow's work [3].
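As an informal illustration of the first case with character-independent losses, the Bayes rule with rejection compares the a posteriori risk of the best class against the fixed cost of rejecting. The following sketch assumes zero loss for a correct decision; the function name and arguments are illustrative, not from the paper.

```python
def bayes_decide(posteriors, cost_error, cost_reject):
    """Return the index of the chosen class, or None to reject.

    Assumes character-independent losses: 0 for a correct decision,
    cost_error for a wrong one, cost_reject for rejection.
    """
    best = max(range(len(posteriors)), key=lambda i: posteriors[i])
    # A posteriori risk of deciding class `best` is cost_error * (1 - p_best);
    # the risk of rejecting is simply cost_reject. Choose the smaller.
    if cost_error * (1.0 - posteriors[best]) <= cost_reject:
        return best
    return None  # reject as unrecognizable

# A low rejection cost causes uncertain inputs to be rejected:
print(bayes_decide([0.4, 0.35, 0.25], cost_error=1.0, cost_reject=0.1))  # None
# A confident posterior is accepted:
print(bayes_decide([0.9, 0.05, 0.05], cost_error=1.0, cost_reject=0.1))  # 0
```

Since the largest posterior among m classes is never below 1/m, rejection is never chosen once cost_reject reaches cost_error * (m - 1)/m, which is the behavior noted in Remark 2.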
These are, respectively, decision functions which minimize the probability of rejection (error) among those whose probability of error (rejection) does not exceed a prescribed limit a (Section 3, Definition 1). Minimum rejection decision functions may well be of more practical importance, since the probability of making errors which often have more serious consequences