This report describes a system we have developed.
List of tables (partial):
Percentages correct for various network architectures using a single network
6.2 Test results using king shifting and registration combinations
6.3 Test results for separating arches and tented arches
6.4 Test results for separating arches and tented arches from a four-class net
Statistically based ranked retrieval of records using keywords provides many advantages over traditional Boolean retrieval methods, especially for end users. This approach to retrieval, however, has not seen widespread use in large operational retrieval systems. To show the feasibility of this retrieval methodology, research was done to produce very fast search techniques using these ranking algorithms, and then to test the results against large databases with many end users. The results show not only response times on the order of one and one-half seconds for 806 megabytes of text, but also very favorable user reaction. Novice users were able to consistently obtain good search results after 5 minutes of training. Additional work was done to devise new indexing techniques to create inverted files for large databases using a minicomputer. These techniques use no sorting, require a working space of only about 20% of the size of the input text, and produce indices that are about 14% of the input text size.
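The sort-free indexing idea described above can be illustrated with a minimal sketch: instead of emitting (term, doc_id) pairs and sorting them globally, postings lists are accumulated in a hash table as documents stream by, so each list ends up in document order automatically. This is an illustrative reconstruction in Python, not the report's actual minicomputer implementation; the function and variable names are hypothetical.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Build an inverted file by hash accumulation rather than sorting.

    docs: iterable of (doc_id, text) pairs, assumed to arrive in
    doc_id order. Returns a dict mapping each term to a postings
    list of (doc_id, term_frequency) tuples. Because postings are
    appended as documents stream past, the classic external sort of
    (term, doc_id) pairs is never needed.
    """
    index = defaultdict(list)
    for doc_id, text in docs:
        # Count term frequencies within this one document.
        counts = {}
        for term in text.lower().split():
            counts[term] = counts.get(term, 0) + 1
        # Append one posting per distinct term; lists stay in doc order.
        for term, tf in counts.items():
            index[term].append((doc_id, tf))
    return dict(index)
```

A ranked-retrieval engine of the kind the abstract describes would then score documents by walking only the postings lists of the query terms, which is what makes sub-two-second response times on large collections plausible.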
In this paper we evaluate the classification accuracy of four statistical and three neural network classifiers for two image-based pattern classification problems. These are fingerprint classification and optical character recognition (OCR) for isolated handprinted digits. The evaluation results reported here should be useful for designers of practical systems for these two important commercial applications. For the OCR problem, the Karhunen-Loève (K-L) transform of the images is used to generate the input feature set. Similarly, for the fingerprint problem, the K-L transform of the ridge directions is used to generate the input feature set. The statistical classifiers used were Euclidean minimum distance, quadratic minimum distance, normal, and k-nearest neighbor. The neural network classifiers used were multilayer perceptron, radial basis function, and probabilistic. The OCR data consisted of 7,480 digit images for training and 23,140 digit images for testing. The fingerprint data consisted of 2,000 training and 2,000 testing images. In addition to evaluation for accuracy, the multilayer perceptron and radial basis function networks were evaluated for size and generalization capability. For the evaluated datasets the best accuracy obtained for either problem was provided by the probabilistic neural network, where the minimum classification error was 2.5% for OCR and 7.2% for fingerprints.
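The Karhunen-Loève transform used above for feature extraction is, for empirical data, equivalent to principal component analysis: project each image onto the top eigenvectors of the training covariance. A minimal NumPy sketch of that step follows; it is an assumed reconstruction for illustration (the function names and the SVD shortcut are mine, not from the paper).

```python
import numpy as np

def kl_basis(train_images, k):
    """Compute a k-dimensional Karhunen-Loeve (PCA) basis.

    train_images: (n, d) array, each row a flattened image.
    Returns (mean, basis), where basis rows are the top-k
    eigenvectors of the training covariance.
    """
    mean = train_images.mean(axis=0)
    centered = train_images - mean
    # SVD of the centered data yields the covariance eigenvectors
    # without forming the d x d covariance matrix explicitly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def kl_features(images, mean, basis):
    """Map raw images to k-dimensional K-L feature vectors."""
    return (images - mean) @ basis.T
```

The resulting low-dimensional feature vectors are what the statistical and neural network classifiers in the paper consume, rather than the raw pixel (or ridge-direction) arrays.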
3.1 mis2evt - computing eigenvector basis functions
3.2 mis2pat1 - generating patterns for the PNN classifier
3.3 hsfsys1 - running the updated version of the original NIST system
3.4 mis2pat2 - generating patterns for training the MLP classifier
3.5 trainreg - training to register a new form
3.6 hsfsys2 - running the new NIST recognition system
4. ALGORITHMIC OVERVIEW OF NEW SYSTEM HSFSYS2
4.1 The Application
4.2 System Components
4.2.1 Batch Initialization; src/lib/hsf/run.c; init_run()
4.2.2 Load Form Image; src/lib/image/readrast.c; ReadBinaryRaster()
4.2.3 Register Form Image; src/lib/hsf/regform.c; genregform8()
4.2.4 Remove Form Box; src/lib/rmline/remove.c; rm_long_hori_line()
4.2.5 Isolate Line(s) of Handprint; src/lib/phrase/phrasmap.c; phrases_from_map()
4.2.6 Segment Text Line(s); src/lib/adseg/segchars.c; blobs2chars8()
4.2.7 Normalize Characters; src/lib/hsf/norm8.c; norm_2nd_gen_blobls8()
4.2.8 Extract Feature Vectors; src/lib/im/ld.c; kl_transform()
4.2.9 Classify Characters; src/lib/mlp/runmlp.c; mlphypscons()
4.2.10 Spell-Correct Text Line(s); src/lib/phrase/spellphr.c; spell_phrases_Rel2()
4.2.11 Store Results; src/lib/fet/writefet.c; writefetfile()
5. PERFORMANCE EVALUATION AND COMPARISONS
5.1 Accuracies and Error Rates
5.2 Error versus Rejection Rate
5.3 Timing and Memory Statistics
6. IMPROVEMENTS TO THE TEST-BED
6.1 Processing New Forms with HSFSYS2
7. FINAL COMMENTS
8. REFERENCES
A. TRAINING THE MULTI-LAYER PERCEPTRON (MLP) CLASSIFIER OFF-LINE
A.1 Training and Testing Runs
A.2 Specification (Spec) File
A.2.1 String (Filename) Parms
A.2.2 Integer Parms
A.2.3 Floating-Point Parms
A.2.4 Switch Parms
A.3 Training the MLP in hsfsys2
A.4 Explanation of the output produced during MLP training
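The system components in section 4.2 form a linear per-form pipeline, each stage consuming the previous stage's output. The steps above can be sketched as a simple driver; the stage names follow the table of contents, but the stub implementations below are placeholders, not the NIST C routines.

```python
def run_form_pipeline(form_image, stages):
    """Apply HSFSYS2-style stages to one form image in order.

    stages: list of (name, fn) pairs mirroring sections 4.2.1-4.2.11;
    each fn maps the current intermediate result to the next one.
    """
    result = form_image
    for name, fn in stages:
        result = fn(result)
    return result

# Placeholder stubs standing in for the C routines named in the TOC.
stages = [
    ("register_form", lambda img: img),       # 4.2.3 genregform8()
    ("remove_form_box", lambda img: img),     # 4.2.4 rm_long_hori_line()
    ("isolate_lines", lambda img: [img]),     # 4.2.5 phrases_from_map()
    ("segment_chars",                         # 4.2.6 blobs2chars8()
     lambda lines: [c for line in lines for c in [line]]),
]
```

In the real system each stage is a C function in its own library directory, which is why the TOC pairs every processing step with a source file and entry point.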