An application that takes in speech, analyzes it, and turns it into a Curriculum Vitae is an innovative approach. This study aims to create such an application using text-to-speech, speech recognition, and natural language processing. The user is asked about his or her personal and educational background in an orderly manner by a text-to-speech module; the order of the questions follows the format of the Curriculum Vitae. With the SpeechRecognition Python library, the user's speech input is captured and converted into text, which is then further processed and analyzed. Word tokenization divides each sentence into a list of words, and each word in the list is labeled with its corresponding part-of-speech tag. These tagged words make it easier for the program to identify the actual user input, which is written to a Word or LibreOffice document (.docx, .doc, .odt) using the python-docx library. Using this approach, the study succeeded in creating such an application: text-to-speech, speech recognition, and natural language processing can be integrated to create a virtual human interviewer capable of conversing with the user to produce a Curriculum Vitae. Although the tools used proved effective and efficient, the python-docx library does not provide a tool for editing already-written text when the user gives a wrong input, leaving an area for future research.

Keywords: speech recognition, natural language processing, Curriculum Vitae, virtual human agent.

+ Both authors contributed equally.

Introduction

Handsfree technology is popular nowadays for its convenience. People are becoming more comfortable talking to their devices to have them perform certain tasks. Speech recognition and natural language processing can be integrated to create an application capable of understanding and interacting with its users.
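The tokenize-tag-filter step described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' exact implementation: the part-of-speech tags are assumed to come from a tokenizer and tagger such as NLTK's `word_tokenize` and `pos_tag`, and the choice of filtering on the `NNP` (proper noun) tag for a name answer is an assumption for the example.

```python
# Minimal sketch (hypothetical) of filtering a tagged answer by POS tag.
# Tagged input is assumed to come from a tokenizer/tagger such as
# NLTK's word_tokenize + pos_tag, which emit Penn Treebank tags.

def extract_by_tags(tagged_words, wanted_tags):
    """Keep only the tokens whose POS tag is in wanted_tags."""
    return [word for word, tag in tagged_words if tag in wanted_tags]

# Example: pulling the proper nouns out of a spoken answer to
# "What is your name?" ("Juan Cruz" is an invented sample name).
tagged = [("My", "PRP$"), ("name", "NN"), ("is", "VBZ"),
          ("Juan", "NNP"), ("Cruz", "NNP")]
name = " ".join(extract_by_tags(tagged, {"NNP"}))
# name == "Juan Cruz"
```

Filtering on tags rather than raw words is what lets the program discard filler such as "my name is" and keep only the actual user input to write into the document.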
The Virtual Human Toolkit (1) made it easy to produce and develop virtual human agents that interact with users. The toolkit integrates different aspects of human-to-human interaction: speech recognition, natural language understanding, nonverbal behavior understanding, natural language generation, and nonverbal behavior generation. The addition of nonverbal behavior understanding and generation enhances the face-to-face interaction between human and computer.

This study is significant because it aims to shorten the time needed to fill out a Curriculum Vitae; speaking, instead of writing, makes the process faster. Also, because of a limitation of the speech recognition software, words must be properly uttered for the system to detect the input speech correctly; as a result, non-native English speakers get a chance to practice pronouncing words correctly. Another contribution of this study is a framework for an automated speech-to-text form filler, which opens more possibilities for future research.

An example of an application that interacts with humans is SimSensei (2), which engages i...
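The ask-listen-record loop behind the form-filler framework mentioned above can be sketched as follows. This is a hypothetical outline, not the authors' code: in the real application, `speak` would call a text-to-speech engine, `listen` would call the SpeechRecognition library, and the collected answers would be written to a .docx file with python-docx; here both are pluggable callables so the loop runs without a microphone, and the sample questions are invented for illustration.

```python
# Hypothetical sketch of the interview loop. The speak/listen callables
# stand in for a TTS engine and the SpeechRecognition library.

CV_QUESTIONS = [  # question order follows the CV format, as in the paper
    "What is your full name?",
    "What is your highest educational attainment?",
]

def run_interview(questions, speak, listen):
    """Ask each question aloud and collect the recognized answers."""
    answers = {}
    for question in questions:
        speak(question)            # e.g. a text-to-speech call
        answers[question] = listen()  # e.g. recognizer output as text
    return answers

# Usage with stubbed I/O so the loop can run in a test environment:
replies = iter(["Juan Cruz", "Bachelor of Science"])
result = run_interview(CV_QUESTIONS,
                       speak=lambda q: None,
                       listen=lambda: next(replies))
```

Keeping the I/O behind callables is what makes the framework reusable: the same loop could fill any form whose questions follow a fixed order.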