Human echolocation is the ability of an individual, often a blind person, to perceive the surroundings using a self-generated signal such as the sound of tongue clicks. In essence, this requires the person to listen to and analyse the returning echoes of the tongue clicks. The main characteristics of the tongue-click waveform have been reported; however, the fundamental principle behind a person's ability to identify his or her own signal remains unclear. This Letter discusses a possible detection mechanism for the tongue-click waveform used in human echolocation and imitates it as an artificial detection system. The proposed mechanism, which synthesises the signal based on the human hearing process, shows improved detection performance compared with the traditional matched-filtering technique. The findings of this Letter create new potential for the development of artificial human-echolocator systems, for sensor systems such as radar and sonar, and for applications inspired by the remarkable abilities of human echolocators.
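The Letter does not give implementation details, but the baseline it compares against, traditional matched filtering, is standard: correlate the received signal with the known transmitted waveform and read the echo delay off the correlation peak. The sketch below illustrates this on a synthetic chirp-like stand-in for a tongue click; the waveform, sampling parameters, and noise level are illustrative assumptions, not values from the Letter.

```python
import numpy as np

def matched_filter(received, template):
    """Estimate an echo delay by matched filtering.

    Matched filtering is cross-correlation of the received signal with
    the known template; for a known waveform in white noise it maximises
    output SNR, and the echo delay is the lag of the correlation peak.
    """
    corr = np.correlate(received, template, mode="full")
    # Index (len(template) - 1) corresponds to zero lag.
    delay = int(np.argmax(np.abs(corr))) - (len(template) - 1)
    return corr, delay

# Toy scenario (assumed, for illustration): a short chirp stands in for
# the tongue-click waveform, embedded in noise at sample 200.
rng = np.random.default_rng(0)
t = np.arange(64)
click = np.sin(2 * np.pi * (0.05 + 0.002 * t) * t) * np.hanning(64)
received = rng.normal(0.0, 0.2, 512)
received[200:264] += click
_, delay = matched_filter(received, click)
print(delay)
```

The correlation peak recovers the echo arrival near sample 200 despite the noise; the Letter's proposed hearing-inspired mechanism is reported to outperform this classical detector.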
Human echolocation is a biological process wherein a human emits a punctuated acoustic signal and the ear analyses the echo in order to perceive the surroundings. The characteristic acoustic signal is normally produced by clicking inside the mouth. This paper used this unique acoustic signal from a human echolocator as the source of the transmitted signal in a synthetic human-echolocation technique. The aim of the paper was thus to extract information from the echo signal and develop a classification scheme to identify signals reflected from different textures at various distances. The scheme was based on spectral entropy extracted from the Mel-scale filtering output in the Mel-frequency cepstral coefficient analysis of a reflected echo signal. The classification process involved data mining, feature extraction, clustering, and classifier validation. The reflected echo signals were obtained via an experimental setup resembling a human-echolocation scenario, configured for synthetic data collection. Unlike in typical speech signals, formant characteristics were likely not visible in the human mouth-click signals; instead, multiple peak spectral features derived from the synthesised mouth-click signal were taken as the entropy obtained from the Mel-scale filtering output. To realise the classification process, K-means clustering and K-nearest-neighbour methods were employed. Moreover, the impact of sound propagation on the extracted spectral entropy used in classification was also investigated. The classifier performance indicated that spectral entropy is essential for human echolocation.
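The abstract's core feature, spectral entropy of the Mel-scale filtering output, can be sketched as follows: pass the echo's power spectrum through a triangular Mel filterbank, normalise the filter energies into a probability distribution, and take its Shannon entropy. The filterbank construction and parameters below (filter count, FFT length, sampling rate) are common textbook choices, not the paper's exact configuration; the resulting scalar is the kind of feature that K-means clustering and a K-nearest-neighbour classifier would then consume.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    """Triangular Mel-spaced filters over the one-sided spectrum."""
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):        # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):       # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def spectral_entropy(signal, fs, n_filters=20):
    """Shannon entropy of the normalised Mel filterbank energies."""
    n_fft = len(signal)
    power = np.abs(np.fft.rfft(signal)) ** 2
    energies = mel_filterbank(n_filters, n_fft, fs) @ power
    p = energies / (energies.sum() + 1e-12)  # treat energies as probabilities
    p = np.where(p > 0, p, 1e-12)
    return float(-np.sum(p * np.log2(p)))

# Illustration: a pure tone concentrates energy in one filter (low entropy),
# while white noise spreads energy across filters (high entropy).
fs = 8000
t = np.arange(1024) / fs
tone = np.sin(2 * np.pi * 1000 * t)
noise = np.random.default_rng(1).normal(size=1024)
print(spectral_entropy(tone, fs), spectral_entropy(noise, fs))
```

A texture that scatters the click diffusely would flatten the filterbank energies much as noise does, while a flat reflector preserves spectral peaks, which is one way a single entropy value can separate reflecting surfaces.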