This paper presents a novel approach that automatically identifies hearing impairment based on cognitively inspired feature extraction and speech recognition. To the best of our knowledge, this is the first attempt to automate pure-tone and speech audiometry testing.

Background: Hearing loss, a partial or total inability to hear, is one of the most commonly reported disabilities. A hearing test can be carried out by an audiologist to assess the patient's auditory system, but this procedure normally requires an appointment, with potentially long delays and a practitioner fee. Further problems include the unavailability of required equipment and of qualified practitioners, particularly in remote areas.

Methods: In the proposed method, the user is asked to repeat words uttered by the machine. The user's response is captured as a speech signal, and the system identifies correct and incorrect repetitions in order to generate an audiogram and a speech recognition threshold automatically. The proposed system uses an adaptive filterbank with weighted Mel-frequency cepstral coefficients (MFCCs) for feature extraction. The adaptive filterbank is inspired by the principle of spectrum sensing in cognitive radio, which is aware of its environment and adapts to statistical variations in the input stimuli by learning from that environment. In contrast to state-of-the-art static MFCCs, the cognitive feature extraction method senses the spectrum in order to design an adaptive filterbank over the relevant frequency bands. Several machine learning classifiers are then employed, including the support vector machine (SVM), k-nearest neighbors (k-NN), adaptive boosting (AdaBoost), and the hidden Markov model (HMM).
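The spectrum-sensing filterbank idea above can be sketched roughly as follows: build a standard triangular mel filterbank, treat the observed per-band energies as the "sensed" spectrum, and use those energies to weight the filter outputs before the cepstral transform. This is a minimal NumPy-only illustration under our own assumptions; the function names, the normalized energy-weighting scheme, and all parameter values are ours, not the authors' exact implementation.

```python
import numpy as np

def mel_filterbank(n_filters=20, n_fft=512, sr=16000):
    """Standard triangular mel-spaced filterbank (conventional MFCC front end)."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                      # rising slope
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                      # falling slope
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def adaptive_weights(power_spectrum, fb):
    """'Spectrum sensing' step (illustrative assumption): weight each filter
    by the relative signal energy observed in its band, so that bands where
    the stimulus is active dominate the feature vector."""
    band_energy = fb @ power_spectrum
    return band_energy / (band_energy.sum() + 1e-12)

def weighted_mfcc(frame, fb, n_ceps=13):
    """MFCC-like coefficients with energy-adaptive filter weighting."""
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n=512)) ** 2
    w = adaptive_weights(spec, fb)
    log_e = np.log(w * (fb @ spec) + 1e-12)        # weighted log filter energies
    n = len(log_e)
    dct = np.cos(np.pi / n * (np.arange(n) + 0.5)[None, :]
                 * np.arange(n_ceps)[:, None])     # DCT-II basis
    return dct @ log_e
```

In this sketch a pure tone concentrates the weights on the filters covering its frequency, which is one plausible way a filterbank could "adapt" to the stimulus; the paper's actual weighting rule may differ.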
Results: Comparative performance evaluation demonstrates that the proposed automated hearing test achieves results comparable to the clinical ground truth established by an expert audiologist's tests. The overall absolute error of the proposed model relative to the expert audiologist's test is less than 4.9 dB for the pure-tone test and 4.4 dB for the speech audiometry test, with an accuracy of up to 96.67% using the HMM.

Conclusion: The proposed method could offer a second opinion to audiologists and serve as a cost-effective pre-screening test to detect hearing loss at an early stage. We are currently exploring advanced deep learning and optimization approaches to further enhance the performance of the automated testing prototype.
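As a toy illustration of the classifier-comparison stage reported above, the sketch below implements one of the listed classifiers, k-nearest neighbors, as a minimal Euclidean-distance majority vote in plain NumPy on synthetic two-class features. It is a hypothetical stand-in only: the paper's actual classifier bank (SVM, k-NN, AdaBoost, HMM) and its audiometric features are not reproduced here.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-NN classifier: Euclidean distance, majority vote.
    y_train must hold non-negative integer class labels."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)    # distance to every sample
        nearest = y_train[np.argsort(d)[:k]]       # labels of k closest
        preds.append(np.bincount(nearest).argmax())  # majority vote
    return np.array(preds)

# Synthetic, well-separated two-class "features" (illustrative data only)
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], float)
y_train = np.array([0, 0, 0, 1, 1, 1])
X_test = np.array([[0.5, 0.5], [5.5, 5.5]])
```

In practice each classifier in the bank would be trained on the extracted features and compared on held-out responses, which is how a per-classifier accuracy figure such as the reported 96.67% for the HMM would be obtained.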