2018
DOI: 10.13064/ksss.2018.10.1.033
Multi-resolution DenseNet based acoustic models for reverberant speech recognition

Abstract: Although deep neural network-based acoustic models have greatly improved the performance of automatic speech recognition (ASR), reverberation still degrades the performance of distant speech recognition in indoor environments. In this paper, we adopt the DenseNet, which has shown great performance results in image classification tasks, to improve the performance of reverberant speech recognition. The DenseNet enables the deep convolutional neural network (CNN) to be effectively trained by concatenating feature…

Cited by 7 publications (4 citation statements)
References 9 publications
“…In order to solve the above problems, this study proposed a student fatigue state evaluation method based on a quantum particle swarm optimization artificial neural network. The proposed method improves DenseNet [19] by reducing the number of redundant connections so that low-level local detail feature information is fully reflected. Finally, QPSO [20] is used to optimize the DenseNet structure and increase the number of hyperparameters, making the optimization of the network structure more automatic and resolving the uncertainty of manual selection.…”
Section: Introduction
confidence: 99%
“…DenseNet is a convolutional neural network with dense connections, which combines the advantages of ResNet and Highway networks to address the vanishing-gradient problem in deep networks (Park et al., 2018). The idea behind DenseNet is to ensure maximum information transfer between the layers of the network, so it directly connects all layers.…”
Section: Modified DenseNet Architecture
confidence: 99%
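The dense connectivity described in the excerpt above can be sketched in a few lines: each layer receives the concatenation of all preceding feature maps, and its own output is reused by every later layer. This is a minimal, illustrative toy (flat vectors and a random linear map instead of convolutions; all names and sizes are chosen for the example, not taken from the cited papers):

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Toy dense block: each layer's input is the concatenation of
    the block input and all earlier layer outputs (DenseNet-style
    connectivity). A real DenseNet uses convolutions over feature
    maps; here each 'layer' is a random linear map plus ReLU."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features)            # direct connection to every earlier layer
        w = rng.standard_normal((growth_rate, inp.size)) * 0.01
        out = np.maximum(w @ inp, 0.0)            # toy layer producing `growth_rate` features
        features.append(out)                      # output is fed to all subsequent layers
    return np.concatenate(features)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
y = dense_block(x, num_layers=4, growth_rate=8, rng=rng)
print(y.shape)  # 16 input features + 4 layers x 8 new features = (48,)
```

The concatenation is what distinguishes DenseNet from ResNet's additive skip connections: earlier features are kept verbatim rather than summed away, which is the "maximum information transfer" the excerpt refers to.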
“…Moreover, the structural design, which enhances feature propagation and feature reuse, can greatly reduce the number of parameters. DenseNet has been widely used in semantic segmentation [30], speech recognition [31], and image classification [29].…”
Section: Dense Convolutional Network
confidence: 99%
“…where x is the input, y is the output, and a is a constant between 0 and 1. Finally, based on the research in [31,33,34], it is shown that deep networks and multi-level connections can improve the performance of the algorithm. Therefore, we use the Residual Dense Block (RDB) instead of the Residual Block (RB) used in SRGAN as the basic network element.…”
Section: Conv
confidence: 99%
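The equation the excerpt above refers to is not reproduced in this snippet. A common activation matching the description (a constant a between 0 and 1 scaling the input, as used in SRGAN-family networks) is the leaky ReLU; the following is a hedged sketch under that assumption, not the cited paper's exact formula:

```python
import numpy as np

def leaky_relu(x, a=0.2):
    """Leaky ReLU: y = x for x >= 0, y = a * x otherwise,
    with a a constant in (0, 1). Assumed here to match the
    activation the excerpt describes; a = 0.2 is illustrative."""
    return np.where(x >= 0, x, a * x)

print(leaky_relu(np.array([-1.0, 0.0, 2.0]), a=0.2))
# [-0.2  0.   2. ]
```

Unlike a plain ReLU, the small negative slope keeps a nonzero gradient for negative inputs, which helps deep, multi-level networks of the kind the excerpt discusses train stably.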