2017 18th International Conference on Electronic Packaging Technology (ICEPT)
DOI: 10.1109/icept.2017.8046759
A wearable bone-conducted speech enhancement system for strong background noises

Cited by 15 publications (4 citation statements); references 3 publications.
“…To overcome this, a wearable bone-conducted speech system was implemented; signals were collected from various skull positions, which have very high-frequency components, and compared with air-conducted speech signals. According to Boyan Huang et al. [22], an FIR filter alone may not perform well in enhancing bone- and air-conducted speech; combining it with the trigonometric expansions of a FLANN filter yields better enhancement. The proposed scheme gives a smaller MSE between the recovered speech and the clean speech signals, as shown in Table 2.2.…”
Section: Brief Review on Approaches
Mentioning, confidence: 99%
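The statement above contrasts a plain FIR filter with one augmented by a trigonometric functional-link (FLANN) expansion, adapted against a clean reference and judged by MSE. The following is a minimal sketch of that idea only; the signal names, filter order, expansion order, step size, and the synthetic BC/AC signals are illustrative assumptions, not values from the cited paper.

```python
# Sketch: LMS-adapted filter whose regressor combines FIR taps with a
# trigonometric (FLANN-style) expansion, trained so the filtered
# bone-conducted (BC) signal approximates the air-conducted (AC) reference.
import numpy as np

def trig_expand(x, order=2):
    """Expand a tap vector x with sin/cos terms (functional-link expansion)."""
    feats = [x]
    for p in range(1, order + 1):
        feats.append(np.sin(np.pi * p * x))
        feats.append(np.cos(np.pi * p * x))
    return np.concatenate(feats)

def flann_fir_lms(bc, ac, taps=16, order=2, mu=1e-3):
    """Adapt combined FIR + expansion weights by LMS against `ac`."""
    dim = taps * (2 * order + 1)
    w = np.zeros(dim)
    out = np.zeros(len(bc))
    for n in range(taps, len(bc)):
        x = bc[n - taps:n][::-1]       # most recent FIR taps
        phi = trig_expand(x, order)    # expanded regressor
        out[n] = w @ phi
        e = ac[n] - out[n]             # error against clean reference
        w += mu * e * phi              # LMS weight update
    mse = np.mean((ac[taps:] - out[taps:]) ** 2)
    return out, mse

# Toy stand-ins for paired BC/AC recordings (hypothetical data).
rng = np.random.default_rng(0)
ac = np.sin(2 * np.pi * 0.01 * np.arange(4000))
bc = np.tanh(0.8 * ac) + 0.05 * rng.standard_normal(4000)  # toy nonlinear channel
enhanced, mse = flann_fir_lms(bc, ac)
print(f"MSE after adaptation: {mse:.4e}")
```

The point of the expansion is that the sin/cos terms let a linear-in-the-weights update capture a mildly nonlinear BC-to-AC channel that a plain FIR regressor cannot.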
“…The first one is to explore the non-linear mapping of BC speech signals to AC speech signals [14,15,16,17]. Recently, a deep neural network was used to map the spectral coefficients of the linear prediction coding of BC speech to the coefficients of AC speech [18]. Liu et al. utilized a deep noise reduction autoencoder to achieve the abovementioned mapping [19].…”
Section: Introduction
Mentioning, confidence: 99%
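The statement above describes a frame-wise mapping from BC spectral coefficients to AC coefficients learned by a deep network. Below is a minimal sketch of that kind of mapping under assumptions of my own: the feature dimension, layer sizes, and the random training tensors are placeholders, not details from the cited works.

```python
# Sketch: small feed-forward network mapping per-frame BC spectral
# features (e.g. LPC-derived coefficients) to the paired AC features.
import torch
import torch.nn as nn

feat_dim = 20            # assumed number of coefficients per frame
model = nn.Sequential(
    nn.Linear(feat_dim, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, feat_dim),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder tensors: rows are frames, columns are spectral coefficients.
bc_feats = torch.randn(1024, feat_dim)   # bone-conducted features
ac_feats = torch.randn(1024, feat_dim)   # paired air-conducted targets

for epoch in range(50):
    opt.zero_grad()
    pred = model(bc_feats)
    loss = loss_fn(pred, ac_feats)
    loss.backward()
    opt.step()
print(f"final frame-wise MSE: {loss.item():.4f}")
```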
“…In early algorithms [10]-[13], transmission channel functions were represented by low-dimensional spectral envelope features, and Gaussian mixture models (GMMs) and shallow neural networks were often employed to learn the mapping relationship. Recently, deep neural networks (DNNs) have been used to learn the complex nonlinear mapping relationship [14], [15], and some researchers have started to use high-dimensional features to represent the difference between the two kinds of speech. For instance, a deep denoising autoencoder is used to map the high-dimensional Mel magnitude spectra of the two kinds of speech and achieves considerable improvements [16].…”
Section: Introduction
Mentioning, confidence: 99%
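For the denoising-autoencoder variant mentioned in that statement, a minimal sketch of the structure is given below: an encoder-decoder network trained to turn the log-Mel magnitude spectrum of a BC frame into that of the paired AC frame. The Mel dimension, bottleneck size, and random data are assumptions for illustration only.

```python
# Sketch: encoder-decoder (denoising-autoencoder-style) mapping of
# log-Mel magnitude frames from BC speech to AC speech.
import torch
import torch.nn as nn

n_mels = 40                              # assumed Mel filterbank size
autoencoder = nn.Sequential(
    nn.Linear(n_mels, 256), nn.ReLU(),   # encoder
    nn.Linear(256, 64), nn.ReLU(),       # bottleneck
    nn.Linear(64, 256), nn.ReLU(),       # decoder
    nn.Linear(256, n_mels),
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

# Placeholder log-Mel magnitude frames for BC input and AC target.
bc_logmel = torch.randn(2048, n_mels)
ac_logmel = torch.randn(2048, n_mels)

for step in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(bc_logmel), ac_logmel)
    loss.backward()
    opt.step()
print(f"reconstruction MSE: {loss.item():.4f}")
```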