2019 International Conference on Information Science and Communications Technologies (ICISCT)
DOI: 10.1109/icisct47635.2019.9011946
A Method of Mapping a Block of Main Memory to Cache in Parallel Processing of the Speech Signal

Cited by 14 publications (3 citation statements). References 2 publications.
“…Novel methods, algorithms, and more modern applications are currently being developed and improved for the segmentation of speech signals and the calculation of parametric indicators of selected fragments, thereby creating a spectrogram of speech signals using spectral analysis [ 4 , 5 , 6 , 7 ]. Speaker identification with diversified voice clips across the globe is a crucial and challenging task, especially in extracting vigorous and discriminative features [ 8 ].…”
Section: Related Work
confidence: 99%
“…The TBB and OpenMP packages were used to create parallel algorithms for the spectral transformations. In these methods, the signal is divided into frames of N samples, e.g., N = 16, 32, …, or 4096 [ 4 ]. Each frame is sized to match the block size of the cache memory, since cache behavior is a factor that directly affects the efficiency of parallel processing.…”
Section: Pre-processing: Materials Overview
confidence: 99%
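The frame-splitting step described in the statement above can be sketched as follows. This is a minimal NumPy illustration, not the cited TBB/OpenMP implementation: the function names and the choice of `frame_size = 1024` are assumptions for the example, and the per-frame FFT stands in for the spectral transformation that the cited work parallelizes one frame per thread.

```python
import numpy as np

def split_into_frames(signal, frame_size):
    """Split a 1-D speech signal into fixed-size frames.

    frame_size is a power of two (16, 32, ..., 4096) chosen to
    match the cache block size, per the statement above.
    Trailing samples that do not fill a frame are dropped.
    """
    n_frames = len(signal) // frame_size
    return signal[: n_frames * frame_size].reshape(n_frames, frame_size)

def spectra_per_frame(frames):
    # Magnitude spectrum of each frame independently; in the cited
    # work each frame would be handled by a separate thread.
    return np.abs(np.fft.rfft(frames, axis=1))

signal = np.random.default_rng(0).standard_normal(10_000)
frames = split_into_frames(signal, frame_size=1024)
spec = spectra_per_frame(frames)
```

Because frames are independent, the outer loop over them has no data dependencies, which is what makes the one-frame-per-cache-block layout amenable to thread-level parallelism.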
“…High-performance hardware is needed for deep learning algorithms that use huge datasets, such as heterogeneous computing systems (M. Rakhimov & M. Ochilov, 2021) or parallel computing techniques. At the moment, parallel and distributed computing technologies (M. Musaev & M. Rakhimov, 2019); (M. Rakhimov, D. Mamadjanov and A. Mukhiddinov, 2020) can also be used to overcome this issue. The major goal of this study is to choose significant parametric variables from the gathered disease data that produce more F1-score outcomes.…”
Section: Introduction
confidence: 99%