2020
DOI: 10.1109/access.2020.3016989
Minimum BER Criterion and Adaptive Moment Estimation Based Enhanced ICA for Wireless Communications

Abstract: This paper concentrates on investigating an enhanced independent component analysis (ICA) method for blind separation of signals corrupted by noise in wireless communications. Traditional ICA methods often have inadequate noise resistance or insufficient separation capability in noisy environments and thus fail to satisfy practical application requirements. For this reason, two mechanisms are introduced to establish a modified cost function and carry out the optimization task…

Cited by 7 publications (4 citation statements)
References 21 publications (35 reference statements)
“…As an optimizer, we utilized adaptive moment estimation (ADAM), which combines momentum and root mean square propagation (RMSprop). Momentum remembers the direction of past movements by adding a certain value to the calculated gradient, while RMSProp [34,35] uses an exponential moving average to give more weight to the most recent gradients rather than simply accumulating them. The total number of parameters in the entire model was 3,340,674.…”
Section: Behavior Recognition Learning Model
confidence: 99%
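The combination the excerpt describes — a momentum-style moving average of the gradients plus an RMSprop-style moving average of the squared gradients — can be sketched as a single ADAM step. This is a minimal NumPy illustration using the commonly cited default hyperparameters, not code from the cited work:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update combining momentum (first moment) and RMSprop (second moment)."""
    m = beta1 * m + (1 - beta1) * grad        # momentum: moving average of past gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # RMSprop: moving average of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias correction for the zero-initialized averages
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Because `v_hat` is an exponential moving average, recent gradient magnitudes dominate the denominator and hence the effective step size — the behavior the quoted passage attributes to RMSprop.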
“…According to the behavior of the convergence curves, the networks adapt well to the data as training progresses, producing a gradually decreasing loss on the validation set; the learning rates are therefore adequate, no overfitting appears, and training is fast, lasting 200 computational epochs (average of 200 ms/epoch). These validation factors were possible due to adaptive gradient optimization, which has advantages such as a low memory requirement during gradient formation [34], high computational efficiency [34,37], and simple, straightforward implementation [37,38]. The main validation parameters of the proposed models are summarized in Table 1.…”
Section: Prediction Of Retention Times Through Deep Learning Models
confidence: 99%
“…The optimal update of the network parameters, which minimizes the loss function, is obtained from the gradient information of network backpropagation. Adaptive moment estimation (Adam) [27] is an adaptive learning-rate method that uses first-order and second-order moment estimates of the gradient to dynamically adjust the learning rate of each parameter, expressed by…”
Section: Layer Type
confidence: 99%
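The per-parameter adaptive learning-rate behavior described in this last excerpt can be illustrated with a short loop minimizing a simple quadratic loss. This is an illustrative NumPy sketch of the standard Adam update with bias-corrected first- and second-order moment estimates; the equation elided in the quotation is not reproduced here:

```python
import numpy as np

# Minimize 0.5 * theta_i^2 per coordinate; the gradient is simply theta.
theta = np.array([5.0, -3.0])
m = np.zeros_like(theta)          # first-order moment estimate of the gradient
v = np.zeros_like(theta)          # second-order moment estimate of the gradient
lr, b1, b2, eps = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 501):
    g = theta                     # gradient of the quadratic loss
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    m_hat = m / (1 - b1 ** t)     # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)     # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter step

print(theta)  # both coordinates are driven toward zero
```

Each coordinate is scaled by its own second-moment estimate, so parameters with large recent gradients take proportionally smaller raw steps — this is the dynamic per-parameter learning-rate adjustment the excerpt refers to.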