2021
DOI: 10.1109/access.2021.3052102

An Efficient Algorithm for the Incremental Broad Learning System by Inverse Cholesky Factorization of a Partitioned Matrix

Abstract: In this paper, we propose an efficient algorithm to accelerate the existing Broad Learning System (BLS) algorithm for newly added nodes. The existing BLS algorithm computes the output weights from the pseudoinverse with the ridge regression approximation, and updates the pseudoinverse iteratively. By comparison, the proposed BLS algorithm computes the output weights from the inverse Cholesky factor of the Hermitian matrix in the calculation of the pseudoinverse, and updates the inverse Cholesky factor efficiently…
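To make the abstract's contrast concrete: the ridge solution is W = (AᵀA + λI)⁻¹AᵀY, and instead of forming that inverse directly, one can work with an upper-triangular factor F satisfying FFᵀ = (AᵀA + λI)⁻¹. Below is a minimal NumPy sketch of this idea under stated assumptions; the names (A, Y, lam, F) and the helper function are illustrative, not the paper's notation or exact recursions.

```python
import numpy as np

def ridge_weights_via_inverse_cholesky(A, Y, lam=1e-3):
    """Ridge solution W = (A^T A + lam*I)^{-1} A^T Y, computed through
    the inverse Cholesky factor F with F F^T = (A^T A + lam*I)^{-1}.
    Illustrative sketch only; not the paper's exact algorithm."""
    k = A.shape[1]
    # Hermitian (here real symmetric) matrix appearing in the ridge inverse
    S = A.T @ A + lam * np.eye(k)
    # Lower-triangular Cholesky factor L with L L^T = S
    L = np.linalg.cholesky(S)
    # Inverse Cholesky factor: F = L^{-T}, so F F^T = S^{-1}
    F = np.linalg.inv(L).T
    # Output weights: W = F (F^T (A^T Y))
    return F @ (F.T @ (A.T @ Y))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))   # mapped feature + enhancement nodes
    Y = rng.standard_normal((200, 3))    # training targets
    W = ridge_weights_via_inverse_cholesky(A, Y)
    # Agrees with the direct ridge solve up to floating-point error
    S = A.T @ A + 1e-3 * np.eye(50)
    assert np.allclose(W, np.linalg.solve(S, A.T @ Y))
```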

Cited by 4 publications (3 citation statements) · References 32 publications
“…Moreover, a foreseeable challenge in integrating multifeatures into Word-unit BLS is the substantial increase in training time. One potential solution to this challenge is the Zhu method [46], which reduces computational complexity by utilizing the inverse Cholesky factor of the Hermitian matrix during pseudoinverse computation and factor updates. Despite the limitations of the study, this paper introduced a novel framework that integrates BLS with linguistic theory.…”
Section: B. Limitations (mentioning)
Confidence: 99%
“…To reduce the computational complexity, the recursive BLS algorithm and the square-root BLS algorithm proposed in [12] compute the output weights from the inverse and the inverse Cholesky factor of the Hermitian matrix in the ridge inverse, respectively, which are usually smaller than the ridge inverse. On the other hand, the square-root BLS algorithms on added nodes have been proposed in [13], [14] to improve the original BLS on added nodes in [7]. The square-root BLS algorithm proposed in [14] still computes the generalized inverse solution for the output weights, while in [13], the square-root BLS algorithm has been proposed to compute the ridge solution for the output weights from the inverse Cholesky factor of the Hermitian matrix in the ridge inverse.…”
Section: Introduction (mentioning)
Confidence: 99%
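The "partitioned matrix" in the paper's title refers to updating the inverse Cholesky factor in block form when nodes are appended, rather than refactorizing from scratch. The sketch below is derived from the standard partitioned-inverse identities, not taken from the paper: with S = AᵀA + λI and new nodes H, the cross block is C = AᵀH, the new diagonal block is D = HᵀH + λI, and the updated factor is F′ = [[F, −F(FᵀC)G], [0, G]], where G is the inverse Cholesky factor of the Schur complement D − CᵀS⁻¹C. The function name and update order are assumptions; the paper's actual recursions may differ in detail.

```python
import numpy as np

def expand_inverse_cholesky(F, A, H, lam=1e-3):
    """Given upper-triangular F with F F^T = (A^T A + lam*I)^{-1},
    return F' with F' F'^T = (A'^T A' + lam*I)^{-1} for A' = [A, H],
    using block formulas instead of a full refactorization.
    Hedged sketch; not the paper's exact recursions."""
    C = A.T @ H                                      # cross block A^T H
    D = H.T @ H + lam * np.eye(H.shape[1])           # new diagonal block
    T = F.T @ C                                      # so T^T T = C^T S^{-1} C
    schur = D - T.T @ T                              # Schur complement
    G = np.linalg.inv(np.linalg.cholesky(schur)).T   # its inverse Cholesky factor
    B = -F @ (T @ G)                                 # off-diagonal block of F'
    top = np.hstack([F, B])
    bottom = np.hstack([np.zeros((G.shape[0], F.shape[1])), G])
    return np.vstack([top, bottom])
```

Because F′ stays block upper-triangular, only the Schur complement, whose size equals the number of newly added nodes, needs a fresh Cholesky factorization; that is where the speedup over recomputing the full ridge inverse comes from.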