2009 2nd IEEE International Conference on Computer Science and Information Technology
DOI: 10.1109/iccsit.2009.5234726
New neural networks based on Taylor series and their research

Cited by 6 publications (5 citation statements)
References 2 publications
“…As to the TaylorNet, it is also worth mentioning that our proposal is very different from the previous work in the literature [91][92][93][94][95], which have also explored the Taylor series in neural networks. Most of them have specific motivations and work in different frameworks, and there is no explicit hierarchical structure employed there.…”
Section: Discussion
confidence: 95%
“…Take the Taylor series of the unary function as an example. In order to describe the finite Taylor series, considering the sum of the first n + 1 terms, we can get formula (5).…”
Section: Neural Network Model
confidence: 99%
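The finite Taylor sum described in that statement (formula (5) itself is not reproduced on this page) can be sketched for a concrete unary function. The name `taylor_exp` and the choice of e^x are illustrative assumptions, not the cited paper's notation:

```python
import math

def taylor_exp(x, n):
    """Finite Taylor series of e^x: the sum of the first n + 1 terms x^k / k!.

    Hypothetical sketch of a truncated Taylor sum; not the cited paper's formula (5).
    """
    return sum(x**k / math.factorial(k) for k in range(n + 1))

approx = taylor_exp(1.0, 10)  # close to math.e
```

Truncating after n + 1 terms is what makes the series representable by a finite model; the truncation error shrinks factorially in n for e^x.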
“…Explicit polynomial fitting often uses Taylor series [5,6]. Taylor series are often used in the field of mathematics, especially in the research of approximate calculations.…”
Section: Introduction
confidence: 99%
“…As to the TaylorNet, it is also worth mentioning that our proposal is very different from the previous work in the literature, [91][92][93][94][95][96][97] which have also explored the Taylor series in neural networks. Most of them have specific motivations and work in different frameworks, and there is no explicit hierarchical structure employed there.…”
confidence: 96%
“…Most of them have specific motivations and work in different frameworks, and there is no explicit hierarchical structure employed there. For example, Chen et al [91] used a single-layer neural network similar to Eq. (3) to approximate the Taylor expansion of a single-variable function.…”
confidence: 99%
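Eq. (3) is not reproduced on this page, so the following is only a hedged sketch of the general idea in that statement: a single linear layer over polynomial features of one variable, whose learned weights play the role of Taylor coefficients. The function `fit_taylor_layer`, the basis construction, and the least-squares fit are assumptions for illustration, not Chen et al.'s actual model.

```python
import numpy as np

def fit_taylor_layer(f, d, lo=-1.0, hi=1.0, m=200):
    """Fit a single linear layer over the polynomial basis [1, x, ..., x^d].

    Hypothetical sketch: the learned weights w act as Taylor-like
    coefficients of f, recovered by least squares rather than by training.
    """
    x = np.linspace(lo, hi, m)
    X = np.vander(x, d + 1, increasing=True)  # columns: x^0, x^1, ..., x^d
    w, *_ = np.linalg.lstsq(X, f(x), rcond=None)
    return w

# For f = sin on [-1, 1], the fitted weights come out close to sin's
# Taylor coefficients: w[1] near 1 and w[3] near -1/6.
w = fit_taylor_layer(np.sin, 7)
```

On a small interval the least-squares solution nearly coincides with the truncated Taylor expansion, which is why a single linear layer over such features can approximate it.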