2023
DOI: 10.3934/mfc.2021042

Generalized Kantorovich modifications of positive linear operators

Abstract: Starting with a positive linear operator, we apply the Kantorovich modification and a related modification. The resulting operators are investigated. We are interested in the eigenstructure, the Voronovskaya formula, the induced generalized convexity, invariant measures, and iterates. Some known results from the literature are extended.
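As a concrete point of reference for the abstract, the classical prototype of this construction is the Bernstein–Kantorovich operator on $C[0,1]$; the paper treats generalized versions, so the formula below is only the standard special case and may differ in details from the modification actually studied:
$$K_n f(x) = (n+1)\sum_{k=0}^{n}\binom{n}{k}x^{k}(1-x)^{n-k}\int_{k/(n+1)}^{(k+1)/(n+1)} f(t)\,dt, \qquad K_n = D\circ B_{n+1}\circ I,$$
where $B_{n+1}$ is the Bernstein operator of degree $n+1$, $D$ denotes differentiation, and $(If)(x)=\int_0^{x} f(t)\,dt$; a generalized Kantorovich modification replaces $B_{n+1}$ by another positive linear operator.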

Cited by 14 publications (6 citation statements). References 12 publications (13 reference statements).

“…, m, and compute successively $x^{(1)}, x^{(2)}, \ldots$, using (17) or alternatively (18). So we get the sequence $(x^{(k)})_{k\ge 0}$.…”
Section: Consider the System of Equations (mentioning, confidence: 99%)
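The excerpt above describes a successive-approximation scheme for a system of equations; formulas (17) and (18) of the citing paper are not reproduced here, so the sketch below assumes a generic fixed-point form x = T(x) with a hypothetical update map T (a minimal illustration, not the authors' specific scheme):

```python
import numpy as np

# Minimal sketch of the successive-approximation scheme described in the
# excerpt: starting from x^(0), compute x^(1), x^(2), ... by applying an
# update map T (standing in for the unspecified formulas (17)/(18) of the
# citing paper, which are not reproduced here).
def successive_approximations(T, x0, tol=1e-10, max_iter=1000):
    """Iterate x^(k+1) = T(x^(k)) until successive iterates are close."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_next = T(x)
        if np.linalg.norm(x_next - x, ord=np.inf) < tol:
            return x_next, k + 1
        x = x_next
    return x, max_iter

# Hypothetical example: solve the 2x2 linear system x = A x + b, a contraction
# here because the spectral radius of A is 0.5 < 1; the fixed point is (2, 4).
A = np.array([[0.3, 0.1],
              [0.2, 0.4]])
b = np.array([1.0, 2.0])
x_star, iters = successive_approximations(lambda x: A @ x + b, x0=np.zeros(2))
print(x_star, iters)
```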
“…Similar studies involve modifications of classical positive linear operators. The sequence of Kantorovich operators is a prominent example of a modification of the classical Bernstein operators; see, e.g., [1], [9], [11], [18] and the references therein. This paper is devoted to such problems for the Kantorovich modifications of linking operators and the Stancu modifications of Bernstein operators.…”
(mentioning, confidence: 99%)
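As a small numerical illustration of the classical Kantorovich modification of the Bernstein operators mentioned in the excerpt (the standard operator only, not the linking or Stancu variants studied in the citing paper), here is a minimal Python sketch; the function name `kantorovich` is illustrative:

```python
import numpy as np
from math import comb
from scipy.integrate import quad

# Classical Bernstein-Kantorovich operator on [0, 1]:
#   K_n f(x) = (n+1) * sum_k C(n,k) x^k (1-x)^(n-k)
#                        * integral of f over [k/(n+1), (k+1)/(n+1)].
def kantorovich(f, n, x):
    total = 0.0
    for k in range(n + 1):
        weight = comb(n, k) * x**k * (1 - x)**(n - k)          # Bernstein basis p_{n,k}(x)
        integral, _ = quad(f, k / (n + 1), (k + 1) / (n + 1))  # integral of f over the k-th cell
        total += weight * integral
    return (n + 1) * total

# Example: K_n applied to f(t) = t^2 at x = 0.5 approaches 0.25 as n grows.
for n in (5, 20, 80):
    print(n, kantorovich(lambda t: t * t, n, 0.5))
```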
“…Certainly, the operators $Q_n^{\beta}$, having an integral form, are also suitable for $L^p$-approximation, $1 \le p < \infty$. Actually, some operators of discrete type have been appropriately modified in order to approximate functions in $L^p$ (see, for instance, [2], [14], and [17], among others). The corresponding results have potential applications in machine learning generalization analysis (cf.…”
(mentioning, confidence: 99%)
“…satisfy the reduction formula (Sharma, 2016; Acu and Agrawal, 2019; Acu and Tachev, 2021; Acu et al., 2023; Adell and Cárdenas-Morales, 2022)
$$S(v, n, x, y) = x\,S(v-1, n, x, y) + n\beta\,S(v, n-1, x+\beta, y).$$
By repeated use of the reduction formula, we can show that
$$S(1, n, x, y) = \sum_{v=0}^{n} \binom{n}{v}\, v!\, \beta^{v}\,(x + y + n\beta)^{n-v},$$
since $S(0, n, x, y) = (x + y + n\beta)^{n}$.…”
(mentioning, confidence: 99%)