Abstract: In this paper, a new improved smoothing Newton algorithm for the nonlinear complementarity problem is proposed. The method has two advantages. First, compared with the classical smoothing Newton method, it does not require nonsingularity of the smoothing approximation function; second, it inherits the advantage of the classical smoothing Newton method in that only one linear system of equations needs to be solved at each iteration. Without the need of strict complementarity conditions and the …
“…The first is a nonsmooth quasi-Newton method proposed in [5], which we call Algorithm 2. The second is a smoothing Jacobian method proposed in [18] which, unlike our proposal, uses a smoothing of the Fischer function (Algorithm 1 with λ = 2); we call it Algorithm 3. The third is a smooth Newton method proposed recently in [32], which we call Algorithm 4. We vary λ in two ways, obtaining two versions of our algorithm: Method 1 uses the dynamic choice of λ from [5] (this strategy combines the efficiency of the Fischer function far from the solution with that of the minimum function near it); Method 2 varies λ randomly in the interval (0, 4).…”
Section: Numerical Results
“…For the numerical tests, we consider nine complementarity problems associated with the functions Kojima-Shindo (Koj-Shi), Kojima-Josephy (Koj-Jo), modified Mathiesen (Math mod), Mathiesen (Mathiesen), Billups (Billups) [7], [25]; Nash-Cournot (Nash-Co) [16], Hock-Schittkowski (HH 66) [32], Geiger-Kanzow (Geiger-Kanzow) [15], Ahn (Ahn) [2]. We implemented Algorithm 1 (with Methods 1 and 2) and the test functions in MATLAB and used the following starting points taken from [5], [32],…”
Section: Numerical Results
“…We developed their convergence theory and performed numerical tests to analyze their global and local performance. Our proposal presents some advantages in terms of global convergence compared to other methods, such as those proposed in [5], [18], and [32].…”
In this paper, we use the smoothing Jacobian strategy to propose a new algorithm for solving complementarity problems based on their reformulation as a nonsmooth system of equations. This algorithm can be seen as a generalization of the one proposed in [18]. We develop its global convergence theory and, under certain assumptions, we show that the proposed algorithm converges locally q-superlinearly or q-quadratically to a solution of the problem. Numerical experiments show good performance of this algorithm.
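The abstracts above turn on one idea: replace the nonsmooth complementarity reformulation with a smooth approximation and solve a single linear system per iteration. The following minimal sketch illustrates that pattern for a linear complementarity problem using the smoothed Fischer-Burmeister function; the function names, the μ-reduction rule, and the stopping tolerance are illustrative assumptions, not the algorithm of either cited paper.

```python
import numpy as np

def smoothed_fb(a, b, mu):
    # Smoothed Fischer-Burmeister function: phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2).
    # As mu -> 0 this recovers phi(a, b) = a + b - sqrt(a^2 + b^2), whose zeros
    # encode the complementarity conditions a >= 0, b >= 0, a*b = 0.
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu**2)

def smoothing_newton(F, J, x0, mu=1.0, tol=1e-10, max_iter=100):
    """Hypothetical smoothing Newton sketch for the NCP: find x >= 0 with
    F(x) >= 0 and x^T F(x) = 0. One linear system is solved per iteration,
    as in the classical smoothing Newton framework described above."""
    x = x0.astype(float)
    for _ in range(max_iter):
        Fx = F(x)
        H = smoothed_fb(x, Fx, mu)
        if np.linalg.norm(H) < tol:
            break
        # Jacobian of the smoothed system via the chain rule:
        # d(phi)/da = 1 - a/s, d(phi)/db = 1 - b/s with s = sqrt(a^2 + b^2 + 2*mu^2).
        s = np.sqrt(x**2 + Fx**2 + 2.0 * mu**2)
        JH = np.diag(1.0 - x / s) + np.diag(1.0 - Fx / s) @ J(x)
        dx = np.linalg.solve(JH, -H)   # the single linear system per iteration
        x = x + dx
        mu = max(mu * 0.1, 1e-12)      # drive the smoothing parameter toward zero
    return x
```

For a strongly monotone test problem such as F(x) = Mx + q with M symmetric positive definite, the iteration converges rapidly to the unique solution; the dynamic choice of λ discussed in the snippet would replace the fixed Fischer-Burmeister smoothing with a one-parameter family.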
“…As a result, reconstruction using TGV regularization can preserve edges while suppressing the staircase effect. To solve the TGV model efficiently, many optimization algorithms have been proposed, such as Newton's method, the split Bregman method, the alternating direction method of multipliers, and gradient descent [35][36][37][38][39][40][41][42][43]. Experiments show that TGV outperforms TV-based regularization models in image reconstruction.…”
It has been proved that total generalized variation (TGV) can better preserve edges while suppressing the staircase effect. In this paper, we propose an effective hybrid regularization model based on second-order TGV and a wavelet frame. The proposed model inherits the advantages of both TGV regularization and wavelet-frame regularization: it eliminates the staircase effect while protecting sharp edges, and at the same time has good capability for sparsely approximating piecewise smooth functions. The alternating direction method of multipliers (ADMM) is employed to solve the new model. Numerical results show that the proposed model preserves more details and achieves higher visual quality than some current state-of-the-art methods.
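To make the ADMM splitting concrete, here is a minimal sketch for the simplest related problem, 1D total-variation denoising. It is illustrative only: the paper's actual model couples second-order TGV with a wavelet-frame term, which requires additional splitting variables, and the function names and parameters below are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_tv1d(f, lam=1.0, rho=1.0, n_iter=200):
    """Hypothetical ADMM sketch for min_x 0.5*||x - f||^2 + lam*||Dx||_1,
    where D is the forward-difference operator. The split introduces z = Dx
    and alternates a quadratic x-update, a shrinkage z-update, and a dual step."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)         # (n-1) x n forward differences
    A = np.eye(n) + rho * D.T @ D          # fixed system matrix for the x-update
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)                    # scaled dual variable
    x = f.copy()
    for _ in range(n_iter):
        x = np.linalg.solve(A, f + rho * D.T @ (z - u))  # quadratic subproblem
        Dx = D @ x
        z = soft_threshold(Dx + u, lam / rho)            # prox of the l1 term
        u = u + Dx - z                                   # dual ascent
    return x
```

Each iteration only solves one fixed, sparse linear system plus a shrinkage, which is what makes ADMM attractive for the hybrid TGV/wavelet-frame model described above.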
“…On the other hand, the investigation of the regularity of discrete maximal operators has also attracted the attention of many authors (cf. [2,5,7,18,21,24,27,29,32,36,37]). Let us recall some definitions and background.…”
Given $m\geq 1$, $0\leq \lambda \leq 1$, and a discrete vector-valued function $\vec{f}=(f_{1},\ldots,f_{m})$ with each $f_{j}:\mathbb{Z}^{d}\rightarrow \mathbb{R}$, we consider the discrete multilinear fractional nontangential maximal operator
$$ \mathrm{M}_{\alpha,\mathcal{B}}^{\lambda }(\vec{f})(\vec{n})=\sup_{\substack{r>0,\ \vec{x}\in \mathbb{R}^{d} \\ \vert \vec{n}-\vec{x}\vert \leq \lambda r}}\frac{1}{N(B_{r}(\vec{x}))^{m-\frac{\alpha }{d}}}\prod_{j=1}^{m}\sum_{\vec{k}\in B_{r}(\vec{x})\cap \mathbb{Z}^{d}}\bigl\vert f_{j}(\vec{k})\bigr\vert , $$
where $\mathcal{B}$ is the collection of all open balls $B\subset \mathbb{R}^{d}$, $B_{r}(\vec{x})$ is the open ball in $\mathbb{R}^{d}$ centered at $\vec{x}\in \mathbb{R}^{d}$ with radius $r$, and $N(B_{r}(\vec{x}))$ is the number of lattice points in the set $B_{r}(\vec{x})$. We show that the operator $\vec{f}\mapsto |\nabla \mathrm{M}_{\alpha,\mathcal{B}}^{\lambda }(\vec{f})|$ is bounded and continuous from $\ell ^{1}(\mathbb{Z}^{d})\times \ell ^{1}(\mathbb{Z}^{d})\times \cdots \times \ell ^{1}(\mathbb{Z}^{d})$ to $\ell ^{q}(\mathbb{Z}^{d})$ if $0\leq \alpha < md$ and $q\geq 1$ with $q>\frac{d}{md-\alpha +1}$. We also prove that the same result holds for the discrete multilinear fractional nontangential maximal operators associated with cubes. These results are significant and natural extensions of what was known previously.
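To fix intuition for the definition, the following sketch evaluates the simplest special case: the centered ($\lambda = 0$) operator with $m = 1$, $d = 1$, where the supremum reduces to a scan over symmetric windows on $\mathbb{Z}$. The routine name and the finite-support convention (values outside the array are treated as zero) are illustrative assumptions.

```python
import numpy as np

def discrete_maximal_1d(f, alpha=0.0, r_max=None):
    """Hypothetical sketch of the centered (lambda = 0) case for m = 1, d = 1:
    M_alpha f(n) = sup over half-widths w of N^{-(1 - alpha)} * sum_{|k-n| <= w} |f(k)|,
    where N = 2w + 1 is the number of lattice points in the ball on Z.
    Here m - alpha/d reduces to 1 - alpha."""
    n_pts = len(f)
    if r_max is None:
        r_max = n_pts  # for alpha = 0, radii beyond the support only dilute the average
    out = np.zeros(n_pts)
    for n in range(n_pts):
        best = 0.0
        for w in range(r_max + 1):
            lo, hi = max(0, n - w), min(n_pts, n + w + 1)
            count = 2 * w + 1                      # lattice points of the ball, on all of Z
            total = np.sum(np.abs(f[lo:hi]))       # f vanishes outside the array
            best = max(best, total / count ** (1.0 - alpha))
        out[n] = best
    return out
```

For the delta function $f = \mathbb{1}_{\{0\}}$ this reproduces the familiar decay $Mf(n) \sim 1/(2|n|+1)$, which is exactly why $\ell^1 \to \ell^q$ bounds require $q$ above the stated threshold.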