2020
DOI: 10.1016/j.neucom.2020.02.100

Robust ellipse fitting based on Lagrange programming neural network and locally competitive algorithm

Cited by 12 publications (3 citation statements)
References 45 publications
“…The LPNN technique [42] can be used in many applications, such as target localization in radar systems [9], ℓ1-norm-based sparse recovery [8], and ellipse fitting [43]. It was developed for solving a general non-linear constrained optimization problem:…”
Section: LPNN
Mentioning confidence: 99%
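The LPNN formulation referenced above solves min f(x) subject to h(x) = 0 by letting variable neurons descend the Lagrangian gradient while a multiplier neuron ascends it. The following is a minimal sketch of those dynamics on a hypothetical toy problem (not the paper's ellipse-fitting formulation); the function name `lpnn_toy` and the specific objective are illustrative assumptions:

```python
import numpy as np

def lpnn_toy(steps=20000, dt=1e-3):
    # Toy problem (assumed for illustration):
    #   f(x) = (x1 - 2)^2 + (x2 - 1)^2,  h(x) = x1 + x2 - 1
    # LPNN dynamics on the Lagrangian L(x, lam) = f(x) + lam * h(x):
    #   dx/dt = -dL/dx (variable neurons), dlam/dt = +dL/dlam (multiplier neuron)
    x = np.zeros(2)   # variable neurons
    lam = 0.0         # Lagrange-multiplier neuron
    for _ in range(steps):
        grad_f = 2.0 * (x - np.array([2.0, 1.0]))
        grad_h = np.array([1.0, 1.0])
        x = x - dt * (grad_f + lam * grad_h)   # descend in x
        lam = lam + dt * (x.sum() - 1.0)       # ascend in lam
    return x
```

For this convex toy problem the KKT point is x* = (1, 0), and the Euler-discretized saddle flow settles there; the paper's actual network augments this scheme with LCA dynamics for the ℓ1 term.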
“…This entails different types of field data, as illustrated in Table II, where data mining techniques are applied to the analysis of real-time monitoring data and the acquisition of prognostic and diagnostic data for measuring operational performance linked to the usage profile. Data are continuously collected for performance monitoring according to a set of precursors for an individual type of device [22]. In contrast, intermittent fault detection, which addresses faults that often occur randomly, is assessed with the Mahalanobis distance (MD) technique for univariate data reduction [23].…”
Section: Field Data
Mentioning confidence: 99%
“…Although performance has been improved to some extent, the inherent disadvantage of ℓ2-norm-based methods is that the squared term amplifies the influence of outliers and leads to a rough result. To overcome this shortcoming, the ℓ1-norm [12] and ℓ0-norm [13] have been adopted to address this issue; theoretically, algorithms based on the ℓp-norm (0 < p < 2) are less sensitive to outliers, although the selection of the iterative initial value remains an open problem. Given the advantage of the ℓ1-norm and the strategy of directly minimizing the algebraic distance, a natural approach is to replace the ℓ2 term with an ℓ1 term in Fitzgibbon's model; this yields our ℓ1 model, whose efficacy we explore in the following sections.…”
Section: Introduction
Mentioning confidence: 99%