We propose a three-phase matheuristic that combines an exact method with Variable Neighborhood Search Branching (VNSB) to route a fleet of Electric Vehicles (EVs). EVs are allowed to stop at recharging stations along their routes to recharge their batteries, possibly only partially. We hierarchically minimize the number of EVs used and the total time spent by the EVs, i.e., travel times, charging times, and waiting times (due to customer time windows). The first two phases are based on Mixed Integer Linear Programs that generate feasible solutions, which are then used in a VNSB algorithm. Numerical results on benchmark instances show that the proposed approach finds good-quality solutions in a reasonable amount of time.
The goal of this paper is to evaluate the outlier identification performance of Iterative Data Snooping (IDS) and the L1-norm in leveling networks, considering the redundancy of the network and the number and size of the outliers. For this purpose, several Monte Carlo experiments were conducted on three different leveling network configurations. In addition, a new way to compare the results of IDS, which is based on Least Squares (LS) residuals, with robust estimators such as the L1-norm has also been developed and presented. Two different scenarios were considered in that comparison: (i) IDS and L1-norm evaluated with the same threshold values; (ii) IDS and L1-norm compared at the same false positive rates. In the latter case, a Monte Carlo approach was applied to control the false positive rates. Which of the two performs better depends on the viewpoint. From the perspective of the success rate alone, L1-norm performs better than IDS for networks with low redundancy (r̄ < 0.5), especially when more than one outlier is present in the dataset. In terms of the trade-off between false positive rate and outlier identification success rate, however, IDS performs better than L1-norm. In that case, IDS with a critical value of 3.29 has the best cost-benefit ratio, independently of the leveling network configuration and the number and size of outliers.
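A common formulation of Iterative Data Snooping over LS residuals, as described above, can be sketched as follows. This is a minimal illustration assuming unit observation weights and a known noise level; the function name and defaults are chosen for this example, not taken from the paper:

```python
import numpy as np

def iterative_data_snooping(A, l, k=3.29, sigma=1.0):
    """Sketch of IDS: run an LS adjustment, flag the observation whose
    |normalized residual| is largest and exceeds the critical value k,
    remove it, and repeat until no residual exceeds k."""
    m, n = A.shape
    active = np.arange(m)          # indices of observations still in use
    flagged = []
    while len(active) > n:
        Aa, la = A[active], l[active]
        # Residual projector: v = (I - A (A^T A)^+ A^T) l
        Q = np.eye(len(active)) - Aa @ np.linalg.pinv(Aa.T @ Aa) @ Aa.T
        v = Q @ la
        sd = sigma * np.sqrt(np.clip(np.diag(Q), 1e-12, None))
        w = np.abs(v / sd)          # normalized residuals (w-test statistics)
        i = int(np.argmax(w))
        if w[i] <= k:
            break
        flagged.append(int(active[i]))
        active = np.delete(active, i)
    return flagged
```

With the critical value 3.29 mentioned in the abstract, a single gross error in an otherwise clean dataset is typically flagged in the first iteration.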
L1-norm adjustment corresponds to the minimization of the sum of weighted absolute residuals. Unlike Least Squares, it is a robust estimator, i.e., insensitive to outliers. In geodetic networks, the main application of the L1-norm is the identification of outliers. There is no general analytical expression for its solution; linear programming is the usual strategy, but it demands decorrelated observations. In the context of Least Squares, it is well known that applying the Cholesky factorization decorrelates observations without changing the results of the adjustment. However, there is no mathematical proof that this holds for the L1-norm. Another aspect of the L1-norm is that equal weights may guarantee maximum robustness in practice, so they are also expected to be more effective in the identification of outliers. This work presents contributions on two aspects of the L1-norm adjustment of leveling networks: the validity of the Cholesky factorization for the decorrelation of observations, and the effectiveness of a stochastic model with equal observation weights for the identification of outliers. Two experiments were conducted on leveling networks simulated by the Monte Carlo method. In the first, results indicate that the application of the factorization as previously performed in the literature seems inappropriate and needs further investigation. In the second, comparisons were made between L1 with equal weights and L1 with weights proportional to the inverse of the length of the leveling line. Results show that the first approach was more effective for the identification of outliers and is therefore an interesting alternative for the stochastic model in L1-norm adjustment. Besides providing better performance in the identification of outliers, it makes the need for observation decorrelation irrelevant.
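The linear-programming strategy mentioned above can be sketched by splitting each residual into two non-negative parts, a standard reformulation of the weighted-absolute-value objective. This is a minimal illustration assuming SciPy's `linprog`; the function name is chosen for this example:

```python
import numpy as np
from scipy.optimize import linprog

def l1_adjust(A, l, p=None):
    """Minimum L1-norm adjustment: minimize sum_i p_i |v_i| with v = l - A x.
    Each residual is split as v = u_plus - u_minus, u_plus, u_minus >= 0,
    turning the problem into a linear program."""
    m, n = A.shape
    p = np.ones(m) if p is None else np.asarray(p, float)  # equal weights by default
    # Decision variables: [x (free), u_plus, u_minus]
    c = np.concatenate([np.zeros(n), p, p])
    A_eq = np.hstack([A, np.eye(m), -np.eye(m)])  # A x + u_plus - u_minus = l
    bounds = [(None, None)] * n + [(0.0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=l, bounds=bounds, method="highs")
    x = res.x[:n]
    return x, l - A @ x
```

For repeated measurements of a single height, this estimator returns the (weighted) median, which illustrates its insensitivity to a single gross error.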
Robust estimators often lack a closed-form expression for the computation of their residual covariance matrix, which is a prerequisite for obtaining critical values for normalized residuals. We present an approach based on Monte Carlo simulation to compute the residual covariance matrix and critical values for robust estimators. Although initially designed for robust estimators, the approach can be extended to other adjustment procedures. In this sense, the proposal was applied to both the well-known minimum L1-norm and least squares on three different leveling network geometries. The results show that (1) the covariance matrix of the residuals changes with the estimator; (2) critical values for the minimum L1-norm based on a false positive rate cannot be derived from well-known test distributions; (3) in contrast to critical values for extreme normalized residuals in least squares, critical values for the minimum L1-norm do not necessarily increase as network redundancy increases.
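The Monte Carlo idea can be sketched generically: simulate outlier-free observations, run the chosen estimator, and accumulate residuals to estimate their covariance and an empirical critical value for the maximum normalized residual. This is a minimal sketch, not the authors' implementation; all names, defaults, and the noise model (i.i.d. Gaussian) are assumptions:

```python
import numpy as np

def mc_residual_cov(A, estimator, sigma=1.0, trials=5000, seed=0):
    """Monte Carlo estimate of the residual covariance matrix for an
    arbitrary estimator, using outlier-free simulated observations."""
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    V = np.empty((trials, m))
    for k in range(trials):
        e = rng.normal(0.0, sigma, m)   # simulated noise-only observations
        x = estimator(A, e)
        V[k] = e - A @ x                # residuals of this trial
    return np.cov(V, rowvar=False)

def mc_critical_value(A, estimator, alpha=0.001, sigma=1.0, trials=5000, seed=1):
    """Empirical critical value for max |normalized residual| that keeps
    the family-wise false positive rate at alpha."""
    Qv = mc_residual_cov(A, estimator, sigma, trials, seed)
    sd = np.sqrt(np.clip(np.diag(Qv), 1e-12, None))
    rng = np.random.default_rng(seed + 1)
    m = A.shape[0]
    stats = np.empty(trials)
    for k in range(trials):
        e = rng.normal(0.0, sigma, m)
        x = estimator(A, e)
        stats[k] = np.max(np.abs((e - A @ x) / sd))
    return np.quantile(stats, 1.0 - alpha)
```

For least squares the simulated covariance can be checked against the analytic form sigma² (I − A(AᵀA)⁻¹Aᵀ); for the minimum L1-norm, where no such closed form exists, the same code applies unchanged with a different `estimator`.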