This paper reports the use of response surface methodology (RSM) and reinforcement learning (RL) to solve the travelling salesman problem (TSP). In contrast to heuristic approaches for estimating the RL parameters, the method proposed here allows a systematic estimation of the learning rate and the discount factor. The Q-learning and SARSA algorithms were applied to standard problems from the TSPLIB library. Computational results demonstrate that the use of RSM produces better solutions for both symmetric and asymmetric TSP instances.
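The tabular Q-learning scheme this abstract refers to can be sketched on a toy TSP instance; the state/action encoding (state = current city, action = next unvisited city), the negative-distance reward, and all parameter values below are illustrative assumptions of this sketch, not the paper's exact model:

```python
import random

# Tiny symmetric TSP instance: distance matrix over cities 0..3.
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
N = len(DIST)

def tour_length(tour):
    """Total length of a closed tour (returns to the start city)."""
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def q_learning_tsp(alpha=0.5, gamma=0.8, epsilon=0.1, episodes=2000, seed=0):
    """Learn city-to-city transitions with a tabular Q-learning update."""
    rng = random.Random(seed)
    Q = [[0.0] * N for _ in range(N)]   # Q[current_city][next_city]
    best = None
    for _ in range(episodes):
        tour = [0]                       # every episode starts at city 0
        while len(tour) < N:
            s = tour[-1]
            unvisited = [c for c in range(N) if c not in tour]
            if rng.random() < epsilon:   # epsilon-greedy exploration
                a = rng.choice(unvisited)
            else:
                a = max(unvisited, key=lambda c: Q[s][c])
            # Reward is the negative travel distance; bootstrap on the
            # best action still available from the chosen city.
            remaining = [c for c in range(N) if c not in tour and c != a]
            target = max((Q[a][c] for c in remaining), default=0.0)
            Q[s][a] += alpha * (-DIST[s][a] + gamma * target - Q[s][a])
            tour.append(a)
        if best is None or tour_length(tour) < tour_length(best):
            best = tour
    return best
```

Here the learning rate `alpha` and discount factor `gamma` are exactly the two parameters the paper estimates systematically via RSM instead of fixing heuristically.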
In this paper, we present a technique to tune the reinforcement learning (RL) parameters applied to the sequential ordering problem (SOP) using the Scott-Knott method. RL has been widely recognized as a powerful tool for combinatorial optimization problems, such as the travelling salesman and multidimensional knapsack problems. Less attention, however, has been paid to solving the SOP. Here, we have developed an RL structure to solve the SOP that can partially fill that gap. Two traditional RL algorithms, Q-learning and SARSA, have been employed. Three learning specifications have been adopted to analyze the performance of the RL: algorithm type, reinforcement learning function, and parameters. A complete factorial experiment and the Scott-Knott method are used to find the best combination of factor levels whenever the source of variation is statistically significant in the analysis of variance. The performance of the proposed RL has been tested using benchmarks from the TSPLIB library. In general, the selected parameters indicate that SARSA outperforms Q-learning.
Abstract: In reinforcement learning algorithms, the learning rate (α) and the discount factor (γ) can be set to any value in the interval between 0 and 1. Thus, adopting the concepts of logistic regression, a statistical methodology is proposed for analyzing the influence of varying α and γ in the Q-learning and SARSA algorithms. As a case study, reinforcement learning was applied to autonomous navigation experiments. The analysis of the results showed that simple variations in α and γ can directly affect the performance of reinforcement learning.
Introduction: The reinforcement learning (RL) technique is widely applied in robotics to solve a variety of problems and situations [1]. The goal of RL is to enable an agent to learn to make decisions from experiences of success and failure in its environment.
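The Q-learning and SARSA update rules whose learning rate (α) and discount factor (γ) the study above varies differ only in their bootstrap target. A minimal sketch, with the Q-table stored as a dict keyed by (state, action) pairs (an assumption of this sketch, not the study's implementation):

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha, gamma):
    """Off-policy update: bootstrap on the greedy action in s_next."""
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    """On-policy update: bootstrap on the action actually taken next."""
    target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```

Both rules move the estimate a fraction α toward the target, and γ weights future reward against immediate reward, which is why small changes in either can directly affect learning performance.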
Abstract: In this work, the goal is to analyze the performance of Reinforcement Learning in solving the Multidimensional Knapsack Problem. To this end, a Reinforcement Learning model structured in states, actions, and rewards is proposed. In addition, the computational experiments presented allow us to analyze the sensitivity of the parameters of the Q-learning algorithm in solving this type of combinatorial optimization problem.
Keywords: Reinforcement Learning. Combinatorial Optimization. Multidimensional Knapsack Problem.
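One plausible states/actions/rewards encoding of the kind described above — state = the set of items already packed, action = add a feasible item, reward = that item's profit — can be sketched as follows; the instance, encoding, and parameter values are illustrative assumptions, not the paper's exact model:

```python
import random

# Toy multidimensional knapsack: 4 items, 2 resource dimensions.
PROFIT = [10, 7, 5, 4]
WEIGHT = [[4, 3], [3, 2], [2, 4], [1, 1]]   # item weights per dimension
CAP = [6, 5]                                 # capacity per dimension

def fits(chosen, item):
    """Check that adding `item` keeps every dimension within capacity."""
    return all(sum(WEIGHT[i][d] for i in chosen) + WEIGHT[item][d] <= CAP[d]
               for d in range(len(CAP)))

def q_learning_mkp(alpha=0.4, gamma=0.9, epsilon=0.2, episodes=500, seed=1):
    rng = random.Random(seed)
    Q = {}                       # Q-table keyed by (state, action)
    best_profit = 0
    for _ in range(episodes):
        chosen = frozenset()     # state: the set of packed items
        while True:
            actions = [i for i in range(len(PROFIT))
                       if i not in chosen and fits(chosen, i)]
            if not actions:
                break            # no item fits: episode ends
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda i: Q.get((chosen, i), 0.0))
            nxt = chosen | {a}
            nxt_actions = [i for i in range(len(PROFIT))
                           if i not in nxt and fits(nxt, i)]
            # Reward = the packed item's profit; bootstrap on the best
            # action available from the successor state.
            target = max((Q.get((nxt, i), 0.0) for i in nxt_actions),
                         default=0.0)
            q = Q.get((chosen, a), 0.0)
            Q[(chosen, a)] = q + alpha * (PROFIT[a] + gamma * target - q)
            chosen = nxt
        best_profit = max(best_profit, sum(PROFIT[i] for i in chosen))
    return best_profit
```

Varying `alpha`, `gamma`, and `epsilon` here is the kind of sensitivity analysis the abstract refers to.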
The traveling salesman problem (TSP) is one of the best-known combinatorial optimization problems. Many methods derived from the TSP have been applied to study autonomous vehicle route planning with fuel constraints. Nevertheless, less attention has been paid to reinforcement learning (RL) as a potential method for solving refueling problems. This paper employs RL to solve the traveling salesman problem with refueling (TSPWR). The technique proposes a model (actions, states, reinforcements) and the RL-TSPWR algorithm. Focus is given to the analysis of RL parameters and to the influence of refueling on route learning for fuel-cost optimization. Two RL algorithms, Q-learning and SARSA, are compared. In addition, RL parameter estimation is performed by Response Surface Methodology, Analysis of Variance, and the Tukey test. The proposed method achieves the best solution in 15 out of 16 case studies.
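A hypothetical sketch of how refueling can enter the reinforcement signal: the cost of traversing an edge includes a fill-up at the current city's fuel price whenever the remaining fuel cannot cover the edge. The prices, tank size, distance-equals-fuel assumption, and fill-to-full policy are all assumptions of this illustration, not the paper's TSPWR model:

```python
# Hypothetical per-move cost for the TSP with refueling (TSPWR).
FUEL_PRICE = [1.0, 1.2, 0.9, 1.1]   # price per fuel unit at each city
TANK = 10.0                          # tank capacity, in fuel units

def step_cost(city, nxt, fuel, dist):
    """Return (monetary cost, remaining fuel) for moving city -> nxt.

    Distance is assumed to equal fuel consumed (1 unit per distance unit);
    the vehicle fills the tank before departing if it cannot reach nxt.
    """
    cost = 0.0
    if fuel < dist[city][nxt]:
        cost = (TANK - fuel) * FUEL_PRICE[city]   # fill up at local price
        fuel = TANK
    return cost, fuel - dist[city][nxt]
```

In an RL formulation of this kind, the reinforcement for the move would be `-cost`, so that maximizing return minimizes total fuel expenditure over the route.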
Deep Learning methods have important applications in the building construction image classification field. One challenge of this application is the adoption of Convolutional Neural Networks on small datasets. This paper proposes a rigorous methodology for tuning Data Augmentation hyperparameters in Deep Learning for building construction image classification, especially for vegetation recognition on facades and in roof structure analysis. To that end, Logistic Regression models were used to analyze the performance of Convolutional Neural Networks trained with 128 combinations of image transformations. Experiments were carried out with three Deep Learning architectures from the literature using the Keras library. The results show that the recommended configuration (Height Shift Range = 0.2; Width Shift Range = 0.2; Zoom Range = 0.2) reached the best accuracy in the test step of the first case study. In addition, the hyperparameters recommended by the proposed method also achieved the best test results for the second case study.
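The 128 screened combinations correspond to 2^7 on/off settings of seven transformations. The sketch below enumerates such a design, with flag names borrowed from Keras's `ImageDataGenerator` arguments as an assumption; the paper's exact transformation set is not listed here:

```python
from itertools import product

# Seven on/off data-augmentation transformations give 2**7 = 128
# training configurations; names follow Keras ImageDataGenerator
# arguments (an assumption of this sketch).
FLAGS = ["rotation_range", "width_shift_range", "height_shift_range",
         "shear_range", "zoom_range", "horizontal_flip", "vertical_flip"]

def all_configurations(on_value=0.2):
    """Enumerate every on/off combination of the seven transformations."""
    configs = []
    for bits in product([False, True], repeat=len(FLAGS)):
        # "_range" flags take a magnitude when on; flips are booleans.
        cfg = {f: (on_value if f.endswith("_range") else True)
               for f, on in zip(FLAGS, bits) if on}
        configs.append(cfg)
    return configs

# The configuration the study recommends, in the same keyword form:
recommended = {"height_shift_range": 0.2, "width_shift_range": 0.2,
               "zoom_range": 0.2}
```

Each such dict could then be passed as keyword arguments when constructing the augmentation pipeline for one training run of the screening experiment.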