2022
DOI: 10.1609/aaai.v36i9.21214
Learning to Solve Routing Problems via Distributionally Robust Optimization

Abstract: Recent deep models for solving routing problems always assume a single distribution of nodes for training, which severely impairs their cross-distribution generalization ability. In this paper, we exploit group distributionally robust optimization (group DRO) to tackle this issue, where we jointly optimize the weights for different groups of distributions and the parameters for the deep model in an interleaved manner during training. We also design a module based on a convolutional neural network, which allows t…
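The interleaved scheme the abstract describes follows the usual group-DRO recipe: ascend on the group weights toward the currently hardest distribution group, then descend on the weight-averaged loss. Below is a minimal PyTorch sketch of that recipe under stated assumptions; `model.loss`, the step size `eta_q`, and all other names are illustrative, not the paper's actual implementation.

```python
import torch

def group_dro_step(model, optimizer, group_batches, q, eta_q=0.01):
    """One interleaved group-DRO update (sketch, not the paper's code).

    group_batches: one mini-batch per distribution group.
    q: tensor of group weights on the probability simplex.
    """
    # Per-group losses; model.loss is an assumed scalar-loss helper.
    losses = torch.stack([model.loss(batch) for batch in group_batches])

    # Step 1: exponentiated-gradient ascent on the group weights,
    # shifting mass toward the group with the largest loss.
    with torch.no_grad():
        q = q * torch.exp(eta_q * losses)
        q = q / q.sum()

    # Step 2: gradient descent on the q-weighted loss w.r.t. model parameters.
    optimizer.zero_grad()
    (q * losses).sum().backward()
    optimizer.step()
    return q
```

Here q would start uniform, e.g. `torch.full((num_groups,), 1.0 / num_groups)`; the `torch.no_grad()` block keeps the weight update out of the model's computation graph.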

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1
1
1
1

Citation Types

0
10
0

Year Published

2023
2023
2024
2024

Publication Types

Select...
5
1

Relationship

1
5

Authors

Journals

citations
Cited by 8 publications
(10 citation statements)
references
References 17 publications
0
10
0
Order By: Relevance
“…This is essential to ensure the solver's practical applicability and usefulness in real-world scenarios. Besides variation in instance scales, we conduct zero-shot generalization tests on several distributions, including the basic uniform distribution, the "mixed" distribution, and the "clustered" distribution (Jiang et al. 2022).…”
Section: Results on the TSP
Citation type: mentioning, confidence: 99%
“…We also use TSP instances sampled from different distributions in the testing process. These TSP instances follow the "mixed" or "clustered" distributions proposed and tested in Jiang et al. (2022) and Bi et al. (2022). We employ these instances to assess the generalization ability of models when solving TSP instances from different distributions.…”
Section: Problem Definition
Citation type: mentioning, confidence: 99%
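For concreteness, here is one common way such test distributions are generated in the unit square; the cluster count and noise scale below are illustrative assumptions, not necessarily the exact settings of Jiang et al. (2022) or Bi et al. (2022).

```python
import numpy as np

def uniform_instance(n, rng):
    # n node coordinates drawn uniformly from the unit square.
    return rng.random((n, 2))

def clustered_instance(n, rng, n_clusters=3, scale=0.05):
    # Uniform cluster centers, Gaussian noise around them, clipped to [0, 1]^2.
    centers = rng.random((n_clusters, 2))
    idx = rng.integers(n_clusters, size=n)
    pts = centers[idx] + rng.normal(0.0, scale, size=(n, 2))
    return np.clip(pts, 0.0, 1.0)

def mixed_instance(n, rng):
    # Half the nodes uniform, half clustered.
    half = n // 2
    return np.vstack([uniform_instance(half, rng),
                      clustered_instance(n - half, rng)])

rng = np.random.default_rng(0)
tsp_test = {name: gen(100, rng) for name, gen in
            [("uniform", uniform_instance),
             ("clustered", clustered_instance),
             ("mixed", mixed_instance)]}
```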
“…End-to-end models often deliver solutions comparable to learning-augmented models with reduced inference time, and are more researched in the literature. Some of the above models are further enhanced in terms of generalization across different distributions or sizes [31], [6], [9], [10], [32], [11], [12], [13], [14]. However, they simply rely on additional training on instances with manually specified distributions (e.g., uniform, Gaussian, or diagonal distributions) or sizes (e.g., random numbers of nodes within [50, 200]).…”
Section: Deep Models for VRPs
Citation type: mentioning, confidence: 99%
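As a hypothetical illustration of the training regime this survey criticizes, the sketch below draws each training instance with a random size in [50, 200] and one of the manually specified distributions; every parameter here is an assumption for illustration only.

```python
import numpy as np

DISTS = ("uniform", "gaussian", "diagonal")

def sample_training_instance(rng):
    n = int(rng.integers(50, 201))            # random size in [50, 200]
    dist = DISTS[rng.integers(len(DISTS))]    # hand-picked distribution family
    if dist == "uniform":
        pts = rng.random((n, 2))
    elif dist == "gaussian":
        pts = np.clip(rng.normal(0.5, 0.15, (n, 2)), 0.0, 1.0)
    else:  # diagonal: nodes scattered near the line y = x
        t = rng.random(n)
        pts = np.clip(np.stack([t, t], 1) + rng.normal(0.0, 0.05, (n, 2)),
                      0.0, 1.0)
    return dist, pts

rng = np.random.default_rng(1)
dist, instance = sample_training_instance(rng)
```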
“…While conventional heuristics are often manually crafted through extensive tuning, recent deep models draw upon the power of neural networks to automate algorithmic design for solving VRPs. The learned heuristics have shown an advantageous balance between computational efficiency and solution quality [5], [6], [7], [8].…”
Section: Introduction
Citation type: mentioning, confidence: 99%
“…Among these attempts, most focused on adding more diverse instances from different distributions to the training data, which is supposed to force the model to learn more robust and generalisable features. Such augmentation of training data may be achieved by adversarial robustness [192], group distributionally robust optimisation [193], and hardness-adaptive curriculum learning [194]. Different from the above methods, which learn to augment the training dataset, Bi et al. [195] proposed to tackle the cross-distribution generalisation issue by knowledge distillation.…”
Section: Enhancing Generalisation Capability
Citation type: mentioning, confidence: 99%