State-of-the-art semantic parsers rely on autoregressive decoding, emitting one symbol at a time. When tested against complex databases that are unobserved at training time (zero-shot), the parser often struggles to select the correct set of database constants in the new database, due to the local nature of decoding. In this work, we propose a semantic parser that globally reasons about the structure of the output query to make a more contextually informed selection of database constants. We use message-passing through a graph neural network to softly select a subset of database constants for the output query, conditioned on the question. Moreover, we train a model to rank queries based on the global alignment of database constants to question words. We apply our techniques to the current state-of-the-art model for SPIDER, a zero-shot semantic parsing dataset with complex databases, increasing accuracy from 39.4% to 47.4%.
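The soft selection of database constants via message passing can be illustrated with a minimal sketch. The graph structure, the averaging message function, and the sigmoid scoring against a question encoding below are all illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def soft_select(node_vecs, adj, question_vec, steps=2):
    """Illustrative sketch: run message passing over the schema graph
    (nodes = tables/columns, edges = e.g. foreign-key links), then
    produce a soft relevance score in [0, 1] for each database constant,
    conditioned on an encoding of the question."""
    h = node_vecs
    for _ in range(steps):
        # Each node averages its neighbors' states and mixes the result
        # with its own state (a simple, assumed update rule).
        degree = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
        msgs = (adj @ h) / degree
        h = np.tanh(h + msgs)
    # Soft selection: sigmoid of similarity to the question encoding.
    scores = h @ question_vec
    return 1.0 / (1.0 + np.exp(-scores))

# Toy schema graph: 4 nodes connected in a chain.
rng = np.random.default_rng(0)
nodes = rng.normal(size=(4, 8))
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
question = rng.normal(size=8)

probs = soft_select(nodes, adj, question)
print(probs.shape)
```

Because the selection is soft (a probability per constant rather than a hard subset), it can condition the downstream decoder without committing early to a wrong constant.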