Query-based open-domain NLP tasks require information synthesis from long and diverse web results. Current approaches extractively select portions of web text as input to Sequence-to-Sequence models, using methods such as TF-IDF ranking. We propose constructing a local graph-structured knowledge base for each query, which compresses the web search information and reduces redundancy. We show that by linearizing the graph into a structured input sequence, models can encode the graph representations within a standard Sequence-to-Sequence setting. For two generative tasks with very long text input, long-form question answering and multi-document summarization, feeding graph representations as input achieves better performance than using retrieved text portions.
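To make the graph construction and linearization steps concrete, below is a minimal Python sketch. It assumes a triple-based local graph built from (subject, relation, object) tuples and hypothetical marker tokens <sub>, <rel>, and <obj>; the abstract does not specify the exact triple format or special-token vocabulary, so treat this as an illustration of the idea rather than the paper's implementation.

```python
# Sketch: build a local knowledge graph from extracted triples and
# linearize it into one token sequence for a standard Seq2Seq encoder.
# The triple format and the <sub>/<rel>/<obj> markers are assumptions.

from collections import OrderedDict

def build_local_graph(triples):
    """Deduplicate (subject, relation, object) triples extracted from
    web search results, grouping relation-object edges by subject."""
    graph = OrderedDict()
    for subj, rel, obj in triples:
        graph.setdefault(subj, OrderedDict())
        # Duplicate edges from redundant web text collapse to one entry.
        graph[subj].setdefault((rel, obj), None)
    return graph

def linearize(graph):
    """Flatten the graph into a structured input sequence that a
    standard encoder-decoder model can consume."""
    tokens = []
    for subj, edges in graph.items():
        tokens += ["<sub>", subj]
        for rel, obj in edges:
            tokens += ["<rel>", rel, "<obj>", obj]
    return " ".join(tokens)

triples = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "treats", "headache"),  # duplicate from another page
    ("aspirin", "class", "NSAID"),
]
print(linearize(build_local_graph(triples)))
# <sub> aspirin <rel> treats <obj> headache <rel> class <obj> NSAID
```

Grouping edges under a shared subject token keeps the sequence shorter than repeating the subject for every triple, which reflects the compression and redundancy reduction the abstract describes.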