Recent work in graph models has found that probabilistic hyperedge replacement grammars (HRGs) can be extracted from graphs and used to generate new random graphs whose graph properties and substructures are close to those of the original. In this paper, we show how to add latent variables to the model, trained using Expectation-Maximization, to generate still better graphs, that is, ones that generalize better to the test data. We evaluate the new method by separating training and test graphs, building the model on the former and measuring the likelihood of the latter, as a more stringent test of how well the model can generalize to new graphs. On this metric, we find that our latent-variable HRGs consistently outperform several existing graph models and provide interesting insights into the building blocks of real-world networks.