Such questions include searching for specific subgraph structures, investigating the connectivity of specific neuron cell types or categories, proofreading connectomes for accuracy, and generating summary statistics on the graph as a whole [12,13,3,14,15,16,10]. Many graphs generated by the connectomics community in recent years span multiple gigabytes on disk [2,10], rendering conventional graph toolkits, such as the common NetworkX library [17] (or its counterparts in other programming languages), underpowered for the needs of the scientific community. These tools, which often require all graph data to be stored in RAM, would demand impractically expensive compute hardware to run fully in memory, or impractically long runtimes when swapping data between memory and disk.
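To make the memory argument concrete, a back-of-envelope sketch can estimate the in-memory footprint of a connectome-scale adjacency structure. The per-node and per-edge byte costs below are illustrative assumptions (CPython dict-of-dicts storage, as NetworkX uses, carries tens of bytes of overhead per entry), not measured figures for any particular dataset:

```python
def approx_adjacency_bytes(num_nodes, num_edges,
                           bytes_per_node=100, bytes_per_edge=60):
    """Rough lower bound on RAM for a dict-of-dicts adjacency structure.

    bytes_per_node and bytes_per_edge are assumed overheads (hash-table
    slots, pointers, boxed keys/values) typical of CPython containers;
    real NetworkX graphs with attribute dicts cost considerably more.
    """
    return int(num_nodes * bytes_per_node + num_edges * bytes_per_edge)

# Illustrative connectome scale: ~1e5 neurons, ~1e8 synapses.
total_gb = approx_adjacency_bytes(1e5, 1e8) / 1e9
print(f"~{total_gb:.0f} GB for the adjacency structure alone")
```

Even under these optimistic assumptions, a graph with a hundred million synapse-level edges needs several gigabytes for its bare adjacency structure, before any node or edge attributes are attached, which is why fully in-memory toolkits become impractical at this scale.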