In this chapter, we provide an overview of this book on methods for analyzing large neuroimaging datasets. There is growing recognition in the field of neuroimaging that sample sizes must increase drastically to achieve adequate statistical power and reproducibility. Several large neuroimaging studies and databases, such as OpenNeuro and the Adolescent Brain Cognitive Development (ABCD) Study, have emerged, offering open access to vast amounts of data. However, there is a dearth of practical guidance for working with large neuroimaging datasets, a deficit that this book seeks to address. With an emphasis on hands-on instruction, chapters contain worked examples using open-access data. The book is organized as follows. In Section 1, the reader is shown how to access and download large datasets and how to compute at scale. In Section 2, chapters cover best practices for working with large data, including how to build reproducible pipelines, how to use Git for collaboration, and how to make EEG and fMRI data shareable and standardized. In Section 3, chapters describe how to preprocess structural and functional data at scale, incorporating practical advice on the potential trade-offs of standardization. In Section 4, chapters describe various toolboxes for interrogating large neuroimaging datasets, including those based on machine learning and deep learning approaches; these methods can be applied to connectomic and region-of-interest data. Finally, the book contains a glossary of useful terms.