Artificial intelligence (AI) classification of images has shown considerable promise for dermatology applications in clinical photography, dermoscopy, and pathology for a number of years. However, a major challenge in these arenas is curating well-labeled data sets that can be shared publicly, owing to regulatory concerns as well as the common practice of using retrospective data sets that were not prospectively consented for research use. AI algorithm performance is directly related to the size and breadth of the data used for training, so methodology that expands the potential size and diversity of inputs and evaluates the generalizability of the resultant methods should be enthusiastically explored.

The study in this issue of JAMA Dermatology by Haggenmüller et al1 is an example of a proof-of-concept study that advances the field of dermatology AI by confronting the barriers to public data sharing and proposing federated learning (FL) as a methodology to overcome them. In the study, the authors curated prospectively acquired histopathologic whole-slide image data from 6 hospitals for a binary classification task: invasive melanoma vs nevi. They compared the FL setting with the 2 most widely used training settings: centralized and ensemble training. In the centralized setting, the data from multiple sources are pooled into a single data set to train a classification model. The authors rightly argue that bringing such data sets together is challenging due to data sharing and use permission issues. A potential solution to this problem is an ensemble training approach, in which models are trained separately at different centers using their own data and then combined through model ensembling. However, an important limitation of this approach is that each model is trained on a relatively small subset of the overall data set, making it susceptible to overfitting to the characteristics of the specific data site it was trained on.
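The ensemble approach described above can be illustrated with a minimal sketch. The `SiteModel` class and its `predict_proba` method are hypothetical stand-ins, not the authors' implementation; a real site model would map a whole-slide image to a melanoma probability.

```python
import numpy as np

# Hypothetical stand-in for a model trained independently at one hospital;
# a real model would return a melanoma probability from a whole-slide image.
class SiteModel:
    def __init__(self, bias):
        self.bias = bias

    def predict_proba(self, x):
        # Toy scoring rule: shift the input score by a site-specific bias.
        return min(1.0, max(0.0, x + self.bias))

def ensemble_predict(models, x):
    # At inference time every site model must run, so compute cost grows
    # with the number of contributing sites, and the final prediction is
    # a blend that is harder to attribute to any single model.
    return float(np.mean([m.predict_proba(x) for m in models]))

models = [SiteModel(b) for b in (-0.1, 0.0, 0.1)]
print(ensemble_predict(models, 0.6))
```

The averaging step is where the interpretability cost arises: the ensemble's output is a mixture of per-site predictions rather than the decision of any one trained model.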
Using an ensemble model also increases the computational complexity during inference, because it comprises multiple models. In addition, ensemble models present challenges regarding their interpretability, given that their outcomes result from a blend of predictions made by multiple individual models. The authors examine whether an FL model may address the challenges of both approaches. In FL, each contributing site independently trains a common model using only its own data, while communicating with a centralized server, or directly with the other sites, to send and receive periodic updates of the model parameters. In the study by Haggenmüller et al,1 the authors used an FL approach in which models from each site are collected and merged at a centralized server. The merged model is obtained by
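One common way to perform the server-side merge is a weighted average of the per-site parameters, as in the standard FedAvg algorithm; the sketch below assumes that scheme and uses invented toy parameter vectors, not the authors' actual merging rule or data.

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """Merge per-site model parameters by a size-weighted average (FedAvg-style).

    site_weights: list of parameter vectors, one per hospital.
    site_sizes: number of training examples at each site; larger sites
    contribute proportionally more to the merged model.
    """
    sizes = np.asarray(site_sizes, dtype=float)
    stacked = np.stack(site_weights)
    # Weight each site's parameters by its share of the total data,
    # then sum across sites to obtain the single merged parameter vector.
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# One communication round: each site trains locally, and only parameters
# (never patient data) are sent to the server and merged.
w_merged = federated_average(
    [np.array([0.2, 0.4]), np.array([0.6, 0.8])],
    site_sizes=[100, 300],
)
print(w_merged)
```

Unlike the ensemble setting, inference then uses a single merged model, and the raw histopathologic images never leave their originating hospitals.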