How can practitioners and clinicians know whether a prediction model trained at a different institution can be safely used on their patient population? There is a large body of evidence showing that small changes in the distribution of the covariates used by prediction models can cause them to fail when deployed in new settings. This specific kind of dataset shift, known as covariate shift, is a central challenge to implementing existing prediction models in new healthcare environments. One solution is to collect additional labels in the target population and then fine-tune the prediction model to adapt it to the characteristics of the new healthcare setting, a process often referred to as localization. However, collecting new labels can be expensive and time-consuming. To address these issues, we recast the core problem of model transportation in terms of uncertainty quantification, which allows one to know when a model trained in one setting may be safely used in a new healthcare environment of interest. Using methods from conformal prediction, we show how to transport models safely between settings in the presence of covariate shift, even when only covariates from the new setting of interest are available (i.e., no new labels). With this approach, the model returns a prediction set that quantifies its uncertainty and is guaranteed to contain the correct label with a user-specified probability (e.g., 90%), a property known as coverage. We show that a weighted conformal inference procedure based on density ratio estimation between the source and target populations produces prediction sets with the correct level of coverage on real-world data. This allows users to know whether a model’s predictions can be trusted on their population without the need to collect new labeled data.
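
To make the weighted conformal procedure concrete, the sketch below shows one way to implement split conformal prediction with density-ratio weights for a classification model. This is a minimal illustration under simplifying assumptions, not the implementation used in this work: the density ratio between target and source covariates is estimated here with a logistic-regression classifier, class labels are assumed to be integers 0..K-1, and the function names (estimate_density_ratio, weighted_quantile, conformal_set) are illustrative.

```python
# Minimal sketch of weighted split conformal prediction under covariate shift.
# Assumptions (not from the paper): density ratio estimated via a logistic
# regression "source vs. target" classifier; integer class labels 0..K-1.
import numpy as np
from sklearn.linear_model import LogisticRegression


def estimate_density_ratio(X_source, X_target):
    """Estimate w(x) = p_target(x) / p_source(x) with a probabilistic
    classifier that distinguishes target covariates from source covariates."""
    X = np.vstack([X_source, X_target])
    z = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    clf = LogisticRegression(max_iter=1000).fit(X, z)

    def ratio(X_new):
        p = clf.predict_proba(X_new)[:, 1]
        # Odds of "target" vs. "source", rescaled for the sample-size imbalance.
        return (p / (1 - p)) * (len(X_source) / len(X_target))

    return ratio


def weighted_quantile(scores, weights, alpha):
    """Level-(1 - alpha) quantile of a weighted empirical distribution of scores."""
    order = np.argsort(scores)
    scores, weights = scores[order], weights[order]
    cdf = np.cumsum(weights) / np.sum(weights)
    idx = np.searchsorted(cdf, 1 - alpha)
    return scores[idx] if idx < len(scores) else np.inf


def conformal_set(model, X_cal, y_cal, x_test, ratio, labels, alpha=0.1):
    """Prediction set for a classifier: all labels whose nonconformity
    score falls below the weighted quantile of the calibration scores."""
    # Nonconformity score: 1 - predicted probability of the observed label.
    proba_cal = model.predict_proba(X_cal)
    cal_scores = 1 - proba_cal[np.arange(len(y_cal)), y_cal]

    # Weights proportional to the density ratio at each calibration point
    # and at the test point, normalized to sum to one.
    w_cal = ratio(X_cal)
    w_test = ratio(x_test.reshape(1, -1))[0]
    weights = np.append(w_cal, w_test) / (w_cal.sum() + w_test)

    # The test point's unknown score is represented by +infinity, as in
    # standard weighted conformal inference.
    scores = np.append(cal_scores, np.inf)
    q = weighted_quantile(scores, weights, alpha)

    proba_test = model.predict_proba(x_test.reshape(1, -1))[0]
    return [y for y in labels if 1 - proba_test[y] <= q]
```

In this sketch, placing the test point's weight on an infinite score is what drives the finite-sample coverage guarantee of weighted conformal inference: the returned set contains the true label with probability at least 1 - alpha whenever the density ratio is well estimated.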