Many applications affecting human lives rely on models that have come to be known under the umbrella of machine learning and artificial intelligence. These AI models are usually complicated mathematical functions that make decisions and predictions by mapping from an input space to an output space. Stakeholders want to know the rationales behind models' decisions, and that understanding requires knowledge of the models' functional behavior. We study this functional behavior in relation to the data used to create the models. On this topic, scholars have often assumed that models do not extrapolate, i.e., that they learn from their training samples and process new inputs by interpolation. This assumption is questionable: we show that models extrapolate frequently, and that the extent of extrapolation varies and can be socially consequential. We demonstrate that extrapolation occurs for a substantial portion of datasets, more often than one might consider reasonable. How can we trust models if we do not know whether they are extrapolating? Given a model trained to recommend clinical procedures for patients, can we trust its recommendation when it considers a patient older or younger than every sample in its training set? If the training set consists mostly of White patients, how do we measure the extent to which we can trust its recommendations for Black and Hispanic patients? How do we know whether extrapolation is significant for a given sample if we do not compare that sample to the training data? Along which dimension (race, gender, or age) does extrapolation happen? Even if a model is trained on people of all races, it may still extrapolate in significant ways related to race. The leading question, then, is: to what extent can we trust AI models when they process inputs that fall outside their training set? This paper investigates several social applications of AI and shows how models extrapolate without notice. The difficulty of auditing is that datasets have many features, which makes them hard to review individually; to address this, we use a systematic method of geometric analysis. We also examine different sub-spaces of extrapolation for specific individuals subject to AI models and report how these extrapolations can be interpreted, not mathematically, but from a humanistic point of view. We recommend that AI pipelines include this auditing approach. When extrapolation is statistically excessive, the model's output should be flagged for review by human experts, along with information about the extent and dimensions of the extrapolation.

CCS Concepts: • Computing methodologies → Machine learning; • Social and professional topics → Governmental regulations; • Theory of computation → Randomness, geometry and discrete structures.