As artificial intelligence (AI) systems grow more sophisticated, some researchers propose using language models to replace human participants in psychological research. This letter argues, however, that positioning language models as stand-ins for human minds fundamentally mischaracterizes their capabilities. Unlike humans, language models lack inner experiences, singular identities, and unified cognition. They are instead multifaceted simulation tools capable of mimicking diverse behaviors without possessing intrinsic intentions or motivations. Treating their responses as proxies for human judgment therefore risks significant anthropomorphism. Furthermore, model outputs do not reliably reflect average human opinions, owing to deviations introduced by fine-tuning and other distortions. Overall, language models can serve as valuable role-playing agents, but they should not be equated with human minds.