Kelkar et al1 eloquently described many of the real potential dangers of applying artificial intelligence (AI) to the routine care of patients diagnosed with cancer. I worry, however, that their crucial message will be lost in the arcane terms they use to describe the effect AI might have on the sacred principles of clinical ethics: human dignity, nonmaleficence, patient autonomy, and justice, which includes health equity. As examples, the authors discuss algorithm autonomy, avoidance of humanoid interfaces, perceived information asymmetry gaps, obfuscation of decision-making rationale, data absenteeism, technology tachyphylaxis, the uncanny valley, corrective justice, the epistemology of AI, and establishing an iterative and inclusive process.