Self-reports may be affected by two primary sources of distortion: content-related distortions (CRD) and content-unrelated distortions (CUD). CRD and CUD, however, may co-vary, and similar detection strategies have been used to capture both. Thus, we hypothesized that a scale developed to detect random responding (arguably, one of the most evident examples of CUD) would likely be sensitive to both CUD and, albeit to a lesser extent, CRD. Study 1 (N = 1,901) empirically tested this hypothesis by developing a random responding scale (RRS) for the recently introduced Inventory of Problems-29 (IOP-29; Viglione et al., 2017) and by testing it with both experimental feigners and honest controls. Results supported our hypothesis and offered some insight into how to pull apart CRD- from CUD-related variance. Study 2 (N = 700) then evaluated whether our RRS would perform similarly well with data from human participants instructed to respond at random versus computer-generated random data. Interestingly, the sensitivity of our RRS dropped dramatically when applied to the data from the human participants. Together with the results of additional analyses inspecting the response patterns of our human random responders, these findings pose a major question: is humans' random responding really random?