The quality of information that informs decisions in expert domains such as law enforcement and national security must often be assessed from meta-informational cues such as source reliability and information credibility. Across two experiments with intelligence analysts (n = 74) and non-experts (n = 175), participants rated the accuracy, informativeness, trustworthiness, and usefulness of information varying in source reliability and information credibility, conveyed using the Admiralty Code, an information evaluation system widely used in the defence and security domain since the 1940s. Accuracy, informativeness, and likelihood of use were elicited as repeated measures to examine intra-individual reliability. Across experiments, intra-individual reliability was higher when the levels of source reliability and information credibility matched than when they opposed each other (one low, one high). In Experiment 2, intra-individual unreliability was associated with worse performance on the Cognitive Reflection Test. Trustworthiness ratings also depended more on source reliability than on information credibility. Finally, the likelihood of using information was consistently predicted by accuracy ratings, not by judged informativeness or trustworthiness. Where possible, samples were demographically diverse. These findings call into question the ability of both experts and novices to use information evaluation systems reliably.