Reliability refers to a measure's consistency: the extent to which it yields the same result on repeated trials. Reliability matters because, without it, the results of research lack replicability, which is a foundation of the scientific method. Reliability cannot be calculated exactly; conceptually, it is the correlation of an item, scale, or instrument with a hypothetical one that truly measures what it is supposed to. That is why calculations of reliability are properly described as estimates of reliability. There are many ways to compute such estimates, and each provides a different view of reliability. Although there are many ways to estimate reliability, four are particularly common (Carmines and Zeller 1991; Fink 1995).

First, internal consistency provides a reliability estimate based on grouping questions in a questionnaire that measure the same concept. The most common way to measure internal consistency is Cronbach's alpha, which, in brief, splits a measure's questions in every possible way and computes correlation values for all of the splits. As with any correlation, the closer the value is to 1, the more internally reliable the measure is estimated to be. Second, split-half reliability provides an estimate based on the correlation between two equivalent forms of the scale; the Spearman-Brown coefficient is typically used to determine this type of estimate. Third, test-retest reliability provides an estimate based on the correlation between multiple administrations of the same measure (or parts of it); this method also makes use of the Spearman-Brown coefficient. Lastly, inter-rater reliability is an estimate based on correlations of scores between two or more raters who answer the same measure (or parts of it).

These four typical methods represent different meanings of reliability, and some studies use multiple approaches depending on what they are trying to estimate.
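The four estimates described above can be sketched in plain Python. Everything here is an illustrative assumption: the respondent scores, rater scores, and function names are made up for the example, and the formulas (Cronbach's alpha, the Spearman-Brown step-up, Pearson correlation) are the textbook versions rather than output from any statistics library.

```python
def variance(xs):
    # Population variance of a list of scores (illustrative helper).
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def pearson(xs, ys):
    # Pearson correlation between two equal-length score lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def cronbach_alpha(rows):
    # Internal consistency: rows = one list of item scores per respondent.
    k = len(rows[0])
    item_vars = sum(variance([r[i] for r in rows]) for i in range(k))
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

def spearman_brown(r):
    # Step a half-test correlation up to the full-test length.
    return 2 * r / (1 + r)

# Hypothetical questionnaire: 5 respondents x 4 items on a 1-5 scale.
scores = [[4, 5, 4, 5],
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [3, 3, 2, 3],
          [4, 4, 4, 5]]

alpha = cronbach_alpha(scores)

# Split-half: correlate odd-item vs. even-item totals, then correct
# the correlation upward with Spearman-Brown.
odd = [r[0] + r[2] for r in scores]
even = [r[1] + r[3] for r in scores]
split_half = spearman_brown(pearson(odd, even))

# Test-retest: correlate totals from two administrations of the same
# measure (the second administration's totals are invented here).
time1 = [sum(r) for r in scores]
time2 = [17, 10, 19, 12, 16]
test_retest = pearson(time1, time2)

# Inter-rater: correlate two raters' scores for the same five responses.
inter_rater = pearson([4, 2, 5, 3, 4], [5, 2, 4, 3, 4])
```

On this made-up data all four estimates come out as high positive correlations, consistent with the interpretation above that values closer to 1 indicate a more reliable measure.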