In a critical review of the heuristics used to deal with zero word frequencies, we show that four are suboptimal, one is good, and one may be acceptable. The four suboptimal strategies are discarding words with zero frequencies, giving words with zero frequencies a very low frequency, adding 1 to the frequency per million, and making use of the Good-Turing algorithm. The good algorithm is the Laplace transformation, which consists of adding 1 to each frequency count and increasing the total corpus size by the number of word types observed. A strategy that may be acceptable is to guess the frequency of absent words on the basis of other corpora and then to increase the total corpus size by the estimated summed frequency of the missing words. A comparison with the lexical decision times of the English Lexicon Project and the British Lexicon Project suggests that the Laplace transformation gives the most useful estimates (in addition to being easy to calculate). Therefore, we recommend it to researchers.

Keywords: Word frequency · Laplace transformation · Good-Turing algorithm · Zero frequency

One of the thorny issues in word recognition studies arises when researchers want to use words that are not present in their preferred word frequency list. Although it is tempting to assign such words a frequency of 0, this creates problems when one needs the logarithms of the frequencies, because the logarithm of 0 diverges to minus infinity and is therefore not returned by most calculators or software packages. As usual when confronted with this type of mathematical nuisance, psychology researchers have developed a number of heuristics, which are passed on from one generation to the next without much justification. The practice commonly elicits probing questions from new, critical students, but they rapidly learn to adapt when they realize that finding answers is not trivial and risks detracting from their real research. One would expect the providers of word frequency lists to give some guidance, but to our knowledge, this has not happened so far.

It might be argued that the problem of zero frequencies is likely to disappear in the near future, given that word frequency measures are calculated on increasingly large collections of materials. Indeed, one would not expect an interesting word to be absent from a corpus of more than one hundred billion words, such as the Google Books corpus (Michel et al., 2011). This is true, but analyses have indicated that frequency measures based on such large (Internet-based) corpora are not the best predictors of word-processing times in psycholinguistic studies. More variance in word-processing performance is accounted for by frequency estimates from smaller corpora that are more representative of the language to which participants in psychology experiments have been exposed (Brysbaert, Keuleers, & New, 2011). Although frequency measures based on very large corpora provide estimates for all words, they do not provide very good estimates.

There are two reasons why word frequencies from very large ...
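For concreteness, the following is a minimal sketch of the Laplace transformation described in the abstract: add 1 to each raw count, enlarge the corpus size by the number of observed word types, and convert to frequency per million. The function name `laplace_fpm`, the toy counts, and the corpus size are illustrative assumptions of ours, not values from the article.

```python
import math

def laplace_fpm(raw_counts, corpus_size):
    """Laplace-transformed frequency per million words.

    Adds 1 to every raw count and increases the corpus size by the
    number of word types observed, so words with a raw count of 0
    receive a small nonzero frequency and their log frequency is defined.
    """
    n_types = sum(1 for c in raw_counts.values() if c > 0)  # types observed in the corpus
    adjusted_size = corpus_size + n_types                   # enlarged token count
    return {w: (c + 1) / adjusted_size * 1_000_000
            for w, c in raw_counts.items()}

# Hypothetical toy counts: "zyzzyva" is absent from the corpus (count 0).
counts = {"the": 60_000, "platypus": 3, "zyzzyva": 0}
fpm = laplace_fpm(counts, corpus_size=1_000_000)
log_fpm = {w: math.log10(f) for w, f in fpm.items()}  # log10(0) never occurs
```

Under these assumptions, a word absent from a one-million-token corpus receives a frequency of roughly 1 per million rather than 0, so its logarithm can be computed like that of any other word.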