Both genetic drift and natural selection cause the frequencies of alleles in a population to vary over time. Discriminating between these two evolutionary forces, based on a time series of samples from a population, remains an outstanding problem with increasing relevance to modern data sets. Even in the idealized situation when the sampled locus is independent of all other loci this problem is difficult to solve, especially when the size of the population from which the samples are drawn is unknown. A standard χ²-based likelihood ratio test was previously proposed to address this problem. Here we show that the χ² test of selection substantially underestimates the probability of Type I error, leading to more false positives than indicated by its P-value, especially at stringent P-values. We introduce two methods to correct this bias. The empirical likelihood ratio test (ELRT) rejects neutrality when the likelihood ratio statistic falls in the tail of the empirical distribution obtained under the most likely neutral population size. The frequency increment test (FIT) rejects neutrality if the distribution of normalized allele frequency increments exhibits a mean that deviates significantly from zero. We characterize the statistical power of these two tests for selection, and we apply them to three experimental data sets. We demonstrate that both ELRT and FIT have power to detect selection in practical parameter regimes, such as those encountered in microbial evolution experiments. Our analysis applies to a single diallelic locus, assumed independent of all other loci, which is most relevant to full-genome selection scans in sexual organisms, and also to evolution experiments in asexual organisms as long as clonal interference is weak. Different techniques will be required to detect selection in time series of co-segregating linked loci.
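The idea behind FIT can be illustrated with a minimal sketch: rescale each allele-frequency increment by the drift variance implied by the diffusion approximation of the Wright-Fisher model, then ask whether the rescaled increments have a mean significantly different from zero. The function name and the exact normalization below are assumptions for illustration, not the paper's definitive implementation.

```python
import numpy as np
from scipy import stats

def frequency_increment_test(freqs, times):
    """Sketch of a frequency increment test (FIT)-style statistic.

    Under neutral drift, allele-frequency increments rescaled by the
    drift variance (proportional to nu * (1 - nu) * dt, an assumed
    normalization) are approximately Gaussian with mean zero; directional
    selection shifts the mean of the rescaled increments away from zero.
    """
    freqs = np.asarray(freqs, dtype=float)
    times = np.asarray(times, dtype=float)
    dnu = np.diff(freqs)   # frequency increments between samples
    dt = np.diff(times)    # elapsed time between samples
    # Rescale each increment by the (assumed) drift standard deviation.
    y = dnu / np.sqrt(freqs[:-1] * (1.0 - freqs[:-1]) * dt)
    # One-sample t-test: does the mean rescaled increment deviate from 0?
    t_stat, p_value = stats.ttest_1samp(y, 0.0)
    return t_stat, p_value

# Example: a steadily rising allele frequency, suggestive of selection.
t, p = frequency_increment_test([0.10, 0.18, 0.30, 0.45, 0.60],
                                [0, 10, 20, 30, 40])
```

A steadily increasing trajectory like this yields uniformly positive rescaled increments and hence a small P-value, whereas a trajectory fluctuating around a constant frequency would not.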