From as early as the 1930s, astronomers have tried to quantify the statistical nature of the evolution and large-scale structure of galaxies by studying their luminosity distribution as a function of redshift, known as the galaxy luminosity function (LF). Accurately constructing the LF remains a popular yet tricky pursuit in modern observational cosmology, where observational selection effects, due to e.g. detection thresholds in apparent magnitude, colour, surface brightness, or some combination thereof, can render any given galaxy survey incomplete and thus introduce bias into the LF. Over the last 70 years, numerous sophisticated statistical approaches have been devised to tackle these issues; all have advantages, but none is perfect. This review takes a broad historical look at the key statistical tools developed over this period, discussing their relative merits and highlighting any significant extensions and modifications. In addition, the more generalised methods that have emerged within the last few years are examined. These methods propose a more rigorous statistical framework within which to determine the LF than some of the more traditional methods allow. I also look at how photometric redshift estimates are being incorporated into the LF methodology, as well as considering the construction of bivariate LFs. Finally, I review the ongoing development of completeness estimators, which test some of the fundamental assumptions underlying LF estimators and can be powerful probes of any residual systematic effects inherent in magnitude-redshift data.