Large-scale ecological research has grown in importance in recent decades, driven by increasing awareness of the global influence of human activities on the biosphere. Such research requires species observation data covering many years, large areas and a broad range of taxonomic groups. Because existing data sets often cover small areas and have been collected using varying methods, they can only be combined in a single analysis if they are made available at the same location and translated into a single format. Over the past decade, catalysed by the growth of the Internet, various technologies for data dissemination and data integration have been developed and applied in projects such as the Global Biodiversity Information Facility, the Knowledge Network for Biocomplexity, BioCASE and the British National Biodiversity Network (NBN). In the Netherlands, data are now made available through the National Database of Flora and Fauna (NDFF), which currently contains approximately 40 million observation records covering a broad variety of species. The NDFF uses a standardised, semantically integrated data model to combine species observation data of various kinds effectively. In this paper, we evaluate this approach and the NDFF data model by comparing them with Darwin Core, Access to Biological Collections Data (ABCD) and the Recorder 2000 model used by the NBN. We conclude that the high degree of standardisation in the NDFF data model has led to somewhat higher data conversion costs, but also to improved semantic integration and ease of use of species observation data. Together with the relative simplicity, completeness and flexibility of the model, this enables effective reuse of species observations in a user-friendly manner.