Sharing scientific analyses via workflows has great potential to improve both the reproducibility of science and the communication of research results. This is particularly useful for trans-disciplinary research fields such as biodiversity-ecosystem functioning (BEF), where syntheses need to merge data ranging from genes to the biosphere. Here we argue that enabling simplicity at the very beginning of workflows, at the point of data description and merging, offers great potential for reducing workflow complexity and for fostering data and workflow reuse. We illustrate our points using a typical analysis in BEF research, the aggregation of carbon pools in a forest ecosystem. We introduce indicators for the complexity of workflow components, including data sources. We show that workflow complexity decreases exponentially over the course of the analysis and that simple text-based measures help to identify bottlenecks in a workflow and to group workflow components by task. We therefore suggest that focusing on simplifying the steps of data aggregation and imputation will greatly improve workflow readability and thus reproducibility. Providing feedback to data providers about the complexity of their datasets may help to produce better-focused data that can be reused more easily in further studies. At the same time, providing feedback about the complexity of workflow components may encourage the exchange of shorter, simpler workflows that are easier to reuse. Additionally, identifying repetitive tasks can inform software development by pointing to tasks that merit automated solutions. We discuss current initiatives in software and script development that implement quality control for simplicity, as well as social tools for script evaluation. Taken together, we argue that focusing on simplifying data sources and workflow components will improve and accelerate data and workflow reuse, and will make data-driven science easier to reproduce.
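To make the idea of "simple text-based measures" of workflow complexity concrete, the following minimal Python sketch counts non-blank, non-comment lines and distinct identifiers per workflow script and ranks steps by size. The abstract does not prescribe a specific metric; the `workflow/*.py` layout, function names, and metric names here are illustrative assumptions only.

```python
# Minimal sketch of a text-based complexity indicator for workflow steps.
# Assumes each workflow step lives in its own script file (hypothetical layout).
import re
from pathlib import Path


def step_complexity(path: Path) -> dict:
    """Count non-blank, non-comment lines and distinct identifiers in a script."""
    code_lines = [
        line for line in path.read_text().splitlines()
        if line.strip() and not line.strip().startswith("#")
    ]
    identifiers: set[str] = set()
    for line in code_lines:
        identifiers.update(re.findall(r"[A-Za-z_]\w*", line))
    return {
        "step": path.name,
        "lines_of_code": len(code_lines),
        "distinct_identifiers": len(identifiers),
    }


if __name__ == "__main__":
    # Rank workflow steps by line count to surface potential bottlenecks,
    # e.g. data aggregation or imputation steps that dominate the workflow.
    steps = sorted(Path("workflow").glob("*.py"))  # hypothetical directory
    report = sorted(
        (step_complexity(p) for p in steps),
        key=lambda r: r["lines_of_code"],
        reverse=True,
    )
    for row in report:
        print(f"{row['step']}: {row['lines_of_code']} LOC, "
              f"{row['distinct_identifiers']} identifiers")
```

Under the assumptions above, steps with disproportionately high counts would be candidates for the simplification and feedback loops the abstract proposes.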