Three insightful papers in this issue (1-3) highlight different types of biases and errors in obesity research and related fields of investigation. George et al. (1) review a wide variety of errors and biases in the misuse of statistical methods, misconceptions in scientific inference, improper or inadequate consideration of multiplicity, and suboptimal or selective reporting. Johns et al. (2) provide a metaepidemiological assessment of data from the control groups of 29 randomized trials of obesity and show that participants in inactive control groups spuriously seem to lose weight after 12 months, an extra reason why non-controlled studies should be trusted less. Fontaine et al. (3) discuss the subtleties of placebo effects and how placebo-related factors may cause an effect even among people who know that they are receiving a placebo.

Errors and biases are by no means unique to obesity research. They pervade all fields of scientific investigation (4). However, obesity and related fields, in particular nutrition research, have received an extra share of attention on these issues.
There are probably many reasons for this, including the unquestionable importance of the subject matter; the extra interest in this topic (especially the "bad news") by mass media (5); the refutation of scores of epidemiological associations, especially in nutrition research (6); the large volume of papers published; the persistence of some myths based on no evidence (7,8), which distort further investigation by offering a wrong starting point; the routine use of measurement tools with high error rates and serious biases (e.g., nutrition questionnaires based on self-reporting) (9); the relative lack of transparency (e.g., no prespecified protocols, registration, or data sharing); and the recalcitrance of some segments of this literature to adopt standard practices (e.g., proper adjustment for multiple comparisons and more stringent statistical thresholds) that might have saved several embarrassments.

Some fields, in observational research in particular, have seen the massive propagation of the practice of "salami slicing," where dozens of investigators co-author papers reporting practically a single association at a time and where each paper constitutes a tiny part of a much larger data-dredging agenda from the same data set "gold mine." In observational research, some data sets have already yielded hundreds of published papers (10) instead of the couple dozen that would have been more appropriate, e.g., if exposure-wide association approaches, which can evaluate dozens or hundreds of exposures within the same analysis, had been used (11,12). Salami slicing is also increasingly seen in some randomized trials of nutrition and obesity, where multiple publications of outcomes and analyses stem from the same trial. For example, a search in PubMed (December 17, 2015) with PREDIMED [ti] or "PREDIMED Study Investigators" retrieves 95 papers. Even though this is an excellent, pivotal randomized trial, one can question how much data dredging a single trial can tolerat...