Epidemiologists, like other scientists, make mistakes. In our search for causal links between environmental or genetic factors and diseases, we study associations as they occur in populations that provide access to data. We know that these associations have many explanations; most of them are not the causes we seek. Much attention (too much) has been given to random error and P-values in decision making, yet most errors stem from bias and confounding, not from insufficient sample size. Our findings are not true or false, but more or less plausible. Our objective should not be to accept or reject often rather meaningless null hypotheses.
Add to this the problem of publication bias. Results are published or not published partly as a function of the results we obtain. This censoring is driven by investigators, editors, peer reviewers and others. The problem is real and well documented.
To reduce these mistakes, attempts have been made to regulate research beyond what ethics committees already impose, such as standardizing reporting according to the so-called STROBE criteria, and now prior registration of observational study protocols, as is done for randomized trials.
Much is at stake for epidemiologists, and the IEA has been reluctant to support these restrictions. We do support more reasonable (that is, less) use of P-values. Much of the present misery is the result of inappropriate use of tests of significance.
What is at stake in the long run is whether epidemiology should remain an exploratory science, allowed to make mistakes, even many mistakes, in order to inspire new hypotheses or theories. We do not believe this is best achieved by setting up bureaucratic obstacles that will make reporting more tedious and limit the quick search for associations inspired by novel hypotheses that did not yet exist when the data were collected.
We should not accept reducing epidemiology to studies that only evaluate well-established hypotheses in large, expensive designs such as randomized trials. We should allow creative and spontaneously inspired use of data. We believe much of our knowledge stems from such studies.
We know this will increase our error rate, but reporting research results not as positive or negative, but rather in terms of how much they modify prior beliefs, will lead to more sober conclusions. We have to de-program readers and editors to realize that the interpretation of research results cannot be dichotomized into accepting or rejecting null hypotheses, or any other hypotheses for that matter.
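The idea of reporting how much a study modifies prior beliefs can be sketched as a simple Bayesian update: posterior odds equal prior odds multiplied by the ratio of how likely the data are under the hypothesis versus its alternative. The numbers below are invented purely for illustration, not taken from any study.

```python
# Hypothetical sketch: reporting a result as a shift in belief
# rather than a significant/non-significant dichotomy.
# All numbers are invented for the example.

def posterior_prob(prior_prob, bayes_factor):
    """Update a prior probability via a Bayes factor
    (likelihood of the data under H1 divided by that under H0)."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

# Suppose we judged a hypothesis 20% plausible before the study,
# and the data are 3 times more likely under it than under the null:
p = posterior_prob(0.20, 3.0)
print(round(p, 2))  # 0.43: belief strengthened, but far from certainty
```

Reported this way, the same data move a sceptic and an enthusiast to different posteriors, which is exactly the point: the study quantifies the shift, not a verdict.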
We understand there is a problem in that many epidemiologic studies reach the public media, where they are often over-interpreted or downplayed depending on whether they support or contradict the biases of journalists or editors.
We accept that some epidemiologic hypotheses address not only important public health problems, but also problems in which substantial economic interests are at stake. When this is the case, we agree that great care should be taken to make the study as large as possible and to design, report and interpret it as rigorously as possible, to the best of our knowledge. But if we require such a standard for all epidemiology, we take away a large part of the creative use of our discipline and will miss many opportunities to gain new information. We insist that epidemiologists have the right to propose hypotheses that did not come from any other source, or that were not necessarily conceived a priori.
The desire to make the route to publication more complicated is partly driven by industrial interests with no relation to public health, as well as by a serious concern about alarming the public unnecessarily. That concern is legitimate: unjustified limitation of the use of certain drugs or chemicals may not only harm the producer of the product, but also deprive people of a useful technology.
Restricting the use of significance testing, and instead documenting how much a given set of data modifies prior belief, would be a far more effective step towards sensible and responsible reporting of epidemiologic results than attempts to fit all studies into one 'size'.
We think that the editor of the Lancet struck a balanced tone in discussing whether large-scale observational studies should be registered in a public database, as is now done for randomized trials. But this will only demonstrate what we already know: these large data sources are frequently used for purposes that were not part of the original protocol. We encourage epidemiologists to look for existing data when they arrive at a new hypothesis. We do not want to bother the public unnecessarily if data already exist, nor draw upon limited research funds to collect redundant new data. Creative use of existing data is what we should encourage, not limit.
Epidemiologic research is at present heavily regulated, and history shows that some degree of regulation is necessary; but regulation also comes with a price tag. Small steps may seem innocent, but taken together they will reduce or eliminate studies that should be done, or should have been done much earlier. After all, ethical concerns in public health are more frequently related to the research that was not done than to the research we do.
We hope this discussion will continue when we meet at the IEA/WCE in Edinburgh in 2011.
Jorn Olsen, Neil Pearce, Cesar Victora, Shah Ebrahim
- Should protocols for observational research be registered? (editorial). Lancet 2010;375:348.
- Blair et al. Environmental Health Perspectives 2009;117:1809–1813.